CN115937284A - Image generation method, device, storage medium and program product - Google Patents


Info

Publication number
CN115937284A
CN115937284A (Application No. CN202110915064.4A)
Authority
CN
China
Prior art keywords
rendering
image
motion
area
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110915064.4A
Other languages
Chinese (zh)
Inventor
邹俊峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Cloud Computing Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co Ltd filed Critical Huawei Cloud Computing Technologies Co Ltd
Priority to CN202110915064.4A priority Critical patent/CN115937284A/en
Publication of CN115937284A publication Critical patent/CN115937284A/en
Pending legal-status Critical Current


Abstract

An image generation method, apparatus, storage medium, and program product are presented. The method is applied to a terminal device, and the rendering system in which the terminal device is located further comprises a remote device. The method comprises the following steps: sending first state information to the remote device, wherein the first state information comprises first position information and first posture information of the terminal equipment where the terminal device is located; obtaining an intermediate rendering image according to a first rendering image corresponding to the first state information, indication information of a motion area in the first rendering image, texture information of the motion area, and a motion trend of the motion area; obtaining second depth information according to first depth information of the first rendering image and the motion trend of the motion area; and rendering according to the intermediate rendering image, the second depth information, and second state information to obtain a second rendering image, wherein the occurrence time of the second state information is later than that of the first state information.

Description

Image generation method, device, storage medium and program product
Technical Field
The present application relates to the field of image rendering, and in particular, to an image generation method, an apparatus, a storage medium, and a program product.
Background
Rendering refers to the process of generating an image from a three-dimensional model with software, where the three-dimensional model is a description of a three-dimensional object in a strictly defined language or data structure and includes geometry, viewpoint, texture, and lighting information. The image is a digital image or a bitmap image.
However, completing cooperative rendering between a remote computing node and a terminal device requires a large amount of data to be transmitted between them, which occupies a large amount of transmission resources.
Disclosure of Invention
In order to solve the above problem, the present application provides an image generation method, an apparatus, a storage medium, and a program product, which can reduce the waste of transmission resources while enabling the predicted image to present a more realistic effect.
In a first aspect, an image generation method is provided, which is applied to a terminal device, where a rendering system in which the terminal device is located further includes a remote device, and the method includes:
sending first state information to the remote device, wherein the first state information comprises first position information and first posture information of terminal equipment where the terminal device is located;
obtaining an intermediate rendering image according to a first rendering image corresponding to the first state information, indication information of a motion area in the first rendering image, texture information of the motion area and a motion trend of the motion area;
obtaining second depth information according to the first depth information of the first rendering image and the motion trend of the motion area;
rendering is carried out according to the intermediate rendering image, the second depth information and second state information to obtain a second rendering image, wherein the occurrence time of the second state information is later than that of the first state information.
In this scheme, the second rendering image under the second state information can be predicted from the first rendering image under the first state information, which reduces the waste of transmission resources while the predicted image presents a more realistic effect.
In some possible designs, the first rendered image and the second rendered image are displayed.
In the above scheme, the remote device only transmits the first rendering image, but the terminal device can display the first rendering image and the second rendering image, so that waste of transmission resources can be effectively reduced.
In some possible designs, the rendering according to the intermediate rendering image, the second depth information, and the second state information, and obtaining a second rendering image includes:
obtaining three-dimensional point cloud data according to the intermediate rendering image and the second depth information; rendering according to the three-dimensional point cloud data and the second state information to obtain a second rendering image.
In the scheme, the three-dimensional point cloud data is obtained according to the intermediate rendering image, and then the second rendering image is generated according to the second state information and the three-dimensional point cloud data, so that the predicted second rendering image can be ensured to be more vivid by considering both the motion trend factor and the state information change factor.
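For illustration only, the following sketch shows one way the back-projection and re-projection described above could be realized, assuming a simple pinhole camera model; the intrinsic matrix K, the 4x4 pose derived from the second state information, and all function names are assumptions of the example rather than anything defined by this application.

```python
# Minimal sketch, assuming a pinhole camera: lift the intermediate rendered image
# into a colored point cloud using the second depth information, then project it
# again under the pose given by the second state information.
import numpy as np

def unproject_to_pointcloud(color, depth, K):
    """Lift every pixel of the intermediate rendered image into a colored 3D point."""
    h, w = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)   # N x 3 positions
    colors = color.reshape(-1, color.shape[-1])            # N x C colors
    return points, colors

def render_from_pose(points, colors, K, pose_world_to_cam, image_size):
    """Project the point cloud under the second state information (a 4x4 pose)."""
    h, w = image_size
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    cam = (pose_world_to_cam @ pts_h.T).T[:, :3]
    valid = cam[:, 2] > 1e-6
    uv = (K @ cam[valid].T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    image = np.zeros((h, w, colors.shape[-1]), dtype=colors.dtype)
    zbuf = np.full((h, w), np.inf)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi, ci in zip(u[inside], v[inside], cam[valid][inside, 2], colors[valid][inside]):
        if zi < zbuf[vi, ui]:            # keep the nearest point per pixel
            zbuf[vi, ui] = zi
            image[vi, ui] = ci
    return image
```

A per-pixel depth test keeps the nearest point when several points project to the same pixel, which is why the second depth information is needed for the re-projection step.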
In some possible designs, the obtaining an intermediate rendering image according to the first rendering image corresponding to the first state information, the indication information of the motion region in the first rendering image, the texture information of the motion region, and the motion trend of the motion region includes: determining a predicted position of a motion area according to indication information of the motion area in the first rendering image and the motion trend of the motion area; and obtaining the intermediate rendering image according to the first rendering image, the texture information of the motion area and the predicted position of the motion area.
In some possible designs, before obtaining an intermediate rendered image according to a first rendered image corresponding to the first state information, indication information of a motion region in the first rendered image, texture information of the motion region, and a motion trend of the motion region, the method further includes:
receiving a first rendering image corresponding to the first state information, first depth information of the first rendering image, indication information of a motion area in the first rendering image, texture information of the motion area, and a motion trend of the motion area, which are sent by the remote device.
In some possible designs, the obtaining an intermediate rendering image according to the first rendering image corresponding to the first state information, the indication information of the motion region in the first rendering image, the texture information of the motion region, and the motion trend of the motion region includes:
dividing a first rendering image into a dynamic rendering area and a static rendering area according to the first rendering image corresponding to the first state information and indication information of a motion area in the first rendering image, wherein the dynamic rendering area comprises the motion area;
obtaining the intermediate rendering image according to the dynamic rendering area, the indication information of the motion area in the dynamic rendering area, the texture information of the motion area and the motion trend of the motion area;
obtaining second depth information according to the depth information of the dynamic rendering area and the motion trend of the motion area;
obtaining a rendering result of the static rendering area according to the second state information, the texture information of the static rendering area and the depth information of the static rendering area;
rendering according to the intermediate rendering image, the second depth information and the second state information to obtain a rendering result of the dynamic rendering area;
and obtaining the second rendering image according to the rendering result of the static rendering area and the rendering result of the dynamic rendering area.
In the scheme, the first rendering image is divided into the dynamic rendering area and the static rendering area, and only the dynamic rendering area needs to be predicted, so that the calculation amount can be effectively reduced.
In some possible designs, after obtaining the second rendered image, the method further comprises: and filling the blank of the second rendering image.
In a second aspect, an image generating method is provided, which is applied to a remote device, where a rendering system in which the remote device is located further includes a terminal device, and the method includes:
rendering according to the first state information to obtain a first rendering image;
determining a motion area in the first rendering image and a motion trend of the motion area according to the first rendering image and a historical rendering image, wherein the historical rendering image is obtained earlier than the first rendering image, and the first rendering image and the historical rendering image contain the same target;
and sending the first rendering image, first depth information of the first rendering image, indication information of a motion area in the first rendering image, texture information of the motion area and a motion trend of the motion area to the terminal equipment, wherein the indication information of the motion area in the first rendering image is used for indicating the position of the motion area in the first rendering image.
In some possible designs, the method further comprises: storing the first rendered image.
In some possible designs, the method further comprises: dividing the first rendering image into a dynamic rendering area and a static rendering area according to indication information of a motion area in the first rendering image, wherein the dynamic rendering area comprises the motion area; and sending dynamic rendering region indication information to the terminal device, wherein the dynamic region indication information is used for indicating the position of the dynamic rendering region in the first rendering image.
In a third aspect, a terminal device is provided, where a rendering system in which the terminal device is located further includes a far-end device, and the terminal device includes:
the communication module is used for sending first state information to the remote device, wherein the first state information comprises first position information and first posture information of terminal equipment where the terminal device is located;
the rendering module is used for obtaining an intermediate rendering image according to a first rendering image corresponding to the first state information, indication information of a motion area in the first rendering image, texture information of the motion area and a motion trend of the motion area; obtaining second depth information according to the first depth information of the first rendering image and the motion trend of the motion area; rendering according to the intermediate rendering image, the second depth information and second state information to obtain a second rendering image, wherein the occurrence time of the second state information is later than that of the first state information.
In some possible designs, the terminal device further includes: a display module to display the first rendered image and the second rendered image.
In some possible designs, the rendering module is configured to obtain three-dimensional point cloud data according to the intermediate rendering image and the second depth information; rendering according to the three-dimensional point cloud data and the second state information to obtain a second rendering image.
In some possible designs, the rendering module is configured to determine a predicted position of the motion region according to indication information of the motion region in the first rendered image and a motion trend of the motion region; and obtaining the intermediate rendering image according to the first rendering image, the texture information of the motion area and the predicted position of the motion area.
In some possible designs, the communication module is configured to receive, from the remote apparatus, a first rendered image corresponding to the first state information, first depth information of the first rendered image, indication information of a motion area in the first rendered image, texture information of the motion area, and a motion trend of the motion area.
In some possible designs, the rendering module is configured to divide the first rendered image into a dynamic rendering area and a static rendering area according to a first rendered image corresponding to the first state information and indication information of a motion area in the first rendered image, where the dynamic rendering area includes the motion area; obtaining the intermediate rendering image according to the dynamic rendering area, the indication information of the motion area in the dynamic rendering area, the texture information of the motion area and the motion trend of the motion area; obtaining second depth information according to the depth information of the dynamic rendering area and the motion trend of the motion area; obtaining a rendering result of the static rendering area according to the second state information, the texture information of the static rendering area and the depth information of the static rendering area; rendering according to the intermediate rendering image, the second depth information and the second state information to obtain a rendering result of the dynamic rendering area; and obtaining the second rendering image according to the rendering result of the static rendering area and the rendering result of the dynamic rendering area.
In a fourth aspect, a remote device is provided, where a rendering system in which the remote device is located further includes a terminal device, and the remote device includes:
the rendering module is used for rendering according to the first state information to obtain a first rendering image; determining a motion area in the first rendering image and a motion trend of the motion area according to the first rendering image and a historical rendering image, wherein the historical rendering image is obtained earlier than the first rendering image, and the first rendering image and the historical rendering image contain the same target;
the communication module is used for sending the first rendering image, the first depth information of the first rendering image, the indication information of the motion area in the first rendering image, the texture information of the motion area and the motion trend of the motion area to the terminal equipment, wherein the indication information of the motion area in the first rendering image is used for indicating the position of the motion area in the first rendering image.
In some possible designs, the rendering module is further configured to store the first rendered image.
In some possible designs, the rendering module is further configured to divide the first rendered image into a dynamic rendering area and a static rendering area according to indication information of a motion area in the first rendered image, where the dynamic rendering area includes the motion area; the communication module is configured to send dynamic rendering region indication information to the terminal device, where the dynamic region indication information is used to indicate a position of the dynamic rendering region in the first rendered image.
In a fifth aspect, a rendering system is provided, the rendering system comprising the terminal device according to any one of the third aspect and the remote device according to any one of the fourth aspect.
In a sixth aspect, a terminal device is provided, comprising a processor and a memory, the processor being configured to execute instructions stored in the memory to perform the following method:
sending first state information to the remote device, wherein the first state information comprises first position information and first posture information of terminal equipment where the terminal device is located;
and obtaining a second rendering image according to second state information and a first rendering image corresponding to the first state information, first depth information of the first rendering image, indication information of a motion area in the first rendering image, texture information of the motion area and a motion trend of the motion area, wherein the occurrence time of the second state information is later than that of the first state information.
In some possible designs, the terminal device further includes a display for displaying the first rendered image and the second rendered image.
In some possible designs, the processor is further to:
obtaining three-dimensional point cloud data according to the intermediate rendering image and the second depth information;
rendering according to the three-dimensional point cloud data and the second state information to obtain a second rendering image.
In some possible designs, the processor is further to:
determining a predicted position of a motion area according to indication information of the motion area in the first rendering image and a motion trend of the motion area;
and obtaining the intermediate rendering image according to the first rendering image, the texture information of the motion area and the predicted position of the motion area.
In some possible designs, the terminal device further includes a transceiver, and the transceiver is configured to receive a first rendered image corresponding to the first state information, first depth information of the first rendered image, indication information of a motion region in the first rendered image, texture information of the motion region, and a motion trend of the motion region, which are sent by the remote apparatus.
In some possible designs, the processor is further to:
dividing a first rendering image into a dynamic rendering area and a static rendering area according to the first rendering image corresponding to the first state information and indication information of a motion area in the first rendering image, wherein the dynamic rendering area comprises the motion area;
obtaining the intermediate rendering image according to the dynamic rendering area, the indication information of the motion area in the dynamic rendering area, the texture information of the motion area and the motion trend of the motion area;
obtaining second depth information according to the depth information of the dynamic rendering area and the motion trend of the motion area;
obtaining a rendering result of the static rendering area according to the second state information, the texture information of the static rendering area and the depth information of the static rendering area;
rendering according to the intermediate rendering image, the second depth information and the second state information to obtain a rendering result of the dynamic rendering area;
and obtaining the second rendering image according to the rendering result of the static rendering area and the rendering result of the dynamic rendering area.
In a seventh aspect, a remote device is provided, which includes a processor and a memory, the processor being configured to execute instructions stored in the memory to perform the following method:
rendering according to first state information sent by terminal equipment where the terminal device is located to obtain a first rendering image;
determining a motion area in the first rendering image and a motion trend of the motion area according to the first rendering image and a historical rendering image, wherein the historical rendering image is obtained earlier than the first rendering image, and the first rendering image and the historical rendering image contain the same target;
and sending the first rendering image, first depth information of the first rendering image, indication information of a motion area in the first rendering image, texture information of the motion area and a motion trend of the motion area to the terminal equipment, wherein the indication information of the motion area in the first rendering image is used for indicating the position of the motion area in the first rendering image.
In some possible designs, the memory is further to store the first rendered image.
In some possible designs, the processor is further to: dividing the first rendering image into a dynamic rendering area and a static rendering area according to indication information of a motion area in the first rendering image, wherein the dynamic rendering area comprises the motion area;
and sending dynamic rendering region indication information to the terminal device, wherein the dynamic region indication information is used for indicating the position of the dynamic rendering region in the first rendering image.
In an eighth aspect, there is provided a computer readable storage medium comprising computer program instructions which, when executed by a cluster of computing devices, perform the method of any one of the first or second aspects.
In a ninth aspect, there is provided a computer program product comprising instructions which, when executed by a cluster of computer devices, cause the cluster of computer devices to perform the method of any one of the first or second aspects.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
FIG. 1 is a schematic block diagram of a rendering system provided herein;
FIG. 2 is a schematic flow chart diagram of an image generation method provided in the present application;
FIG. 3 is a schematic flow chart diagram of another image generation method provided herein;
FIG. 4 is a schematic flow chart diagram of a third image generation method provided by the present application;
FIG. 5 is a schematic diagram of a dynamic rendering region and a static rendering region in a first rendered image provided herein;
FIG. 6 is a schematic block diagram of a rendering system provided herein;
FIG. 7 is a schematic diagram of a computing device cluster provided herein;
FIG. 8 is a schematic diagram of a connection manner of computing devices in a computing device cluster provided in the present application;
FIG. 9 is a schematic diagram of a connection manner of computing devices in a computing device cluster provided in the present application;
fig. 10 is a schematic structural diagram of a terminal device provided in the present application.
Detailed Description
Referring to fig. 1, fig. 1 is a schematic structural diagram of a rendering system according to the present application. The rendering system may include: one or more terminal devices 10, network devices 20, and remote devices 30. The remote device 30 and the terminal device 10 are typically deployed in different geographical locations.
The rendering system in this embodiment is used for rendering the three-dimensional model of the target scene by the cooperation between the terminal device 10 and the remote device 30 to obtain a two-dimensional rendered image. That is, the remote device 30 completes part of the job of rendering the image, and the terminal device 10 completes part of the job of rendering the image, and the two cooperate together with the help of the network device 20 to be able to generate the rendered image.
The terminal device 10 may be an electronic device with data transceiving, data computing, and image displaying capabilities, such as various types of User Equipment (UE), a mobile phone (mobile phone), a tablet computer (pad), a desktop computer, and the like, may further include a machine smart device, such as a Virtual Reality (VR), an Augmented Reality (AR) terminal device, a self-driving (self-driving) device, a remote medical (remote medical) device, a smart city (smart city) device, and the like, and may further include a wearable device (such as smart glasses, a smart watch, a smart necklace, and the like), and the like. The terminal device may be an electronic device with high configuration and high performance (for example, multiple cores, high master frequency, large memory, etc.), or an electronic device with low configuration and low performance (for example, single core, low master frequency, small memory, etc.). In some scenarios, the names of electronic devices with similar data transceiving, data computing, and image displaying capabilities may not be referred to as terminal devices, for example, terminals, terminal devices, thin terminals, and the like, but for convenience of description, the electronic devices with data transceiving, data computing, and image displaying capabilities are collectively referred to as terminal devices in the embodiments of the present application. A terminal device including a communication module and a rendering module may be provided in the terminal apparatus 10. In a particular embodiment, the communication module and the rendering module may be implemented by program code.
In a particular embodiment, the terminal device may be a virtual reality (VR) device. VR, which may also be referred to as immersive multimedia or computer-simulated life, can, at least in some cases, replicate or simulate to varying degrees an environment, or the presence of entities in locations, in the real world or in an imaginary world or environment. In an example implementation, a VR system may include a VR headset or head-mounted display (HMD) device that may be mounted or worn on a user's head.
The VR device may generate a three-dimensional immersive virtual environment. Users can experience such 3D immersive virtual environments through interaction with various electronic devices. For example, a helmet or other head-mounted device that includes a display, glasses or goggles that a user looks through when viewing the display device may provide audio and visual elements of the 3D immersive virtual environment that the user is to experience. Controllers (e.g., external handheld devices, sensor-equipped gloves, and other such electronic devices) may be paired with the head-mounted device, allowing a user to move through and interact with elements in the virtual environment by manipulating the controller.
A user wearing a first electronic device (e.g., a head-mounted display (HMD) device/VR headset) and immersed in a 3D virtual environment may explore the 3D virtual environment and interact with it through various different types of inputs. These inputs may include, for example, physical interactions such as manipulation of a second electronic device (such as a VR controller) separate from the VR headset, manipulation of the VR headset itself, hand and arm gestures, head movement, and directional gaze of the head and eyes, among others. The first and second electronic devices (e.g., a VR headset and a VR controller) may be operatively coupled or paired to facilitate communication and data exchange between them.
A tracking device may track state information of the VR headset or of the user's head. The state information may include, for example, one or more of a location (also referred to as a position) and an orientation of any object (physical or virtual), such as a VR controller. The state information may include, for example, one or more of absolute and relative locations and positions in the physical world and orientations of objects, or one or more of locations and positions within the virtual world and orientations of objects (e.g., virtual objects or rendered elements). The VR controller or any object may have six degrees of freedom (6DoF) of movement in three-dimensional space. Specifically, the controller or object is free to change position through forward/backward (surge), up/down (heave), and left/right (sway) translation along three perpendicular axes, combined with changes in orientation through rotation about three perpendicular axes, commonly referred to as pitch, yaw, and roll.
Thus, according to an example embodiment, the position (or location) of an object in three-dimensional space may be defined by its position in three orthogonal (or perpendicular) axes (e.g., X, Y, Z axes). Thus, the position (or location) of the object may be identified by 3D coordinates (e.g., X, Y, and Z coordinates). Furthermore, the object may be rotated about three orthogonal axes. These rotations may be referred to as pitch, yaw, and roll. Pitch, yaw, and roll rotations may define the orientation of any object. Thus, for example, the orientation of an object may include or may refer to a direction in which the object is pointed or oriented.
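Purely as an illustration of how a position on three orthogonal axes and a pitch/yaw/roll orientation can be combined into a single pose, the sketch below builds a 4x4 pose matrix; the axis and rotation-order conventions are assumptions of the example, not something fixed by this application.

```python
# Illustrative sketch only: combine X/Y/Z position and pitch/yaw/roll into a pose.
import numpy as np

def pose_matrix(x, y, z, pitch, yaw, roll):
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cr, sr = np.cos(roll), np.sin(roll)
    rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about X
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw about Y
    rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll about Z
    pose = np.eye(4)
    pose[:3, :3] = rz @ ry @ rx        # orientation from the three rotations
    pose[:3, 3] = [x, y, z]            # position on the three orthogonal axes
    return pose
```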
According to an example embodiment, the head state information may include information indicative of (or identifying) a head pose, and may include at least one (or may alternatively include both) of location information indicative of a location of the user's head or a virtual reality headset worn by the user, and orientation information indicative of an orientation of the user's head or virtual reality headset. For example, head pose information (e.g., including position information and/or orientation information of the user's head or a virtual reality headset worn by the user) may be received from a tracking device (e.g., which may be part of a VR headset) that tracks the pose of the virtual reality headset worn by the user.
Similarly, the object pose information may include information indicative of (recognizing) a pose of the object, for example, including at least one (or alternatively both) of position information indicative of a position (or location) of the object and orientation information indicative of an orientation of the object. For example, the object may comprise a physical object, such as a VR controller operated by a user. In at least some cases, object pose information may be received from a tracking device that tracks the pose (position and/or orientation) of an object.
The objects may also include virtual objects (e.g., a sword or light sword or animal) that are rendered on a display of the HMD or VR headset and then displayed in the virtual world. For example, a virtual object (motion and/or pose of the virtual object) may be controlled by a VR controller or other physical object. For example, a VR application (or other application or program) may use object pose information (e.g., from a VR controller or other physical object) to change the pose (e.g., position and/or orientation) of one or more virtual objects (or rendering elements) in an image or frame displayed in a virtual world.
Additionally, the motion and/or pose of a virtual object (e.g., a spacecraft, an animal, an elevated spacecraft, or a rocket) may be controlled by the VR application. Further, the VR application may or may not receive pose information for the virtual object from a VR controller or other physical object. Thus, for example, where pose information is received from a VR controller or other physical object (e.g., a VR controller in a physical world for controlling motion or pose of a sword or saber in a virtual world) associated with (e.g., controlling motion and/or pose of) a virtual object, the VR application and pose information of the VR controller or object may be used (e.g., by a VR application) to control the motion and/or pose of the virtual object.
The network device 20 is used to transmit data between the terminal device 10 and the remote device 30 via a communication network of any communication mechanism/communication standard. The communication network may be a wide area network, a local area network, a point-to-point connection, etc., or any combination thereof.
The remote device 30 may be a computing device with data transceiving and data computing capabilities, such as a server, desktop computer, tablet computer, laptop computer, personal computer, and the like. The remote devices may be arranged individually or in a cluster. When the remote device 30 employs a cluster arrangement, the remote device 30 may be disposed on a cloud platform (e.g., public cloud platform, private cloud platform, hybrid cloud platform, and so forth), an edge computing platform, a data center, and so forth. The remote device 30 may include rendering hardware, virtualization services, and remote appliances. Wherein the rendering hardware includes computing resources, storage resources, and network resources. The computing resource may adopt a heterogeneous computing architecture, for example, a Central Processing Unit (CPU) + Graphics Processing Unit (GPU) architecture, a CPU + AI chip, a CPU + GPU + AI chip architecture, and the like, which are not limited herein. The storage resources may include memory, video memory, and other storage devices. The network resources may include network cards, port resources, address resources, and the like. The virtualization service is a service which virtualizes resources of rendering nodes into self-resources such as vCPUs through a virtualization technology, and flexibly isolates mutually independent resources according to the needs of users to run application programs of the users. Generally, the virtualization service may include a Virtual Machine (VM) service and a container (container) service, and the VM and the container may be provided with a remote device including a communication module and a rendering module. In a particular embodiment, the communication module and the rendering module may be implemented by program code.
The terminal device and the remote device both run the same application. Common applications may include game applications, VR applications, movie special effects, animation, and the like. In a specific embodiment, the terminal device and the remote device may be provided by an application provider. For example, the application may be a game application: the game developer installs the remote device on the remote device 30 provided by a cloud service provider and provides the terminal device to users over the internet, and a user downloads the terminal device and installs it on the user's terminal equipment.
Referring to fig. 2, fig. 2 is a schematic flow chart of a first image generation method provided in the present application. As shown in fig. 2, the image generation method according to the present embodiment includes:
S101: the terminal device respectively acquires a plurality of pieces of state information S1, S2, …, St.
In a particular embodiment, each piece of state information includes position information and posture information. The position information is used for indicating the position of the terminal equipment where the terminal device is located, and the posture information is used for indicating the posture of that terminal equipment. Specifically, the state information S1 includes position information L1 and posture information P1; the state information S2 includes position information L2 and posture information P2; …; the state information St includes position information Lt and posture information Pt. The occurrence time of the state information S1 is earlier than that of the state information S2, the occurrence time of the state information S2 is earlier than that of the state information S3, …, and the occurrence time of the state information St-1 is earlier than that of the state information St.
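As an illustration only, one hypothetical in-memory layout for a single piece of state information St = (Lt, Pt) could look as follows; the field names and the timestamp field are assumptions of the example and are not defined by this application.

```python
# Hypothetical layout of one piece of state information St = (Lt, Pt).
from dataclasses import dataclass

@dataclass
class StateInfo:
    timestamp: float                      # occurrence time, used to order S1 ... St
    position: tuple[float, float, float]  # Lt: position of the terminal equipment
    posture: tuple[float, float, float]   # Pt: posture (e.g. pitch, yaw, roll)
```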
S102: the terminal device sends the plurality of pieces of state information S1, S2, …, St to the remote device in turn. Accordingly, the remote device receives the plurality of pieces of state information S1, S2, …, St sent by the terminal device.
S103: the remote device generates a plurality of rendered images I1, I2, …, It according to the plurality of pieces of state information S1, S2, …, St.
In a specific embodiment, the plurality of rendered images I1, I2, …, It may be two-dimensional images obtained by rendering the target scene according to the plurality of pieces of state information S1, S2, …, St, respectively. There is a one-to-one correspondence between the plurality of pieces of state information and the plurality of rendered images. Specifically, the rendered image I1 is a two-dimensional image obtained by rendering the target scene according to the state information S1; the rendered image I2 is a two-dimensional image obtained by rendering the target scene according to the state information S2; …; the rendered image It is a two-dimensional image obtained by rendering the target scene according to the state information St. The target scene includes a three-dimensional model and a light source, where the light source is used for illuminating the three-dimensional model, and the three-dimensional model may include one or more moving objects. Each of the plurality of rendered images I1, I2, …, It includes a motion region and a background region. The motion region is obtained by rendering a moving object in the target scene, and the background region is the region other than the motion region.
In a more specific embodiment, the plurality of rendered images I1, I2, …, It may be a set of images obtained by continuously rendering a target scene that includes a moving object. For example, suppose a moving object in the target scene falls from mid-air to the ground: a frame of image I1 is rendered when the moving object has fallen 10 centimeters, a frame of image I2 is rendered when the moving object has fallen 20 centimeters, …, a frame of image It is rendered when the moving object has fallen 70 centimeters, and so on. The plurality of rendered images I1, I2, …, It are typically correlated images, that is, the similarity between the plurality of rendered images I1, I2, …, It is typically greater than a threshold, where the similarity between the plurality of rendered images may be measured by cosine similarity, Minkowski distance, Manhattan distance, Euclidean distance, Chebyshev distance, Hamming distance, the Jaccard similarity coefficient, and the like.
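For illustration, the sketch below applies one of the listed measures (cosine similarity) to two consecutive rendered frames; the threshold value is purely illustrative and not prescribed by this application.

```python
# Minimal sketch: cosine similarity between two rendered frames of equal shape.
import numpy as np

def cosine_similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def frames_correlated(img_a, img_b, threshold=0.95):
    # Consecutive rendered images of the same scene are expected to exceed the threshold.
    return cosine_similarity(img_a, img_b) > threshold
```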
S104: the remote device generates first depth information of the first rendering image, indication information of a motion area in the first rendering image, texture information of the motion area, and a motion trend of the motion area.
In a particular embodiment, the first rendered image may be the rendered image It, and in this case the history images of the first rendered image may be some or all of the rendered images I1 to It-1. It will be appreciated that, in other embodiments, the first rendered image may instead be the rendered image It-1, the rendered image It-2, and the like, which is not specifically limited herein. Hereinafter, the description takes the first rendered image as the rendered image It as an example.
In a specific embodiment, the first depth information of the first rendered image is used to represent the depth corresponding to each pixel point in the first rendered image. The first depth information is a two-dimensional matrix, and the size of the matrix of the first depth information is the same as the size of the matrix of the first rendered image. Taking the first depth information as the matrix A below and the first rendered image as the matrix B below as an example,
A = [a11 a12 … a1m; a21 a22 … a2m; …; an1 an2 … anm],  B = [b11 b12 … b1m; b21 b22 … b2m; …; bn1 bn2 … bnm]
the element a11 in the first depth information represents the depth of the pixel point b11 in the first rendered image, the element a12 represents the depth of the pixel point b12, …, the element a1m represents the depth of the pixel point b1m, the element a21 represents the depth of the pixel point b21, the element a22 represents the depth of the pixel point b22, …, the element a2m represents the depth of the pixel point b2m, …, the element an1 represents the depth of the pixel point bn1, the element an2 represents the depth of the pixel point bn2, …, and the element anm represents the depth of the pixel point bnm in the first rendered image.
In a particular embodiment, the first depth information of the first rendered image may be obtained by the remote device based on the first state information and the target scene. It will be appreciated that when the first rendered image is the rendered image It, the first state information is the state information St.
In a specific embodiment, the indication information of the motion region in the first rendered image may be position coordinates of all pixel points in the motion region in the first rendered image, or position coordinates of pixel points at a boundary in the motion region in the first rendered image, or the like.
In a specific embodiment, the indication information of the motion region in the first rendered image may be extracted in the following manner: the remote device analyzes the rendered images It-1 and It to determine the position coordinates of the center point of the motion region in the first rendered image; determines the position coordinates of the center point of the motion region in the first depth information according to the position coordinates of the center point of the motion region in the first rendered image; determines the position coordinates of the motion region in the first depth information according to the position coordinates of the center point of the motion region in the first depth information; determines the position coordinates of the motion region in the first rendered image according to the position coordinates of the motion region in the first depth information; and takes the position coordinates of the motion region in the first rendered image as the indication information of the motion region in the first rendered image.
In a more specific embodiment, the remote device analyzes the rendered images It-1 and It to determine the position coordinates of the center point of the motion region in the first rendered image, which specifically includes: the remote device analyzes the rendered images It-1 and It to obtain the position coordinates of the pixel points belonging to the motion region in the first rendered image. Since the amount of movement of the pixel points of the background portion in the first rendered image differs from the amount of movement of the pixel points of the motion region, the remote device determines the amount of movement of each pixel point in the first rendered image by analyzing the rendered images It-1 and It with an optical flow analysis method, and the pixel points belonging to the motion region can be roughly extracted from the first rendered image by analyzing the difference between these two amounts of movement. However, a large number of noisy pixel points exist in the extracted motion region, that is, some pixel points that should not belong to the motion region in the first rendered image are extracted, and/or some pixel points that should belong to the motion region are not extracted. The remote device averages the positions of the extracted pixel points to obtain the position coordinates of the center point of the motion region in the first rendered image. It can be understood that some of the extracted pixel points belong to the motion region in the first rendered image while others are extracted incorrectly and do not belong to it, so it is difficult for the remote device to determine which individual pixel points belong to the motion region and which do not. Therefore, the center point of the motion region is obtained by averaging the positions of the extracted pixel points, and this center point can be regarded as a pixel point that actually belongs to the motion region in the first rendered image. It can also be understood that, besides the center point itself, a pixel point obtained by slightly translating or rotating the position coordinates of the center point is often also a pixel point that actually belongs to the motion region. For example, the pixel point obtained by moving the position coordinates of the center point of the motion region three pixels to the right is often a pixel point that actually belongs to the motion region in the first rendered image, which is not specifically limited herein.
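As an illustrative sketch of the optical flow step described above, the following uses Farneback dense optical flow as one possible optical flow analysis method and averages the candidate pixel positions to obtain the center point; the magnitude threshold is an assumption and would in practice depend on the background motion.

```python
# Rough sketch: extract noisy motion-region candidates by flow magnitude and
# average their positions to obtain the center point of the motion region.
import cv2
import numpy as np

def motion_region_center(frame_prev, frame_curr, mag_threshold=1.0):
    prev_gray = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(frame_curr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    # Pixels whose movement differs strongly from the background are treated as
    # (noisy) candidates for the motion region.
    candidates = magnitude > mag_threshold
    ys, xs = np.nonzero(candidates)
    if len(xs) == 0:
        return None
    # Averaging the candidate positions yields the center point of the motion region.
    return int(xs.mean()), int(ys.mean())
```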
In a more specific embodiment, the remote apparatus determines the position coordinates of the center point of the motion region in the first depth information from the position coordinates of the center point of the motion region in the first rendered image. In general, the position coordinates of the center point of the motion region in the first rendered image and the position coordinates of the center point of the motion region in the first depth information are the same. Therefore, the remote apparatus may determine the position coordinates of the center point of the motion region in the first depth information from the center point of the motion region in the first rendered image.
In a more specific embodiment, the determining, by the remote device, the position coordinate of the motion region in the first depth information according to the position coordinate of the central point of the motion region in the first depth information specifically includes:
first, the remote device performs edge detection on the first depth information to obtain a first contour curve in the first depth information. It will be appreciated that the depth of the background portion and the depth of the motion region are typically different, and therefore the first contour curve extracted in the first depth information by edge detection comprises the contour curve of the motion region. However, there may be a background portion whose profile is also extracted erroneously, causing interference. Therefore, it is necessary to accurately extract the motion region in the first depth information from the first depth information by excluding the interference through the following connected component search.
Then, the remote device searches for the connected domain of the first contour curve by taking the center point of the motion region in the first depth information as a seed point, to obtain a first connected domain. The remote device may perform the connected domain search on the first contour curve through the following steps:
S1: set the stack to empty and push a given seed point (x1, y1) onto the stack;
S2: if the stack is empty, end the process; otherwise, take the stack top element (x2, y2) as the seed point (x1, y1);
S3: starting from the seed point (x1, y1), fill pixels with the boundary color value, pixel by pixel, along the current scan line of ordinate y1 in both the left and right directions, until a pixel whose color is equal to the color of the first contour curve is reached; let the abscissas of the left and right boundaries be xleft and xright;
S4: on the two scan lines adjacent to the current scan line (the one above and the one below), take the interval [xleft, xright] as the search range, obtain the minimal intervals that still need to be filled, take the rightmost point of each such interval as a seed point (x1, y1), push it onto the stack, and go to step S2.
Finally, the remote device extracts the motion region in the first depth information from the corresponding position coordinates in the first depth information according to the position coordinates of the first connected domain in the first depth information.
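For illustration only, steps S1 to S4 above can be sketched as follows, operating on a boolean mask in which True marks the pixels of the first contour curve; the helper names are assumptions of the example.

```python
# Sketch of the scan-line seed-fill steps S1-S4 on a boolean boundary mask.
import numpy as np

def scanline_fill(boundary: np.ndarray, seed: tuple[int, int]) -> np.ndarray:
    h, w = boundary.shape
    filled = np.zeros_like(boundary, dtype=bool)
    stack = [seed]                                   # S1: push the given seed point
    while stack:                                     # S2: stop when the stack is empty
        x, y = stack.pop()
        if boundary[y, x] or filled[y, x]:
            continue
        # S3: fill left and right along the current scan line until the contour is hit.
        x_left = x
        while x_left - 1 >= 0 and not boundary[y, x_left - 1] and not filled[y, x_left - 1]:
            x_left -= 1
        x_right = x
        while x_right + 1 < w and not boundary[y, x_right + 1] and not filled[y, x_right + 1]:
            x_right += 1
        filled[y, x_left:x_right + 1] = True
        # S4: on the two adjacent scan lines, push the rightmost point of each
        # unfilled run inside [x_left, x_right] as a new seed.
        for ny in (y - 1, y + 1):
            if 0 <= ny < h:
                in_run = False
                for nx in range(x_left, x_right + 1):
                    fillable = not boundary[ny, nx] and not filled[ny, nx]
                    if fillable:
                        in_run = True
                    if in_run and (not fillable or nx == x_right):
                        stack.append((nx if fillable else nx - 1, ny))
                        in_run = False
    return filled      # the first connected domain reachable from the seed
```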
In a more specific embodiment, determining the position coordinates of the motion region in the first rendered image according to the position coordinates of the motion region in the first depth information specifically includes: since the position coordinates of the motion region in the first rendered image and the position coordinates of the motion region in the first depth information are theoretically the same, when the position coordinates of the motion region in the first depth information are determined, the position coordinates of the motion region in the first rendered image are also determined accordingly.
In a specific embodiment, the texture information of the motion region in the first rendered image may be formed by pixel values of pixel points of the motion region. Texture information of the motion region in the first rendered image may be extracted from the position coordinates of the motion region in the first rendered image.
In a particular embodiment, the motion trend of the motion region in the first rendered image may include a direction of motion and a speed of motion. The motion trend of the motion region in the first rendered image may be calculated as follows: through an optical flow analysis method, the remote device can determine the movement amount 1 (including the magnitude and direction of the velocity) by which the pixel points representing the motion region in the rendered image I1 move to the next-frame rendered image I2; through optical flow analysis, it can determine the movement amount 2 by which the pixel points representing the same motion region in the rendered image I2 move to the next-frame rendered image I3; …; through optical flow analysis, it can determine the movement amount t-1 by which the pixel points representing the same motion region in the rendered image It-1 move to the next-frame rendered image It. The remote device then determines the motion trend of the motion region according to the movement amount 1, the movement amount 2, …, and the movement amount t-1.
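As a simple illustration, the movement amounts 1 to t-1 could be combined into a single motion trend (direction and speed) as sketched below; plain averaging is an assumption of the example, and a real implementation might instead fit a higher-order motion model.

```python
# Sketch: turn per-frame displacements into one motion trend (direction, speed).
import numpy as np

def motion_trend(movement_amounts):
    """movement_amounts: list of (dx, dy) displacements between consecutive frames."""
    mean_vec = np.mean(np.asarray(movement_amounts, dtype=float), axis=0)
    speed = float(np.linalg.norm(mean_vec))                   # pixels per frame
    direction = float(np.arctan2(mean_vec[1], mean_vec[0]))   # radians
    return mean_vec, speed, direction
```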
S105: and the remote device sends the first rendering image, the first depth information, the indication information of the motion area in the first rendering image, the texture information of the motion area and the motion trend of the motion area to the terminal device. Accordingly, the terminal device receives the first rendering image, the first depth information, the indication information of the motion area in the first rendering image, the texture information of the motion area and the motion trend of the motion area, which are sent by the far-end device.
S106: the terminal device determines an intermediate rendering image according to the first rendering image, indication information of a motion area in the first rendering image, texture information of the motion area and a motion trend of the motion area.
In a specific embodiment, the predicted position of the motion region is determined according to the indication information of the motion region in the first rendered image and the motion trend of the motion region; and the intermediate rendered image is obtained according to the first rendered image, the texture information of the motion region, and the predicted position of the motion region. Specifically:
assuming that the motion trend of the motion region in the first rendered image is moving three pixels to the left, the terminal device may determine, according to the indication information of the motion region in the first rendered image and the motion trend of the motion region in the first rendered image, that the predicted position of the motion region is moving three pixels to the left, and then move the position of the texture information of the motion region in the first rendered image to the left by three pixels to the predicted position of the motion region, thereby obtaining an intermediate rendered image;
assuming that the motion trend of the motion region in the first rendered image is moving three pixels to the right, the terminal apparatus may determine the predicted position of the motion region as moving three pixels to the right according to the indication information of the motion region in the first rendered image and the motion trend of the motion region in the first rendered image, and then move the position of the texture information of the motion region in the first rendered image to the right by three pixels to the predicted position of the motion region, thereby obtaining an intermediate rendered image;
assuming that the motion trend of the motion region in the first rendered image is moving three pixels upward, the terminal device may determine the predicted position of the motion region as moving three pixels upward according to the indication information of the motion region in the first rendered image and the motion trend of the motion region in the first rendered image, and then move the position of the texture information of the motion region in the first rendered image three pixels upward to the predicted position of the motion region, thereby obtaining an intermediate rendered image;
assuming that the motion trend of the motion region in the first rendered image is to move three pixels downward, the terminal device may determine the predicted position of the motion region as moving three pixels downward according to the indication information of the motion region in the first rendered image and the motion trend of the motion region in the first rendered image, and then move the position of the texture information of the motion region in the first rendered image three pixels downward to the predicted position of the motion region, thereby obtaining an intermediate rendered image.
It is understood that the above examples all take translation as the example of the motion trend of the motion region; in practical applications, the motion trend of the motion region may also be rotation, or translation plus rotation, and the like, which is not specifically limited herein.
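The pure-translation case described above can be sketched as follows; the function and parameter names are assumptions of the example, and hole filling for the pixels vacated by the motion region is handled separately (see the blank-filling step mentioned in the summary).

```python
# Illustrative sketch: paste the motion-region texture at its predicted position
# to form the intermediate rendered image (translation case only).
import numpy as np

def intermediate_image(first_image, region_mask, region_texture, dx, dy):
    """region_mask: boolean mask of the motion region; (dx, dy): predicted shift in pixels.
    region_texture is assumed to be an image-sized array holding the region's texture."""
    out = first_image.copy()
    ys, xs = np.nonzero(region_mask)
    new_xs = np.clip(xs + dx, 0, first_image.shape[1] - 1)
    new_ys = np.clip(ys + dy, 0, first_image.shape[0] - 1)
    out[new_ys, new_xs] = region_texture[ys, xs]   # move the texture to the predicted position
    return out
```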
S107: the terminal device determines second depth information according to the first depth information, indication information of a motion area in the first rendering image, texture information of the motion area and a motion trend of the motion area in the first rendering image.
In a specific embodiment, the terminal device may determine indication information of a motion region in the first depth information according to the indication information of the motion region in the first rendered image, determine a motion trend of the motion region in the first depth information according to the motion trend of the motion region in the first rendered image, and then determine the second depth information according to the first depth information, the indication information of the motion region in the first depth information, and the motion trend of the motion region in the first depth information.
In a more specific embodiment, the terminal device may determine the indication information of the motion region in the first depth information from the indication information of the motion region in the first rendered image. In general, the indication information of the motion region in the first rendered image corresponds to the indication information of the motion region in the first depth information. Therefore, the terminal device may determine the indication information of the motion region in the first depth information from the indication information of the motion region in the first rendered image.
In a more specific embodiment, the terminal device may determine the motion trend of the motion region in the first depth information from the motion trend of the motion region in the first rendered image. In general, the motion trend of the motion region in the first rendered image corresponds to the motion trend of the motion region in the first depth information. Therefore, the terminal device may determine the motion trend of the motion region in the first depth information from the motion trend of the motion region in the first rendered image.
In a more specific embodiment, the determining, by the terminal device, the second depth information according to the first depth information, the indication information of the motion area in the first depth information, and the motion trend of the motion area in the first depth information specifically includes:
assuming that the motion trend of the motion region in the first depth information is a shift of three pixels to the left, the terminal device may move the position of the depth values indicated by the indication information of the motion region in the first depth information three pixels to the left, according to the indication information of the motion region in the first depth information and the motion trend of the motion region in the first depth information, thereby obtaining second depth information;
assuming that the motion trend of the motion region in the first depth information is a shift of three pixels to the right, the terminal device may move the position of the depth values indicated by the indication information of the motion region in the first depth information three pixels to the right, according to the indication information of the motion region in the first depth information and the motion trend of the motion region in the first depth information, thereby obtaining second depth information;
assuming that the motion trend of the motion region in the first depth information is a shift of three pixels upward, the terminal device may move the position of the depth values indicated by the indication information of the motion region in the first depth information three pixels upward, according to the indication information of the motion region in the first depth information and the motion trend of the motion region in the first depth information, thereby obtaining second depth information;
assuming that the motion trend of the motion region in the first depth information is a shift of three pixels downward, the terminal device may move the position of the depth values indicated by the indication information of the motion region in the first depth information three pixels downward, according to the indication information of the motion region in the first depth information and the motion trend of the motion region in the first depth information, thereby obtaining the second depth information.
It is understood that the above examples all take translation as an example of the motion trend; in practical applications, the motion trend may also be rotation, or translation plus rotation, and the like, which is not specifically limited here.
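The second depth information can be obtained with the same kind of shift as the intermediate rendered image; only the per-pixel payload changes from texture to depth. A hedged sketch under the same assumptions as above (binary mask of the motion region, integer pixel offset, hypothetical function name):

```python
import numpy as np

def build_second_depth(first_depth, motion_mask, dx, dy):
    """Shift the depth values of the motion region by (dx, dy) pixels."""
    h, w = motion_mask.shape
    second_depth = first_depth.copy()
    ys, xs = np.nonzero(motion_mask)
    new_xs, new_ys = xs + dx, ys + dy
    # Keep only positions that remain inside the depth map.
    valid = (new_xs >= 0) & (new_xs < w) & (new_ys >= 0) & (new_ys < h)
    second_depth[new_ys[valid], new_xs[valid]] = first_depth[ys[valid], xs[valid]]
    return second_depth
```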
S108: and the terminal device generates three-dimensional point cloud data according to the intermediate rendering image and the second depth information.
In a specific embodiment, the generating, by the terminal device, three-dimensional point cloud data according to the intermediate rendering image and the second depth information specifically includes: the terminal device generates the three-dimensional point cloud data according to the position coordinates and pixel values of all pixel points in the intermediate rendering image and the position coordinates and depth values of all elements in the second depth information. Take, as an example, the intermediate rendering image being an n×m matrix C = (c_ij) of pixel values and the second depth information being an n×m matrix D = (d_ij) of depth values, where i = 1, …, n and j = 1, …, m.
the terminal device may generate three-dimensional point cloud data from the intermediate rendering image and the second depth information as shown in Table 1 below:
TABLE 1 Point cloud data

Numerical value    Coordinate value x    Coordinate value y    Coordinate value z
c_11               1                     1                     d_11
c_12               1                     2                     d_12
c_1m               1                     m                     d_1m
c_21               2                     1                     d_21
c_22               2                     2                     d_22
c_2m               2                     m                     d_2m
c_n1               n                     1                     d_n1
c_n2               n                     2                     d_n2
c_nm               n                     m                     d_nm
It should be understood that Table 1 above is only an example showing the correspondence between numerical values and three-dimensional coordinates (coordinate values x, y, z) in different point cloud data; in practical applications, the specific content and storage manner of the correspondence may take other forms, which are not specifically limited here.
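As an illustration of the correspondence in Table 1, the following sketch builds the point cloud rows (value, x, y, z) from an n×m intermediate rendered image and an n×m second depth map. A single-channel image and 1-based indices are assumed purely to match the table, and the function name is hypothetical.

```python
import numpy as np

def image_and_depth_to_point_cloud(intermediate_image, second_depth):
    """Build (value, x, y, z) point cloud rows as in Table 1.

    intermediate_image: n x m array of pixel values c_ij (a single channel
                        is assumed here for simplicity).
    second_depth:       n x m array of depth values d_ij.
    Returns an (n*m) x 4 array whose columns are value, x, y, z.
    """
    n, m = second_depth.shape
    # 1-based row (x) and column (y) indices, matching Table 1.
    xs, ys = np.meshgrid(np.arange(1, n + 1), np.arange(1, m + 1), indexing="ij")
    return np.stack(
        [intermediate_image.ravel(), xs.ravel(), ys.ravel(), second_depth.ravel()],
        axis=1,
    )
```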
S109: and the terminal device generates a second rendering image according to the acquired second state information and the point cloud data.
In a specific embodiment, the second state information may be state information S_{t+1}, state information S_{t+2}, and so on. It can be understood that when the second state information is state information S_{t+1}, the second rendered image may be rendered image I_{t+1}; when the second state information is state information S_{t+2}, the second rendered image may be rendered image I_{t+2}; and so on, which is not specifically limited here.
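The patent does not fix a particular projection model for rendering the second image from the point cloud and the second state information. The sketch below assumes a pinhole camera with known intrinsics K and that each piece of state information can be converted to a 4×4 camera-to-world pose; the point cloud built from the first view is re-projected into the second view with a simple z-buffer. All names and the projection model are assumptions for illustration, not the patent's method.

```python
import numpy as np

def rerender_from_point_cloud(points, K, pose_first, pose_second, height, width):
    """Re-project a point cloud built from the first view into the second view.

    points:      N x 4 array of (value, u, v, depth) rows, with (u, v) pixel
                 coordinates in the first view and depth in the first camera frame.
    K:           3 x 3 camera intrinsic matrix (an assumption).
    pose_first:  4 x 4 camera-to-world matrix derived from the first state info.
    pose_second: 4 x 4 camera-to-world matrix derived from the second state info.
    """
    values, us, vs, ds = points[:, 0], points[:, 1], points[:, 2], points[:, 3]

    # Unproject pixels into the first camera frame.
    pix = np.stack([us, vs, np.ones_like(us)], axis=0)            # 3 x N
    cam1 = np.linalg.inv(K) @ pix * ds                            # 3 x N

    # Move points from the first camera frame to the second camera frame.
    cam1_h = np.vstack([cam1, np.ones((1, cam1.shape[1]))])       # 4 x N
    cam2 = np.linalg.inv(pose_second) @ pose_first @ cam1_h       # 4 x N

    # Project into the second view and splat values with a z-buffer.
    proj = K @ cam2[:3]
    u2 = np.round(proj[0] / proj[2]).astype(int)
    v2 = np.round(proj[1] / proj[2]).astype(int)
    image = np.zeros((height, width))
    zbuf = np.full((height, width), np.inf)
    for val, x, y, z in zip(values, u2, v2, proj[2]):
        if 0 <= x < width and 0 <= y < height and 0 < z < zbuf[y, x]:
            image[y, x] = val
            zbuf[y, x] = z
    return image
```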
It is understood that the terminal device may display the first rendered image between step S105 and step S106, and display the second rendered image after step S109.
In the above solution, the remote device only needs to send the first rendered image and some related information to the terminal device, and the terminal device may predict the second rendered image according to the first rendered image and some related information. Therefore, resources required for transmission can be effectively reduced. In practical applications, a third rendered image, a fourth rendered image, and the like may be generated according to the acquired third state information, fourth state information, and the like. Further, the more rendering images the terminal device generates, the more transmission resources can be saved.
Referring to fig. 3, fig. 3 is a schematic flow chart of a second image generation method provided in the present application. As shown in fig. 3, the image generation method according to the present embodiment includes:
s201: terminal device respectively acquires a plurality of status information S 1 ,S 2 ,…,S t
S202: the terminal device converts a plurality of state information S 1 ,S 2 ,…,S t In turn to the remote device. Accordingly, the remote device receives a plurality of status information S transmitted from the terminal device 1 ,S 2 ,…,S t
S203: far awayThe end device is based on a plurality of state information S 1 ,S 2 ,…,S t Generating a plurality of rendered images I 1 ,I 2 ,…,I t
S204: remote device generating rendering image I t The first depth information of (1).
S205: the remote device renders a plurality of images I 1 ,I 2 ,…,I t Rendering an image I t The first depth information of (2) is transmitted to the terminal device. Accordingly, the terminal device receives a plurality of rendering images I transmitted by the remote device 1 ,I 2 ,…,I t And rendering image I t The first depth information of (1).
S206: the terminal device renders images I according to a plurality of 1 ,I 2 ,…,I t Indication information of the motion area in the first rendering image, texture information of the motion area in the first rendering image, and a motion trend of the motion area in the first rendering image are generated.
S207: the terminal device determines an intermediate rendering image according to the first rendering image, indication information of a motion area in the first rendering image, texture information of the motion area and a motion trend of the motion area.
S208: the terminal device determines second depth information according to the first depth information of the first rendering image, indication information of the motion area in the first rendering image, texture information of the motion area in the first rendering image and a motion trend of the motion area in the first rendering image.
S209: and the terminal device generates three-dimensional point cloud data according to the intermediate rendering image and the second depth information.
S210: and the terminal device generates a second rendering image according to the acquired second state information and the point cloud data.
For the sake of simplicity, the present embodiment does not describe definitions of the plurality of status information, the plurality of rendering images, the first depth information of the first rendering image, the indication information of the motion region in the first rendering image, the texture information of the motion region in the first rendering image, the motion trend of the motion region in the first rendering image, and the like, and also does not describe in detail the first depth information in the first rendering image, the indication information of the motion region in the first rendering image, the texture information of the motion region in the first rendering image, and the calculation method of the motion trend of the motion region in the first rendering image, the generation method of the intermediate rendering image, the determination method of the second depth information, the generation method of the point cloud data, and the generation method of the second rendering image, which are specifically described in fig. 2 and related description.
It is to be understood that the embodiment shown in fig. 2 is described by taking the indication information of the motion region in the first rendered image, the texture information of the motion region in the first rendered image, and the motion trend of the motion region in the first rendered image as examples generated by the remote device, the embodiment shown in fig. 3 is described by taking the indication information of the motion region in the first rendered image, the texture information of the motion region in the first rendered image, and the motion trend of the motion region in the first rendered image as examples generated by the terminal device, and in practical applications, the indication information of the motion region in the first rendered image, the texture information of the motion region in the first rendered image, and the motion trend of the motion region in the first rendered image may be partially generated by the remote device and partially generated by the terminal device, which is not specifically limited herein.
Referring to fig. 4, fig. 4 is a schematic flow chart of a third image generation method provided in the present application. As shown in fig. 4, the image generation method according to the present embodiment includes:
s301: terminal device respectively acquires a plurality of status information S 1 ,S 2 ,…,S t
S302: the terminal device converts a plurality of state information S 1 ,S 2 ,…,S t In turn to the remote device. Accordingly, the remote device receives a plurality of status information S transmitted from the terminal device 1 ,S 2 ,…,S t
S303: the remote device is based on a plurality of state information S 1 ,S 2 ,…,S t Generating a plurality of rendered images I 1 ,I 2 ,…,I t
S304: the remote device generates first depth information of the first rendered image, indication information of a motion area in the first rendered image, texture information of the motion area in the first rendered image, and a motion trend of the motion area in the first rendered image.
S305: and the remote device sends the first rendering image, the first depth information, the indication information of the motion area in the first rendering image, the texture information of the motion area and the motion trend of the motion area to the terminal device. Accordingly, the terminal device receives the first rendering image, the first depth information, the indication information of the motion area in the first rendering image, the texture information of the motion area and the motion trend of the motion area, which are sent by the far-end device.
S306: the terminal device divides the first rendering image into a dynamic rendering area and a static rendering area according to the indication information of the motion area in the first rendering image.
In a specific embodiment, the first rendered image may be rendered image I_t. In this case, the history image of the first rendered image may be some or all of rendered image I_1 to rendered image I_{t-1}. It will be appreciated that in other embodiments, the first rendered image may also be rendered image I_{t-1}, rendered image I_{t-2}, and so on, which is not specifically limited here. Hereinafter, the first rendered image is taken as rendered image I_t for description.
In a particular embodiment, as shown in FIG. 5, the first rendered image includes a dynamic rendering region and a static rendering region. The dynamic rendering region includes a motion region in the first rendered image. In a particular embodiment, the dynamic rendering region may be a circumscribed rectangle of the motion region in the first rendered image. The static rendering area is an area of the first rendering image except the dynamic rendering area.
In a specific embodiment, when the dynamic rendering area is a circumscribed rectangle of the motion area in the first rendered image, the dynamic rendering area may be determined as follows: the abscissa of the rightmost pixel point in the motion area, the abscissa of the leftmost pixel point in the motion area, the ordinate of the uppermost pixel point in the motion area, and the ordinate of the lowermost pixel point in the motion area in the first rendered image together define the circumscribed rectangle of the motion area.
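A minimal sketch of this bounding-rectangle computation, assuming the indication information of the motion area is available as a binary mask (the function name is illustrative):

```python
import numpy as np

def dynamic_rendering_region(motion_mask):
    """Circumscribed rectangle of the motion region given its binary mask."""
    ys, xs = np.nonzero(motion_mask)
    left, right = xs.min(), xs.max()    # leftmost / rightmost abscissa
    top, bottom = ys.min(), ys.max()    # uppermost / lowermost ordinate
    return left, right, top, bottom
```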
S307: and the terminal device divides the first depth information in the first rendering image into dynamic depth information and static depth information according to the indication information of the motion area in the first rendering image, wherein the dynamic depth information corresponds to the motion area.
In a specific embodiment, the terminal device may determine, according to the indication information of the motion region in the first rendered image, a position coordinate of the motion region in the first depth information of the first rendered image, and then divide the first depth information of the first rendered image into the dynamic depth information and the static depth information according to the position coordinate of the motion region in the first depth information of the first rendered image.
In a specific embodiment, the terminal device may determine the position coordinates of the motion region in the first rendered image according to the indication information of the motion region in the first rendered image, and then determine the position coordinates of the motion region in the first depth information of the first rendered image according to the position coordinates of the motion region in the first rendered image. Here, the position coordinates of the motion region in the first rendered image and the position coordinates of the motion region in the first depth information of the first rendered image are substantially the same.
In a particular embodiment, the first depth information of the first rendered image includes dynamic depth information and static depth information. The dynamic depth information comprises a motion region in the first depth information of the first rendered image. In a specific embodiment, the dynamic depth information may be a bounding rectangle of the motion region in the first depth information of the first rendered image. The static depth information is an area of the first depth information of the first rendered image other than the dynamic depth information.
In a specific embodiment, when the dynamic depth information is a circumscribed rectangle of the motion region in the first depth information of the first rendered image, the dynamic depth information may be determined as follows: the abscissa of the rightmost element in the motion region, the abscissa of the leftmost element in the motion region, the ordinate of the uppermost element in the motion region, and the ordinate of the lowermost element in the motion region in the first depth information of the first rendered image together define the circumscribed rectangle of the motion region in the first depth information.
S308: the terminal device obtains an intermediate dynamic rendering image and dynamic depth information according to a dynamic rendering area in the first rendering image, dynamic depth information in the first depth information of the first rendering image, indication information of a motion area in the first rendering image, texture information of the motion area in the first rendering image, and a motion trend of the motion area in the first rendering image.
In a specific embodiment, a process of generating, by the terminal device, the intermediate dynamic rendering image and the dynamic depth information according to the dynamic rendering area in the first rendering image, the dynamic depth information in the first depth information of the first rendering image, the indication information of the motion area in the first rendering image, the texture information of the motion area in the first rendering image, and the motion trend of the motion area in the first rendering image is similar to a process of generating, by the terminal device, the intermediate rendering image and the second depth information according to the indication information of the motion area in the first rendering image, the texture information of the motion area in the first rendering image, and the motion trend of the motion area in the first rendering image in the embodiment shown in fig. 2, which is specifically referred to above.
S309: and the terminal device obtains a dynamic rendering image according to the second state information, the intermediate dynamic rendering image and the dynamic depth information.
In a specific embodiment, the terminal device obtains three-dimensional intermediate point cloud data according to the intermediate dynamic rendering image and the dynamic depth information, and then generates a dynamic rendering image according to the second state information and the intermediate point cloud data.
In a specific embodiment, a process of the terminal device generating the dynamic rendering image according to the second state information and the intermediate point cloud data is similar to a process of the terminal device generating the second rendering image according to the acquired second state information and the point cloud data in the embodiment shown in fig. 2, which is specifically referred to above.
S310: and the terminal device obtains a static rendering image according to the second state information, the static rendering area and the static depth information.
In a specific embodiment, a process of generating a static rendering image by the terminal device according to the second state information, the static rendering area, and the static depth information is similar to a process of generating a second rendering image by the terminal device according to the acquired second state information and the point cloud data in the embodiment shown in fig. 2, which is specifically referred to above.
S311: and the terminal device obtains a second rendering image according to the dynamic rendering image and the static rendering image.
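One straightforward way to combine the two results, assuming the dynamic rendering image and the static rendering image are aligned NumPy arrays of the same size and the dynamic region is given as a binary mask, is a per-pixel overlay. This is only a sketch under those assumptions, not necessarily the patent's composition rule.

```python
def compose_second_image(static_image, dynamic_image, dynamic_mask):
    """Overlay the re-rendered dynamic region onto the static rendering.

    static_image, dynamic_image: H x W x 3 arrays from steps S310 and S309.
    dynamic_mask:                H x W boolean array marking the dynamic region.
    """
    second_image = static_image.copy()
    second_image[dynamic_mask] = dynamic_image[dynamic_mask]
    return second_image
```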
For the sake of simplicity, the present embodiment does not describe definitions of the plurality of status information, the plurality of rendering images, the first depth information of the first rendering image, the indication information of the motion region in the first rendering image, the texture information of the motion region in the first rendering image, the motion trend of the motion region in the first rendering image, and the like, nor does it describe in detail the first depth information of the first rendering image, the indication information of the motion region in the first rendering image, the texture information of the motion region in the first rendering image, and the calculation method of the motion trend of the motion region in the first rendering image, which please refer to fig. 2 and the related description specifically.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a rendering system provided in the present application. As shown in fig. 6, the rendering system provided in this embodiment includes a terminal device and a remote device. Wherein, the terminal device and the far-end device can communicate with each other.
As shown on the left side of fig. 6, the terminal apparatus includes: a communication module 101, a rendering module 102, and a display module 103.
The communication module 101 is configured to send first state information to the remote apparatus, where the first state information includes first position information and first posture information of a terminal device where the terminal apparatus is located.
A rendering module 102, configured to obtain an intermediate rendering image according to a first rendering image corresponding to the first state information, indication information of a motion region in the first rendering image, texture information of the motion region, and a motion trend of the motion region; obtaining second depth information according to the first depth information of the first rendering image and the motion trend of the motion area; rendering according to the intermediate rendering image, the second depth information and second state information to obtain a second rendering image, wherein the occurrence time of the second state information is later than that of the first state information.
A display module 103, configured to display the first rendered image and the second rendered image.
As shown on the right of fig. 6, the distal device comprises: a rendering module 301 and a communication module 302.
The rendering module 301 is configured to perform rendering according to the first state information to obtain a first rendered image; and determining a motion area in the first rendering image and a motion trend of the motion area according to the first rendering image and a historical rendering image, wherein the historical rendering image is obtained earlier than the first rendering image, and the first rendering image and the historical rendering image contain the same target.
A communication module 302, configured to send the first rendered image, the first depth information of the first rendered image, the indication information of the motion region in the first rendered image, the texture information of the motion region, and the motion trend of the motion region to the terminal device, where the indication information of the motion region in the first rendered image is used to indicate a position of the motion region in the first rendered image.
In this embodiment, the definitions of the first state information, the second state information, the first rendered image, the second rendered image, the first depth information of the first rendered image, the indication information of the motion region in the first rendered image, the texture information of the motion region in the first rendered image, the motion trend of the motion region in the first rendered image, and the like are not described, and the detailed descriptions of the first depth information in the first rendered image, the indication information of the motion region in the first rendered image, the texture information of the motion region in the first rendered image, the calculation method of the motion trend of the motion region in the first rendered image, the generation method of the intermediate rendered image, the determination method of the second depth information, the generation method of the point cloud data, and the generation method of the second rendered image are also not described, specifically refer to fig. 2 and the related descriptions. In the above scheme, the terminal device may be configured to execute the steps executed by the terminal device in the image generation method shown in fig. 2 to 4, and the remote device may execute the steps executed by the remote device in the image generation method shown in fig. 2 to 4.
The embodiment of the application also provides a computing device cluster. As shown in fig. 7, the computing device cluster includes at least one computing device 400. One or more computing devices 400 in the computing device cluster may include a processor 404, a memory 406, and a communication interface 408. The processor 404, the memory 406, and the communication interface 408 are connected by a bus 402. The memory 406 may store instructions for performing the steps performed by the remote device in the image generation method. For example, the memory 406 in fig. 7 holds instructions for performing the functions of the rendering module 301, instructions for performing the functions of the communication module 302, and instructions for performing the functions of the storage module 303.
In some possible implementations, one or more computing devices 400 in the cluster of computing devices may also be used to execute some of the instructions of the steps performed by the remote device in the image generation methods shown in fig. 2-4, for example. In other words, a combination of one or more computing devices 400 may collectively execute the instructions of the steps performed by the remote device in the image generation method as shown in fig. 2-4.
It is noted that the memory 406 in different computing devices 400 in a cluster of computing devices may store different instructions.
Fig. 8 shows one possible implementation. As shown in fig. 8, two computing devices 400A and 400B are connected via a communication interface 408. Memory in computing device 400A has stored thereon instructions for performing the functions of rendering module 301 and communication module 302. Memory in computing device 400B has stored thereon instructions for performing the functions of storage module 303. In other words, the memory 406 of the computing devices 400A and 400B collectively store instructions for performing the steps performed by the remote device in the image generation method as shown in fig. 2-4.
The connection manner between the computing devices in the cluster shown in fig. 8 may take into account that the steps performed by the remote device in the image generation method provided by the present application require storing a large amount of operating data. Therefore, the storage function may be performed by computing device 400B.
It should be understood that the functionality of computing device 400A shown in fig. 8 may also be performed by multiple computing devices 400. Likewise, the functionality of computing device 400B may be performed by multiple computing devices 400.
In some possible implementations, one or more computing devices in a cluster of computing devices may be connected over a network. Wherein the network may be a wide area network or a local area network, etc. Fig. 9 shows one possible implementation. As shown in fig. 9, two computing devices 400C and 400D are connected via a network. In particular, connections are made to the network through communication interfaces in the respective computing devices. In this class of possible implementations, memory 406 in computing device 400C holds instructions to execute rendering module 301. Also, instructions to execute storage module 303 and rendering module 301 are stored in memory 406 in computing device 400D.
The connection manner between the computing devices in the cluster shown in fig. 9 may take into account that, in the image generation method provided in the present application, the steps performed by the remote device require storing a large amount of operating data, and therefore the functions implemented by the storage module 303 and the rendering module 301 may be executed by computing device 400D.
It should be understood that the functionality of computing device 400C shown in fig. 9 may also be performed by multiple computing devices 400. Likewise, the functionality of computing device 400D may be performed by multiple computing devices 400.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a terminal device provided in the present application. As shown in fig. 10, the terminal device provided in this embodiment may be described by taking a VR device as an example. The VR device may include a housing coupled to a frame, where an audio output device including, for example, speakers mounted in a headset, is also coupled to the frame. The front of the housing is rotated away from the base of the housing so that some of the components housed in the housing are visible. The display may be mounted on an inwardly facing side of the front of the housing. The lens may be mounted in the housing between the user's eye and the display when the front portion is in the closed position against the base portion of the housing.
As shown in fig. 10, the VR device may include a processor 501, a memory 502, a communication module 503, a sensing system 504, and a display 505.
The processor 501 may be used to read and execute computer readable instructions. In particular, the processor 501 may mainly include a controller, an arithmetic unit, and registers. The controller is mainly responsible for instruction decoding and sends out control signals for the operations corresponding to the instructions. The arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations, logic operations, and the like, and can also perform address operations and conversions. The registers are mainly responsible for temporarily storing register operands, intermediate operation results, and the like during instruction execution. In a specific implementation, the hardware architecture of the processor 501 may be an Application Specific Integrated Circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like.
In some embodiments, the processor 501 may be configured to learn status information of the user and transmit the status information of the user through one or more of the wireless communication processing module and the wired LAN communication processing module. The processor 501 may be configured to parse the first rendered image and the intermediate data (e.g., the first depth information, the indication information of the motion region in the first rendered image, the texture information of the motion region, the motion trend of the motion region, etc.) received by the wireless communication processing module and/or the wired LAN communication processing module. The processor 501 may be configured to perform corresponding processing operations according to the received rendered image and the intermediate data to generate a second rendered image, and instruct the display 505 to display one or more of the first rendered image and the second rendered image.
A memory 502 is coupled to the processor 501 for storing various software programs and/or sets of instructions. In particular implementations, memory 502 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 502 may store an operating system, such as an embedded operating system like uCOS, vxWorks, RTLinux, etc. The memory 502 may also store communication programs that may be used to communicate with the electronic device 100, one or more servers, or additional devices.
The communication module 503 may include one or more of a wireless communication processing module, a wired Local Area Network (LAN) communication processing module, and an RS-232 communication processing module. The wireless communication processing module may include one or more of a bluetooth communication processing module and a Wireless Local Area Network (WLAN) communication processing module. The wireless communication processing module may also include a cellular mobile communication processing module. The cellular mobile communication processing module may communicate with other devices through cellular mobile communication technology. The wired LAN communication processing module may be used to communicate with other devices in the same LAN through a wired LAN, and may also be used to connect to a WAN through a wired LAN, communicating with devices in the WAN. The RS-232 communication processing module can be used for communicating with other equipment through an RS-232 interface.
The sensing system 504 includes various sensors such as audio sensors, image/light sensors, position sensors/tracking devices (inertial measurement units including gyroscopes and accelerometers), cameras, and so forth. Among other things, cameras can be used to capture still images and moving images. The images captured by the camera may be used to help track the physical location (e.g., position/location and/or orientation) of the user and/or the controller in the real world or in a physical environment relative to the virtual environment. A tracking device (for determining or tracking pose information of a physical object or VR controller) which may include, for example, an inertial measurement unit, an accelerometer, an optical detector, or a camera or other sensor/device that detects position or orientation. For example, where a VR controller or a user operating or moving the VR controller can at least partially control a physical object or a pose (e.g., position and/or orientation) of the VR controller, the tracking device can provide pose information. In a particular embodiment, the sensing system 504 may also include a gaze tracking device to detect and track the user's eye gaze. The gaze tracking device may include, for example, an image sensor or a plurality of image sensors to capture images of the user's eyes, such as particular portions of the user's eyes, e.g., the pupils, to detect and track the direction and movement of the user's gaze.
The display 505 may be used to display one or more of the first rendered image and the second rendered image.
It is to be understood that the structure illustrated in fig. 10 does not constitute a specific limitation on the VR device. In other embodiments of the present application, the VR device may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The present embodiment does not describe definitions of the first state information, the second state information, the first rendered image, the second rendered image, the first depth information of the first rendered image, the indication information of the motion region in the first rendered image, the texture information of the motion region in the first rendered image, the motion trend of the motion region in the first rendered image, and the like, nor does it describe in detail the first depth information in the first rendered image, the indication information of the motion region in the first rendered image, the texture information of the motion region in the first rendered image, and the motion trend calculation method of the motion region in the first rendered image, the intermediate rendered image generation method, the second depth information determination method, the point cloud data generation method, and the second rendered image generation method, which are specifically described in fig. 2 and related descriptions. In the above scheme, the terminal device may be configured to execute the steps executed by the terminal device in the image generation method shown in fig. 2 to 4.
In the above embodiments, all or part of the implementation may be realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, or digital subscriber line) or wirelessly (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.

Claims (23)

1. An image generation method is applied to a terminal device, wherein a rendering system in which the terminal device is located further comprises a remote device, and the method comprises the following steps:
sending first state information to the remote device, wherein the first state information comprises first position information and first posture information of terminal equipment where the terminal device is located;
obtaining an intermediate rendering image according to a first rendering image corresponding to the first state information, indication information of a motion area in the first rendering image, texture information of the motion area and a motion trend of the motion area;
obtaining second depth information according to the first depth information of the first rendering image and the motion trend of the motion area;
rendering according to the intermediate rendering image, the second depth information and second state information to obtain a second rendering image, wherein the occurrence time of the second state information is later than that of the first state information.
2. The method of claim 1, wherein the first rendered image and the second rendered image are displayed.
3. The method according to claim 1 or 2, wherein the rendering according to the intermediate rendered image, the second depth information and the second state information, obtaining a second rendered image comprises:
obtaining three-dimensional point cloud data according to the intermediate rendering image and the second depth information;
rendering according to the three-dimensional point cloud data and the second state information to obtain a second rendering image.
4. The method according to any one of claims 1 to 3, wherein the obtaining an intermediate rendering image according to the first rendering image corresponding to the first state information, indication information of a motion region in the first rendering image, texture information of the motion region, and a motion trend of the motion region comprises:
determining a predicted position of a motion area according to indication information of the motion area in the first rendering image and the motion trend of the motion area;
and obtaining the intermediate rendering image according to the first rendering image, the texture information of the motion area and the predicted position of the motion area.
5. The method according to any one of claims 1 to 4, wherein before obtaining the intermediate rendered image according to the first rendered image corresponding to the first state information, the indication information of the motion region in the first rendered image, the texture information of the motion region, and the motion trend of the motion region, the method further comprises:
receiving a first rendering image corresponding to the first state information, first depth information of the first rendering image, indication information of a motion area in the first rendering image, texture information of the motion area, and a motion trend of the motion area, which are sent by the remote device.
6. The method according to claim 1, wherein the obtaining an intermediate rendering image according to the first rendering image corresponding to the first state information, indication information of a motion area in the first rendering image, texture information of the motion area, and a motion trend of the motion area comprises:
dividing a first rendering image into a dynamic rendering area and a static rendering area according to the first rendering image corresponding to the first state information and indication information of a motion area in the first rendering image, wherein the dynamic rendering area comprises the motion area;
obtaining the intermediate rendering image according to the dynamic rendering area, the indication information of the motion area in the dynamic rendering area, the texture information of the motion area and the motion trend of the motion area;
obtaining second depth information according to the depth information of the dynamic rendering area and the motion trend of the motion area;
obtaining a rendering result of the static rendering area according to the second state information, the texture information of the static rendering area and the depth information of the static rendering area;
rendering according to the intermediate rendering image, the second depth information and the second state information to obtain a rendering result of the dynamic rendering area;
and obtaining the second rendering image according to the rendering result of the static rendering area and the rendering result of the dynamic rendering area.
7. An image generation method is applied to a far-end device, a rendering system in which the far-end device is located further comprises a terminal device, and the method comprises the following steps:
rendering according to the first state information to obtain a first rendering image;
determining a motion area in the first rendering image and a motion trend of the motion area according to the first rendering image and a historical rendering image, wherein the historical rendering image is obtained earlier than the first rendering image, and the first rendering image and the historical rendering image contain the same target;
and sending the first rendering image, first depth information of the first rendering image, indication information of a motion area in the first rendering image, texture information of the motion area and a motion trend of the motion area to the terminal equipment, wherein the indication information of the motion area in the first rendering image is used for indicating the position of the motion area in the first rendering image.
8. The method of claim 7, further comprising:
storing the first rendered image.
9. The method according to claim 7 or 8, characterized in that the method further comprises:
dividing the first rendering image into a dynamic rendering area and a static rendering area according to indication information of a motion area in the first rendering image, wherein the dynamic rendering area comprises the motion area;
and sending dynamic rendering region indication information to the terminal device, wherein the dynamic region indication information is used for indicating the position of the dynamic rendering region in the first rendering image.
10. A terminal device, wherein a rendering system in which the terminal device is located further includes a remote device, the terminal device comprising:
the communication module is used for sending first state information to the remote device, wherein the first state information comprises first position information and first posture information of terminal equipment where the terminal device is located;
the rendering module is used for obtaining an intermediate rendering image according to a first rendering image corresponding to the first state information, indication information of a motion area in the first rendering image, texture information of the motion area and a motion trend of the motion area; obtaining second depth information according to the first depth information of the first rendering image and the motion trend of the motion area; rendering is carried out according to the intermediate rendering image, the second depth information and second state information to obtain a second rendering image, wherein the occurrence time of the second state information is later than that of the first state information.
11. The apparatus of claim 10, wherein the terminal apparatus further comprises:
a display module to display the first rendered image and the second rendered image.
12. The apparatus of claim 10 or 11, wherein the rendering module is configured to obtain three-dimensional point cloud data according to the intermediate rendering image and the second depth information; rendering according to the three-dimensional point cloud data and the second state information to obtain a second rendering image.
13. The apparatus according to any one of claims 10 to 12, wherein the rendering module is configured to determine a predicted position of the motion area according to indication information of the motion area in the first rendered image and a motion trend of the motion area; and obtaining the intermediate rendering image according to the first rendering image, the texture information of the motion area and the predicted position of the motion area.
14. The apparatus according to claim 10 or 13, wherein the communication module is configured to receive a first rendered image corresponding to the first state information, first depth information of the first rendered image, indication information of a motion region in the first rendered image, texture information of the motion region, and a motion trend of the motion region, which are sent by the remote apparatus.
15. The apparatus according to claim 10, wherein the rendering module is configured to divide the first rendered image into a dynamic rendering area and a static rendering area according to a first rendered image corresponding to the first state information and indication information of a motion area in the first rendered image, where the dynamic rendering area includes the motion area; obtaining the intermediate rendering image according to the dynamic rendering area, the indication information of the motion area in the dynamic rendering area, the texture information of the motion area and the motion trend of the motion area; obtaining second depth information according to the depth information of the dynamic rendering area and the motion trend of the motion area; obtaining a rendering result of the static rendering area according to the second state information, the texture information of the static rendering area and the depth information of the static rendering area; rendering according to the intermediate rendering image, the second depth information and the second state information to obtain a rendering result of the dynamic rendering area; and obtaining the second rendering image according to the rendering result of the static rendering area and the rendering result of the dynamic rendering area.
16. A remote apparatus, wherein a rendering system in which the remote apparatus is located further includes a terminal apparatus, the remote apparatus comprising:
the rendering module is used for rendering according to the first state information to obtain a first rendering image; determining a motion area in the first rendering image and a motion trend of the motion area according to the first rendering image and a historical rendering image, wherein the historical rendering image is obtained earlier than the first rendering image, and the first rendering image and the historical rendering image contain the same target;
the communication module is used for sending the first rendering image, the first depth information of the first rendering image, the indication information of the motion area in the first rendering image, the texture information of the motion area and the motion trend of the motion area to the terminal equipment, wherein the indication information of the motion area in the first rendering image is used for indicating the position of the motion area in the first rendering image.
17. The apparatus of claim 16, wherein the rendering module is further configured to store the first rendered image.
18. The apparatus according to claim 16 or 17, wherein the rendering module is further configured to divide the first rendered image into a dynamic rendering area and a static rendering area according to indication information of a motion area in the first rendered image, where the dynamic rendering area includes the motion area; the communication module is configured to send dynamic rendering region indication information to the terminal device, where the dynamic region indication information is used to indicate a position of the dynamic rendering region in the first rendered image.
19. A rendering system comprising the terminal apparatus of claim 10 and the remote apparatus of claim 16.
20. A terminal device comprising a processor and a memory, the processor being configured to execute instructions stored in the memory to perform the method of:
sending first state information to the remote device, wherein the first state information comprises first position information and first posture information of terminal equipment where the terminal device is located;
and obtaining a second rendering image according to a first rendering image, first depth information of the first rendering image, indication information of a motion area in the first rendering image, texture information of the motion area and a motion trend of the motion area, wherein the second state information corresponds to the first state information, and the occurrence time of the second state information is later than that of the first state information.
21. A remote device comprising a processor and a memory, the processor being configured to execute instructions stored in the memory to perform the method of:
rendering according to first state information sent by terminal equipment where the terminal device is located to obtain a first rendering image;
determining a motion area in the first rendering image and a motion trend of the motion area according to the first rendering image and a historical rendering image, wherein the historical rendering image is obtained earlier than the first rendering image, and the first rendering image and the historical rendering image contain the same target;
and sending the first rendering image, first depth information of the first rendering image, indication information of a motion area in the first rendering image, texture information of the motion area and a motion trend of the motion area to the terminal equipment, wherein the indication information of the motion area in the first rendering image is used for indicating the position of the motion area in the first rendering image.
22. A computer-readable storage medium comprising computer program instructions that, when executed by a cluster of computing devices, perform the method of any of claims 1 to 10.
23. A computer program product comprising instructions which, when executed by a cluster of computer devices, cause the cluster of computer devices to perform the method of any one of claims 1 to 10.
CN202110915064.4A 2021-08-10 2021-08-10 Image generation method, device, storage medium and program product Pending CN115937284A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110915064.4A CN115937284A (en) 2021-08-10 2021-08-10 Image generation method, device, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110915064.4A CN115937284A (en) 2021-08-10 2021-08-10 Image generation method, device, storage medium and program product

Publications (1)

Publication Number Publication Date
CN115937284A true CN115937284A (en) 2023-04-07

Family

ID=86549318

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110915064.4A Pending CN115937284A (en) 2021-08-10 2021-08-10 Image generation method, device, storage medium and program product

Country Status (1)

Country Link
CN (1) CN115937284A (en)
