CN116012519A - Cloud rendering system based on digital twinning - Google Patents


Info

Publication number: CN116012519A
Application number: CN202310026317.1A
Authority: CN (China)
Filing date: 2023-01-09
Publication date: 2023-04-25
Legal status: Pending
Inventors: 杨波, 刘凯
Assignee: Guangzhou Yunyun Shutwin Technology Co ltd
Other languages: Chinese (zh)

Classifications

    • Y02D 10/00 (Energy efficient computing, e.g. low power processors, power management or thermal management)


Abstract

The invention relates to the technical field of cloud rendering and discloses a cloud rendering system based on digital twinning, which comprises a cloud renderer running on a GPU, a Web3D renderer running in a browser, and a twin data service center, wherein the twin data service center is connected to the cloud renderer and the Web3D renderer respectively, and the cloud renderer and the Web3D renderer communicate through the WebSocket protocol. In the digital-twin-based cloud rendering system provided by the invention, a voxelized global illumination algorithm stores the illumination data directly into a lightmap, and the illumination information is then transmitted to the front end as an H.264 video stream. Since a lightmap is independent of the viewpoint, the front end only needs to sample the map in the shader stage to obtain the illumination data, which is shaded onto object surfaces according to UV coordinates that ensure the objects do not overlap on the map.

Description

Cloud rendering system based on digital twinning
Technical Field
The invention relates to the technical field of cloud rendering, in particular to a cloud rendering system based on digital twinning.
Background
With the continuous development of Web3D technology, more and more websites use Web3D to display commodities and to provide games and entertainment, which improves the visual appeal and interactivity of these websites. As the core API library for Web3D applications, WebGL is supported by almost all Web browsers. However, in order to be embeddable in web pages, WebGL has to sacrifice some rendering capability, and mainstream browsers currently support only the WebGL 2.0 standard, which is based on OpenGL ES 3.0. This means that some techniques commonly used in three-dimensional rendering, such as compute shaders, geometry shaders and tessellation, cannot be used on the Web, which greatly limits the rendering effect of Web3D virtual scenes.
Disclosure of Invention
The invention provides a digital-twin-based cloud rendering system, which uses a voxelized global illumination algorithm to store the illumination data directly into a lightmap and then transmits the illumination information to the front end as an H.264 video stream. Since the lightmap is independent of the viewpoint, the front end only needs to sample the map in the shader stage to obtain the illumination data and shades it onto object surfaces according to UV coordinates, ensuring that objects do not overlap on the map.
The invention provides a digital-twin-based cloud rendering system, which comprises a cloud renderer running on a GPU, a Web3D renderer running in a browser and a twin data service center, wherein the twin data service center is connected to the cloud renderer and the Web3D renderer respectively, and the cloud renderer and the Web3D renderer communicate through the WebSocket protocol;
the user edits the scene through a scene editor in the Web3D renderer; the cloud renderer receives the changed data from the scene editor, synchronizes it to the original 3D scene, and then performs GBuffer precomputation, Lightmap generation, and Lightmap encoding and streaming;
the Web3D renderer receives the transmitted data, performs Lightmap decoding, Web front-end direct illumination calculation and Web renderer post-processing, and obtains and outputs the final rendering effect;
the twin data service center receives all data generated during the operation of the cloud renderer and the Web3D renderer, and analyzes, stores and forwards these data.
Further, in the cloud renderer,
the GBuffer precomputation precomputes the model-related information needed for illumination calculation in the virtual scene and stores it in the corresponding maps; the related information includes depth information, normal information, or vertex information;
the Lightmap generation determines the calculation range of the Lightmap according to the user's interactive operations, marks it, and generates the Lightmap through a map-space global illumination algorithm;
the Lightmap encoding and streaming performs H.264 encoding on the Lightmap and streams it to the Web front end.
Further, when the model-related information needed for illumination calculation in the virtual scene is precomputed and stored in the corresponding maps, if the position of a model in the scene changes, this stage needs to be updated continuously; otherwise it only needs to be generated once.
Further, in the Web3D renderer,
the Lightmap decoding decodes the H.264 data stream received from the cloud server and restores the Lightmap;
the Web front-end direct illumination calculation renders the direct illumination of the Web3D scene through the Web renderer;
the Web renderer post-processing combines the global illumination diffuse reflection and specular reflection data obtained by sampling the Lightmap to obtain and output the final rendering effect.
Further, generating the Lightmap includes:
acquisition of the UV coordinates of the Lightmap: parameterizing the triangle mesh into a two-dimensional UV space, scaling and translating the UVs, and packing them into the range [0,1] to form the Lightmap atlas, wherein the Lightmap UVs ensure that each surface in the scene corresponds one-to-one with a block of the map and does not overlap with other objects; the UV parameterization and packing are done with UVAtlas or RizomUV, and finally the generated UV coordinate data are filled into a second set of UV channels at the model's vertices;
texel generation and storage of the Lightmap: determining the coloring equation and its decomposition into a direct illumination part and an indirect illumination part, wherein the direct illumination part is rendered directly by the front-end Web application, and the indirect illumination part is provided by a real-time global illumination algorithm, a Light Probe recording SH (spherical harmonics), or an environment map, and is finally combined into a diffuse reflection part and a specular part, with 6 low-precision RGB565 color values stored directly in the 32 bits of RGBA8888.
Further, the coloring equation is:

L_o(v) = Σ_{k=1}^{n} ( c_diff/π + ((m + 8)/(8π)) · cos^m(θ_h) · c_spec ) ⊗ E_{L_k} · (n · l_k)⁺

where L_o is the outgoing radiance, i.e. the color seen on the object surface, expressed in RGB; E_{L_k} is the irradiance of the k-th light source; l_k is the incident light direction vector; v is the view direction; n is the surface normal at the shaded point; c_diff and c_spec are the diffuse reflectance and specular reflectance of the object surface; and m is the specular spread exponent (roughness) of the specular model.

The direct illumination part and the indirect illumination part are:

f_shade(v, k) = f_direct(k) + f_indirect(v, k)

The final combination into a diffuse reflection part and a specular part is:

f_indirect(v) = c_diff × diff(L_Ind) + R_F(θ) × spec(L_Ind)

where E_Ind is the indirect irradiance; diff is the diffuse part of E_Ind; spec is the specular part; and R_F is the Fresnel term.
Further, the calculation range of the Lightmap is determined as follows: for each Lightmap texel, its corresponding position must first be within the visible range of the current frame or the next two frames, and it is then calculated with the following method: the whole Lightmap is calculated when the client loads the model scene; when the user roams in the scene and only the camera changes, only the view-dependent refraction part of the global illumination is calculated with the VRLM method for the parts affected by the camera change.
Further, the determination of the calculation range of the Lightmap is followed by marking: when the user roams in the 3D scene on the client, the camera positions of the next two frames are predicted from the current frame with a Dead Reckoning algorithm, based on the difference between the time at which the front end received the previous frame's data and the time at which it received the current frame's data; 3 depth maps are rendered from the 3 camera viewpoints, and the depth information of each visible patch is recorded on the depth maps so as to mark the visibility information on the Lightmap. That is, the world position of each valid texel on the Lightmap is transformed into camera space through the camera perspective matrix, the z value of the transformed coordinate is compared with the value at the corresponding position of the depth map, and if the difference between the current texel value and the depth map value is within a certain threshold, the texel is considered visible.
The beneficial effects of the invention are as follows:
1. The invention stores the global illumination information in a viewpoint-independent Lightmap rather than in a viewpoint-dependent data structure, so all illumination information of a scene can be stored in one map. Even if the cloud cannot transmit new illumination data in time because of network delay or similar problems, the user can still see the previously transmitted illumination; and through the prediction mechanism, enough indirect illumination data can still be rendered in the following frames.
2. The global illumination calculation range and frequency are dynamically adjusted according to the user's roaming behavior: when the user roams in the scene, only the illumination information of the surfaces visible from the current viewpoint is calculated, so only part of the texels on the Lightmap change in one rendering pass. This information only needs to be updated when illumination-related elements of the scene change; once the Lightmap has been fully calculated and the scene does not change, the diffuse reflection illumination does not need to be recalculated.
3. The illumination data does not need to be rendered and transmitted frame by frame, and the client does not need to do excessive composition and processing work, which reduces its burden and enhances the realism of the picture.
Drawings
Fig. 1 is a schematic structural diagram of a digital twin-based cloud rendering system according to the present invention.
Fig. 2 is a schematic workflow diagram of the cloud renderer and the Web3D renderer in the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in Fig. 1 and Fig. 2, the invention provides a digital-twin-based cloud rendering system, which comprises a cloud renderer running on a GPU, a Web3D renderer running in a browser, and a twin data service center, wherein the twin data service center is connected to the cloud renderer and the Web3D renderer respectively, and the cloud renderer and the Web3D renderer communicate through the WebSocket protocol.
the user edits the scene through a scene editor in the Web3D renderer, the cloud renderer receives the changed data in the scene editor and synchronizes the changed data to an original 3D scene, and GBuffer precomputation, lightmap generation and Lightmap coding streaming transmission are further carried out;
in the cloud-renderer in question,
the GBuffer pre-calculation is used for pre-calculating model related information needed to be used in virtual scene illumination calculation, and the model related information is stored in a related map; wherein the related information includes depth information, normal information, or vertex information; if the model position in the scene changes, the stage needs to be updated continuously, otherwise, the model position in the scene needs to be generated once.
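As an illustration of this update-on-change behavior, the sketch below caches the precomputed maps and regenerates them only when a model position changes; the map formats and the version-counter mechanism are assumptions, since the text does not specify how the cloud renderer detects scene changes.

```typescript
// Minimal sketch of the precomputed GBuffer maps described above.
// The exact formats are assumptions; the text only names depth, normal and vertex data.
interface GBuffer {
  depth: Float32Array;         // per-texel depth, laid out in Lightmap UV space
  normal: Float32Array;        // xyz normal per texel
  worldPosition: Float32Array; // xyz world-space position per texel
  width: number;
  height: number;
}

let cachedGBuffer: GBuffer | null = null;
let sceneGeometryVersion = 0;   // bumped whenever a model's position changes
let cachedGeometryVersion = -1;

// Regenerate only when model positions have changed; otherwise reuse the cached maps.
function getGBuffer(bake: () => GBuffer): GBuffer {
  if (cachedGBuffer === null || cachedGeometryVersion !== sceneGeometryVersion) {
    cachedGBuffer = bake();
    cachedGeometryVersion = sceneGeometryVersion;
  }
  return cachedGBuffer;
}
```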
The Lightmap generation determines the calculation range of the Lightmap according to the user's interactive operations, marks it, and generates the Lightmap through a map-space global illumination algorithm.
the computing range of the Lightmap is determined as to be that for each Lightmap texel, the corresponding position of the Lightmap texel is required to be within the visible range of the present frame or the last two frames, and then the Lightmap texel is computed by the following computing method: and calculating the whole Lightmap in the stage of loading the model scene by the client, and calculating the refraction part global illumination related to the view point by using the VRLM method only for the parts which are changed by the camera when the user roams in the scene and only the camera is changed. In addition, the calculation method that can be used includes 1, if the light source changes dynamically, calculating diffuse reflection and refraction partial global illumination by using VRLM. 2. If the light source remains unintelligible for a long time after being changed, the part without the back VRLM mark is calculated on a 2/3 basis.
The Lightmap encoding and streaming performs H.264 encoding on the Lightmap and streams it to the Web front end.
The Web3D renderer receives the transmitted data, performs Lightmap decoding, Web front-end direct illumination calculation and Web renderer post-processing, and obtains and outputs the final rendering effect.
In the Web3D renderer,
the Lightmap decoding decodes the H.264 data stream received from the cloud server and restores the Lightmap (a decoding sketch follows below);
the Web front-end direct illumination calculation renders the direct illumination of the Web3D scene through the Web renderer;
the Web renderer post-processing combines the global illumination diffuse reflection and specular reflection data obtained by sampling the Lightmap to obtain and output the final rendering effect.
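As an illustration of the decoding stage named above, the following sketch uses the browser's WebCodecs VideoDecoder to decode incoming H.264 packets and upload each decoded frame into a WebGL texture that the Web renderer can sample; the codec string, the packet delivery format and the texture upload path are assumptions for illustration and are not prescribed by the text.

```typescript
// Decode the incoming H.264 Lightmap stream with WebCodecs and upload each frame
// as a WebGL texture that the Web renderer samples for indirect illumination.
function createLightmapDecoder(gl: WebGL2RenderingContext, lightmapTex: WebGLTexture) {
  const decoder = new VideoDecoder({
    output: (frame: VideoFrame) => {
      gl.bindTexture(gl.TEXTURE_2D, lightmapTex);
      // Browsers that support WebCodecs generally accept a VideoFrame as a texture source.
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, frame);
      frame.close();
    },
    error: (e) => console.error("Lightmap decode error:", e),
  });

  // "avc1.42E01E" (H.264 Baseline) is an assumed codec string for illustration.
  decoder.configure({ codec: "avc1.42E01E" });

  // Returns a callback that feeds one encoded packet from the WebSocket stream.
  return (packet: ArrayBuffer, isKeyFrame: boolean, timestampUs: number) => {
    decoder.decode(
      new EncodedVideoChunk({
        type: isKeyFrame ? "key" : "delta",
        timestamp: timestampUs,
        data: packet,
      })
    );
  };
}
```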
The twin data service center receives all data generated during the operation of the cloud renderer and the Web3D renderer; it analyzes, stores and forwards these data to realize information interaction, continuously accumulates twin data as the two renderers run, and provides basic data support for problem analysis, prediction and tracing in the rendering process.
The twin data service center can transmit data to a remote monitoring service system, so that the remote monitoring service system can obtain, through the twin data service center, all data generated during the operation of the cloud renderer and the Web3D renderer, which makes it convenient to analyze, process and mine the data and to monitor the working states of the cloud renderer and the Web3D renderer.
When a problem occurs in the rendering process, the historical operation processes and scenes of the cloud renderer and the Web3D renderer can be reproduced based on all the data stored in the twin data service center, which makes it convenient to find the cause of the problem.
The determination of the calculation range of the Lightmap is followed by marking: when the user roams in the 3D scene on the client, the camera positions of the next two frames are predicted from the current frame with a Dead Reckoning algorithm, based on the difference between the time at which the front end received the previous frame's data and the time at which it received the current frame's data (which is transmitted to the server). After the parameters of the current camera and the two predicted cameras are obtained, 3 depth maps are rendered from the 3 camera viewpoints, and the depth information of each visible patch is recorded on the depth maps so as to mark the visibility information on the Lightmap. That is, the world position of each valid texel on the Lightmap is transformed into camera space through the camera perspective matrix, the z value of the transformed coordinate is compared with the value at the corresponding position of the depth map, and if the difference between the current texel value and the depth map value is within a certain threshold, the texel is considered visible.
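A minimal sketch of the visibility test just described, assuming the depth maps are available as float arrays with normalized [0,1] depth; the matrix helper, depth-map layout and threshold value are illustrative assumptions.

```typescript
// Mark a Lightmap texel visible if its world position, projected into one of the
// three camera views (current frame plus two predicted frames), matches the depth map.
type Vec3 = [number, number, number];
type Mat4 = Float32Array; // 4x4, column-major

function transformPoint(m: Mat4, p: Vec3): [number, number, number, number] {
  return [
    m[0] * p[0] + m[4] * p[1] + m[8]  * p[2] + m[12],
    m[1] * p[0] + m[5] * p[1] + m[9]  * p[2] + m[13],
    m[2] * p[0] + m[6] * p[1] + m[10] * p[2] + m[14],
    m[3] * p[0] + m[7] * p[1] + m[11] * p[2] + m[15],
  ];
}

function isTexelVisible(
  worldPos: Vec3,
  viewProj: Mat4,           // camera perspective * view matrix
  depthMap: Float32Array,   // depth rendered from that camera, values in [0,1]
  size: number,             // depth map resolution (assumed square)
  threshold = 0.005         // illustrative tolerance
): boolean {
  const [x, y, z, w] = transformPoint(viewProj, worldPos);
  if (w <= 0) return false;                                         // behind the camera
  const ndcX = x / w, ndcY = y / w, ndcZ = z / w;
  if (ndcX < -1 || ndcX > 1 || ndcY < -1 || ndcY > 1) return false; // outside the frustum
  const px = Math.min(size - 1, Math.floor((ndcX * 0.5 + 0.5) * size));
  const py = Math.min(size - 1, Math.floor((ndcY * 0.5 + 0.5) * size));
  const storedDepth = depthMap[py * size + px];
  // Visible if the projected depth agrees with the depth map within the threshold.
  return Math.abs(ndcZ * 0.5 + 0.5 - storedDepth) < threshold;
}
```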
In addition, when the light source does not change, texels that already store illumination information do not need to recalculate the diffuse reflection part; only the view-dependent specular reflection information needs to be calculated. For each texel, 2 Boolean values need to be stored to determine whether it needs the diffuse and specular calculations. Therefore, the mark information is recorded in a map of 8-bit integer format (R8), and the two Boolean values are packed into one 8-bit integer by bit operations.
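A minimal sketch of packing the two per-texel Boolean flags into the 8-bit (R8) mark map described above; the choice of bit positions is an assumption.

```typescript
// Pack "needs diffuse" and "needs specular" into one 8-bit value (R8 mark map).
const NEEDS_DIFFUSE  = 1 << 0;
const NEEDS_SPECULAR = 1 << 1;

function packFlags(needsDiffuse: boolean, needsSpecular: boolean): number {
  return (needsDiffuse ? NEEDS_DIFFUSE : 0) | (needsSpecular ? NEEDS_SPECULAR : 0);
}

function unpackFlags(value: number): { needsDiffuse: boolean; needsSpecular: boolean } {
  return {
    needsDiffuse: (value & NEEDS_DIFFUSE) !== 0,
    needsSpecular: (value & NEEDS_SPECULAR) !== 0,
  };
}

// Example: a texel whose diffuse term is already baked only needs specular updates.
const markMap = new Uint8Array(1024 * 1024);   // one byte per Lightmap texel
markMap[0] = packFlags(false, true);
```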
The process of generating the Lightmap comprises the following steps:
the illumination information is stored in a mapping space, which is called illumination mapping (Lightmap), the illumination mapping records a storage structure of the illumination information on the surface of an object, UV coordinates of the mapping picture elements are in one-to-one correspondence with UV coordinates of a 3-bit model in a virtual scene, and the illumination information is indexed according to UV. Constructing and storing viewpoint-independent global illumination information in the Lightmap includes acquisition of UV coordinates of the Lightmap and texel generation and storage of the Lightmap.
Acquiring UV coordinates of a Lightmap, expressing a scene model as a triangular grid, parameterizing the triangular network into a two-dimensional UV space in order to save illumination pixels on the scene surface into a map, scaling, translating and packing UV into a range of [0,1] to form a Lightmap map set, wherein the UV of the Lightmap ensures that each surface in the scene corresponds to one block of the map one by one and does not overlap with other objects; the parameterization and packaging operation of UV are completed through UVALAS or RizomUV, and finally the generated UV coordinate data are filled into a second set of UV channels at the top of the model.
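As an illustration of the scaling, translation and packing step, the sketch below places each parameterized chart into its own cell of the [0,1] atlas and writes the result into a second UV channel; the naive grid packing and the data layout are assumptions made for brevity (tools such as UVAtlas or RizomUV perform far tighter packing in practice).

```typescript
// Scale/translate per-chart UVs into non-overlapping cells of the [0,1] Lightmap atlas
// and store them as a second UV set alongside the model's original UVs.
interface Chart {
  vertexIndices: number[];   // vertices belonging to this chart
  uv: Float32Array;          // parameterized (u, v) pairs, chart-local, in [0,1]
}

function packChartsIntoLightmapUV(charts: Chart[], vertexCount: number): Float32Array {
  const lightmapUV = new Float32Array(vertexCount * 2);  // second UV channel
  const cells = Math.ceil(Math.sqrt(charts.length));     // naive grid layout
  const cellSize = 1 / cells;
  const padding = 0.01 * cellSize;                       // keep charts from touching

  charts.forEach((chart, i) => {
    const offsetU = (i % cells) * cellSize + padding;
    const offsetV = Math.floor(i / cells) * cellSize + padding;
    const scale = cellSize - 2 * padding;
    chart.vertexIndices.forEach((vi, j) => {
      lightmapUV[vi * 2]     = offsetU + chart.uv[j * 2]     * scale;
      lightmapUV[vi * 2 + 1] = offsetV + chart.uv[j * 2 + 1] * scale;
    });
  });
  return lightmapUV;
}
```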
Texel generation and storage of the Lightmap: given the view vector v, when the object is illuminated by n punctual (non-area) light sources, the coloring equation is determined as:

L_o(v) = Σ_{k=1}^{n} ( c_diff/π + ((m + 8)/(8π)) · cos^m(θ_h) · c_spec ) ⊗ E_{L_k} · (n · l_k)⁺

where L_o is the outgoing radiance, i.e. the color seen on the object surface, expressed in RGB; E_{L_k} is the irradiance of the k-th light source; l_k is the incident light direction vector; v is the view direction; n is the surface normal at the shaded point; c_diff and c_spec are the diffuse reflectance and specular reflectance of the object surface; and m is the specular spread exponent (roughness) of the specular model.

In real-time rendering, the shading equation is decomposed into a direct illumination part and an indirect illumination part:

f_shade(v, k) = f_direct(k) + f_indirect(v, k)

The direct illumination part is rendered directly by the front-end Web application, while the indirect illumination part is provided by a real-time global illumination algorithm, a Light Probe recording SH, or an environment map, and is finally combined into a diffuse reflection part and a specular part:

f_indirect(v) = c_diff × diff(L_Ind) + R_F(θ) × spec(L_Ind)

where E_Ind is the indirect irradiance; diff is the diffuse part of E_Ind; spec is the specular part; and R_F is the Fresnel term.
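A small numeric sketch of the indirect-illumination combination f_indirect(v) above; Schlick's approximation is used for the Fresnel term R_F(θ), which is an assumption since the text does not name a specific Fresnel formulation, and the helper names are illustrative.

```typescript
type RGB = [number, number, number];

// Schlick's Fresnel approximation: R_F(θ) ≈ F0 + (1 - F0) * (1 - cosθ)^5  (assumed form).
function fresnelSchlick(f0: RGB, cosTheta: number): RGB {
  const k = Math.pow(1 - Math.max(0, Math.min(1, cosTheta)), 5);
  return [f0[0] + (1 - f0[0]) * k, f0[1] + (1 - f0[1]) * k, f0[2] + (1 - f0[2]) * k];
}

// f_indirect(v) = c_diff * diff(L_Ind) + R_F(θ) * spec(L_Ind)
function indirectLighting(cDiff: RGB, diffuse: RGB, f0: RGB, cosTheta: number, specular: RGB): RGB {
  const rf = fresnelSchlick(f0, cosTheta);
  return [
    cDiff[0] * diffuse[0] + rf[0] * specular[0],
    cDiff[1] * diffuse[1] + rf[1] * specular[1],
    cDiff[2] * diffuse[2] + rf[2] * specular[2],
  ];
}
```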
Finally, the 6 low-precision RGB565 color values are stored directly in the 32 bits of RGBA8888 texels: the specular illumination is simplified to a luminance value and stored in the alpha channel, while the RGB channels store the diffuse reflection irradiance.
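A sketch of one plausible reading of this texel storage: diffuse irradiance quantized to RGB565 precision in the RGB channels, and the specular term reduced to luminance in the alpha channel of a 32-bit RGBA8888 texel. The exact bit layout, including how the six RGB565 values are distributed, is not fully specified in the text, so this packing is an illustrative assumption.

```typescript
// Pack one Lightmap texel: diffuse irradiance in RGB (quantized to 5/6/5 bits then
// re-expanded to 8 bits) and specular luminance in the alpha channel.
function packLightmapTexel(diffuse: [number, number, number], specular: [number, number, number]): number {
  // Quantize each diffuse channel to RGB565 precision (inputs assumed in [0, 1]).
  const r5 = Math.round(Math.min(1, Math.max(0, diffuse[0])) * 31);
  const g6 = Math.round(Math.min(1, Math.max(0, diffuse[1])) * 63);
  const b5 = Math.round(Math.min(1, Math.max(0, diffuse[2])) * 31);

  // Re-expand to 8 bits per channel for storage in an RGBA8888 texture.
  const r8 = Math.round((r5 / 31) * 255);
  const g8 = Math.round((g6 / 63) * 255);
  const b8 = Math.round((b5 / 31) * 255);

  // Specular reduced to luminance (Rec. 709 weights) and stored in alpha.
  const lum = 0.2126 * specular[0] + 0.7152 * specular[1] + 0.0722 * specular[2];
  const a8 = Math.round(Math.min(1, Math.max(0, lum)) * 255);

  // 32-bit RGBA8888 texel, R in the low byte; >>> 0 keeps the value unsigned.
  return ((a8 << 24) | (b8 << 16) | (g8 << 8) | r8) >>> 0;
}
```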
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, apparatus, article or method that comprises the element.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention, and all equivalent structures or equivalent processes using the descriptions and drawings of the present invention or directly or indirectly applied to other related technical fields are included in the scope of the invention.

Claims (8)

1. A cloud rendering system based on digital twinning, characterized by comprising a cloud renderer running on a GPU, a Web3D renderer running in a browser and a twin data service center, wherein the twin data service center is connected to the cloud renderer and the Web3D renderer respectively, and the cloud renderer and the Web3D renderer communicate through the WebSocket protocol;
the user edits the scene through a scene editor in the Web3D renderer; the cloud renderer receives the changed data from the scene editor, synchronizes it to the original 3D scene, and then performs GBuffer precomputation, Lightmap generation, and Lightmap encoding and streaming;
the Web3D renderer receives the transmitted data, performs Lightmap decoding, Web front-end direct illumination calculation and Web renderer post-processing, and obtains and outputs the final rendering effect;
the twin data service center receives all data generated during the operation of the cloud renderer and the Web3D renderer, and analyzes, stores and forwards these data.
2. The digital twin based cloud rendering system of claim 1, wherein in the cloud renderer,
the GBuffer precomputation precomputes the model-related information needed for illumination calculation in the virtual scene and stores it in the corresponding maps, the related information including depth information, normal information, or vertex information;
the Lightmap generation determines the calculation range of the Lightmap according to the user's interactive operations, marks it, and generates the Lightmap through a map-space global illumination algorithm;
the Lightmap encoding and streaming performs H.264 encoding on the Lightmap and streams it to the Web front end.
3. The cloud rendering system according to claim 2, wherein, when the model-related information needed for illumination calculation in the virtual scene is precomputed and stored in the corresponding maps, if the position of a model in the scene changes, this stage needs to be updated continuously; otherwise it only needs to be generated once.
4. The digital twinning-based cloud rendering system of claim 1, wherein in the Web3D renderer,
the Lightmap decoding decodes the H.264 data stream received from the cloud server and restores the Lightmap;
the Web front-end direct illumination calculation renders the direct illumination of the Web3D scene through the Web renderer;
the Web renderer post-processing combines the global illumination diffuse reflection and specular reflection data obtained by sampling the Lightmap to obtain and output the final rendering effect.
5. The digital twinning-based cloud rendering system of claim 2, wherein generating the Lightmap includes:
acquisition of the UV coordinates of the Lightmap: parameterizing the triangle mesh into a two-dimensional UV space, scaling and translating the UVs, and packing them into the range [0,1] to form the Lightmap atlas, wherein the Lightmap UVs ensure that each surface in the scene corresponds one-to-one with a block of the map and does not overlap with other objects; the UV parameterization and packing are done with UVAtlas or RizomUV, and finally the generated UV coordinate data are filled into a second set of UV channels at the model's vertices;
texel generation and storage of the Lightmap: determining the coloring equation and its decomposition into a direct illumination part and an indirect illumination part, wherein the direct illumination part is rendered directly by the front-end Web application, and the indirect illumination part is provided by a real-time global illumination algorithm, a Light Probe recording SH, or an environment map, and is finally combined into a diffuse reflection part and a specular part, with 6 low-precision RGB565 color values stored directly in the 32 bits of RGBA8888.
6. The digital twin based cloud rendering system of claim 5, wherein the shading equation is:

L_o(v) = Σ_{k=1}^{n} ( c_diff/π + ((m + 8)/(8π)) · cos^m(θ_h) · c_spec ) ⊗ E_{L_k} · (n · l_k)⁺

where L_o is the outgoing radiance, i.e. the color seen on the object surface, expressed in RGB; E_{L_k} is the irradiance of the k-th light source; l_k is the incident light direction vector; v is the view direction; n is the surface normal at the shaded point; c_diff and c_spec are the diffuse reflectance and specular reflectance of the object surface; and m is the specular spread exponent (roughness) of the specular model;

the direct illumination part and the indirect illumination part are:

f_shade(v, k) = f_direct(k) + f_indirect(v, k)

the final combination into a diffuse reflection part and a specular part is:

f_indirect(v) = c_diff × diff(L_Ind) + R_F(θ) × spec(L_Ind)

where E_Ind is the indirect irradiance; diff is the diffuse part of E_Ind; spec is the specular part; and R_F is the Fresnel term.
7. The digital twinning-based cloud rendering system of claim 5, wherein the calculation range of the Lightmap is determined as follows: for each Lightmap texel, its corresponding position must first be within the visible range of the current frame or the next two frames, and it is then calculated with the following method: the whole Lightmap is calculated when the client loads the model scene; when the user roams in the scene and only the camera changes, only the view-dependent refraction part of the global illumination is calculated with the VRLM method for the parts affected by the camera change.
8. The digital twinning-based cloud rendering system of claim 2, wherein the determination of the calculation range of the Lightmap is followed by marking: when the user roams in the 3D scene on the client, the camera positions of the next two frames are predicted from the current frame with a Dead Reckoning algorithm, based on the difference between the time at which the front end received the previous frame's data and the time at which it received the current frame's data; 3 depth maps are rendered from the 3 camera viewpoints, and the depth information of each visible patch is recorded on the depth maps so as to mark the visibility information on the Lightmap; that is, the world position of each valid texel on the Lightmap is transformed into camera space through the camera perspective matrix, the z value of the transformed coordinate is compared with the value at the corresponding position of the depth map, and if the difference between the current texel value and the depth map value is within a certain threshold, the texel is considered visible.