CN111968215A - Volume light rendering method and device, electronic equipment and storage medium


Info

Publication number: CN111968215A
Authority: CN (China)
Prior art keywords: volume, rendering, illumination information, light, cloud
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202010747145.3A
Other languages: Chinese (zh)
Other versions: CN111968215B
Inventors: Peng Tong (彭通), Zhou Taosheng (周陶生), Wang Peng (王鹏), Xu Dan (徐丹)
Current Assignee: Perfect World Beijing Software Technology Development Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Perfect World Beijing Software Technology Development Co Ltd
Application filed by Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202010747145.3A
Publication of CN111968215A
Application granted; publication of CN111968215B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/60 Shadow generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

The present application relates to a volume light rendering method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: creating a high-definition rendering pipeline; rendering a volumetric cloud model to a render target in the high-definition rendering pipeline so that a cloud layer is displayed in screen space, where the volumetric cloud model represents the cloud layer in a virtual scene; calculating volumetric illumination information corresponding to each pixel in screen space; and rendering in the high-definition rendering pipeline according to the volumetric illumination information, so that the volumetric light corresponding to the cloud layer is displayed in screen space. This technical solution renders volumetric clouds in the HDRP, producing a cloud effect with high visual fidelity in the scene, adds interaction with volumetric light, and goes a long way toward improving the texture of the final image. With the added volumetric light effect, the image gains richer depth and layering, and the perceived realism of the scene is improved.

Description

Volume light rendering method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a volume light rendering method and apparatus, an electronic device, and a storage medium.
Background
When a light beam passes through a colloid, the scattering of the beam by colloid particles makes a bright "path" visible in the colloid when viewed perpendicular to the incident light; this is known as the Tyndall effect. In real-time rendering, this effect is commonly referred to as volumetric light (Volumetric Light). When sunlight passes through gaps in a cloud layer, shafts of light, also called God Rays, are formed. Compared with the lighting of earlier games, this effect gives the player a stronger sense of space in the scene and thus a more realistic impression.
Therefore, how to simulate the volumetric light effect corresponding to volumetric clouds is a technical problem that remains to be solved in the prior art.
Disclosure of Invention
In order to solve the technical problem or at least partially solve the technical problem, embodiments of the present application provide a volume light rendering method, apparatus, electronic device and storage medium.
According to an aspect of an embodiment of the present application, there is provided a volume light rendering method including:
creating a high-definition rendering pipeline;
rendering a volumetric cloud model to a rendering target in the high-definition rendering pipeline such that cloud layers are displayed to a screen space, wherein the volumetric cloud model is used to represent the cloud layers in a virtual scene;
calculating volume illumination information corresponding to each pixel point in the screen space;
rendering in the high-definition rendering pipeline according to the volume illumination information, so that the volume light corresponding to the cloud layer is displayed to the screen space.
Optionally, the calculating the volume illumination information corresponding to each pixel point in the screen space includes:
acquiring a view frustum corresponding to a camera view angle and a sun shadow map corresponding to a solar light source;
discretely processing the view frustum into a three-dimensional texture image;
calculating first illumination information corresponding to each voxel in the three-dimensional texture image according to the sun shadow map;
sampling according to light rays emitted by a main viewpoint of the camera view angle to obtain second illumination information of the voxel in the view line direction of the camera view angle;
and calculating the volume illumination information according to the first illumination information and the second illumination information.
Optionally, the calculating the volume illumination information according to the first illumination information and the second illumination information includes:
when the cloud layer shielding exists in the direction of the solar light source, acquiring the shadow intensity corresponding to the cloud layer in the direction of the solar light source;
calculating third illumination information corresponding to the voxel according to the shadow intensity and the first illumination information;
and calculating the volume illumination information according to the third illumination information and the second illumination information.
Optionally, the discretely processing the view frustum into a three-dimensional texture image includes:
determining a volume parameter corresponding to the three-dimensional texture image according to the screen resolution;
discretely processing the view frustum into a three-dimensional texture image corresponding to the volume parameter;
the sampling according to the light emitted from the main viewpoint of the camera view angle includes:
determining a sampling step length according to the volume parameter;
and sampling according to the light emitted by the main viewpoint of the camera view angle by the sampling step length.
Optionally, the determining, according to the screen resolution, a volume parameter corresponding to the three-dimensional texture image includes:
determining a width value and a height value in the volume parameter according to the screen resolution;
determining depth values in the volume parameter with a preset resolution lower than the screen resolution.
Optionally, the rendering in the high-definition rendering pipeline according to the volume illumination information includes:
performing atmospheric scattering sampling on the render target to obtain scattering light information corresponding to each pixel of the render target;
and rendering the volume illumination information according to the scattered light information.
Optionally, the performing atmospheric scattering sampling on the render target includes:
when an edge pixel of an opaque object is sampled, sampling sky colors adjacent to the edge pixel;
using the sky color as a color of the edge pixel.
Optionally, the obtaining of the solar shadow map corresponding to the solar light source includes:
acquiring a camera depth texture map obtained from the camera view angle and a light source depth texture map obtained from the solar light source direction;
and carrying out shadow collection calculation in the screen space according to the camera depth texture mapping and the light source depth texture mapping to obtain the sun shadow mapping.
Optionally, the obtaining the sun shadow map by performing shadow collection calculation in the screen space according to the camera depth texture map and the light source depth texture map includes:
determining a first depth value of each pixel in the camera depth texture map and a world space coordinate corresponding to the first depth value;
converting the world space coordinates of the pixels into light source space coordinates corresponding to the light source depth texture map;
comparing a second depth value corresponding to the light source space coordinate in the light source depth texture map with the first depth value;
and when the pixel is determined to be positioned in the shadow according to the comparison result of the first depth value and the second depth value, obtaining the sun shadow map according to the pixel value of the pixel positioned in the shadow.
According to another aspect of embodiments of the present application, there is provided a volume light rendering apparatus including:
a creation module to create a high definition rendering pipeline;
a first rendering module to render a volumetric cloud model to a rendering target in the high-definition rendering pipeline such that clouds are displayed to a screen space, wherein the volumetric cloud model is to represent the clouds in a virtual scene;
the calculation module is used for calculating volume illumination information corresponding to each pixel point in the screen space;
and the second rendering module is used for rendering in the high-definition rendering pipeline according to the volume illumination information, so that the volume light corresponding to the cloud layer is displayed to the screen space.
According to another aspect of an embodiment of the present application, there is provided an electronic device including: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the above method steps when executing the computer program.
According to another aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the above-mentioned method steps.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
the method has the advantages that the volume cloud rendering is realized in the HDRP, so that the volume cloud effect with high visual fidelity is generated in a scene, the interaction with volume light is increased in the volume cloud simulation process, and the method plays a great role in improving the texture of pictures. The volume light effect is added, the stereoscopic impression and the hierarchy of the image are richer, and the real perception of the scene is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments or technical solutions in the prior art of the present invention, the drawings used in the description of the embodiments or prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
Fig. 1 is a flowchart of a volume light rendering method according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method for volumetric light rendering according to another embodiment of the present application;
FIG. 3 is a flow chart of a method for volumetric light rendering according to another embodiment of the present application;
FIG. 4 is a flow chart of a method for volumetric light rendering according to another embodiment of the present application;
FIG. 5 is a flow chart of a method for volumetric light rendering according to another embodiment of the present application;
FIG. 6 is a flow chart of a method for volumetric light rendering according to another embodiment of the present application;
FIG. 7 is a flow chart of a method for volumetric light rendering according to another embodiment of the present application;
fig. 8 is a block diagram of a volume light rendering apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Volumetric clouds (Volumetric Clouds) use a graphics engine to simulate the translucent, irregular appearance of real clouds.
To improve the engine's image quality, Unity has introduced the High Definition Render Pipeline (HDRP), a scriptable render pipeline that provides high visual fidelity and targets PC and console platforms. Compared with the traditional render pipeline, HDRP allows the implementation of the pipeline to be fully customized through C# scripts. At present, HDRP is still at an early stage and lacks implementations of many specific rendering effects. In the present application, realistic high-definition volumetric clouds, and the rendering of the volumetric light corresponding to them, are implemented on top of HDRP.
First, the volumetric-cloud-based volume light rendering method provided by an embodiment of the present invention is described below.
Fig. 1 is a flowchart of a volume light rendering method according to an embodiment of the present disclosure. As shown in fig. 1, the method comprises the steps of:
step S11, creating a high-definition rendering pipeline;
step S12, rendering a volume cloud model to a rendering target in a high-definition rendering pipeline so as to display a cloud layer to a screen space, wherein the volume cloud model is used for representing the cloud layer in the virtual scene;
step S13, calculating volume illumination information corresponding to each pixel point in the screen space;
and step S14, rendering in a high-definition rendering pipeline according to the volume illumination information, so that the volume light corresponding to the cloud layer is displayed to the screen space.
In this embodiment, volumetric clouds are rendered in the HDRP, so that a volumetric cloud effect with high visual fidelity is produced in the scene; interaction with volumetric light is added during the cloud simulation, which goes a long way toward improving the texture of the final image. With the added volumetric light effect, the image gains richer depth and layering, and the perceived realism of the scene is improved.
The above steps S11 to S14 will be explained in detail.
In step S11, Unity offers two ways to create an HDRP project: upgrading an existing project to HDRP, or creating a new HDRP project. Either approach can be used in this embodiment.
In step S12, the volumetric cloud model is first added to the HDRP. The specific operation is: after Volume Lighting is turned on, add the volumetric cloud model under the Volume framework.
Optionally, the volumetric cloud model in this embodiment is a model obtained in advance through cloud simulation. Cloud simulation methods include, but are not limited to, the following:
(1) Cloud simulation techniques based on physical methods, such as particle systems, bubble modeling, or voxel modeling. For example, a cellular automaton algorithm is used to simulate the physical evolution of the volumetric cloud;
(2) Cloud simulation techniques based on existing empirical models, such as texture mapping methods or noise function methods. For example, a three-dimensional volumetric cloud model is constructed with a Perlin noise function; after a time dimension is added, the creation and disappearance of particles are controlled according to the number of frames the program has run, realizing the physical evolution of the volumetric cloud. A noise-based density sketch follows.
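As an illustration of approach (2), the following is a minimal Unity C# sketch of a noise-based cloud density field. The helper names and the coverage parameter are our own assumptions, not the patent's code; Unity's built-in Perlin noise is 2D, so in practice true 3D Perlin/Worley noise baked into a 3D texture would be used.

```csharp
using UnityEngine;

public static class CloudDensity
{
    // Fractal Brownian motion: sum several octaves of noise.
    public static float Fbm(Vector3 p, int octaves = 4)
    {
        float sum = 0f, amp = 0.5f, freq = 1f;
        for (int i = 0; i < octaves; i++)
        {
            sum += amp * Noise3D(p * freq);
            amp *= 0.5f;
            freq *= 2f;
        }
        return sum;
    }

    // Pseudo-3D noise from three 2D Perlin samples (an approximation).
    static float Noise3D(Vector3 p)
    {
        float xy = Mathf.PerlinNoise(p.x, p.y);
        float yz = Mathf.PerlinNoise(p.y, p.z);
        float zx = Mathf.PerlinNoise(p.z, p.x);
        return (xy + yz + zx) / 3f;
    }

    // Density with a coverage threshold: values below 'coverage' are empty sky.
    public static float Density(Vector3 worldPos, float coverage = 0.5f)
    {
        float n = Fbm(worldPos * 0.01f);
        return Mathf.Max(0f, n - coverage) / (1f - coverage);
    }
}
```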
The volumetric cloud model is rendered to a render target in the HDRP, so that the rendered cloud layer is displayed on the screen. A render target (render target) is a video-memory buffer into which pixels are rendered. In this step, the volumetric cloud model may be rendered to the default render target, the back buffer, which is physically a region of video memory holding the information to be drawn in the next frame. A new render target may also be created (for example with a RenderTarget2D-style class), reserving a separate region of video memory for drawing the volumetric cloud. Optionally, each part of the image may be drawn into its own render target, and the image elements then composited to form the final back-buffer data. The graphics card draws the scene's pixels by reading the data in the render target, using an Effect class, so that the cloud layer and its shadow are displayed on the screen.
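A minimal sketch of reserving such a dedicated render target in Unity; the texture format and name here are illustrative assumptions:

```csharp
using UnityEngine;

public static class CloudTarget
{
    // Reserve a separate render target for the volumetric cloud pass instead
    // of drawing straight into the back buffer.
    public static RenderTexture CreateCloudTarget(int width, int height)
    {
        var rt = new RenderTexture(width, height, 0, RenderTextureFormat.ARGBHalf);
        rt.name = "VolumetricCloudRT"; // assumed name, for debugging/profiling only
        rt.Create();
        return rt;
    }
}
```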
In step S13, the HDRP provides a Volumetric Lighting component. After the "Volumetrics" option controlling the global effect is enabled in the HDRP, a Density Volume is created to enclose the whole region of the scene that needs to be covered, a volumetric fog function is added in the scene settings, and a light source is added within the enclosed region, so that the volumetric lighting effect is added to the scene.
In step S14, a volume light is rendered according to the calculated volume illumination information, and a volume light effect corresponding to the cloud layer is displayed in the scene.
The process of calculating the volumetric illumination information in step S13 is described in detail below. In this embodiment, the volumetric light is implemented based on the shadow map (Shadow Map) of the solar light source and on ray marching (Ray Marching).
Fig. 2 is a flowchart of a volume light rendering method according to another embodiment of the present application. As shown in fig. 2, the step S13 includes the following steps:
step S21, acquiring a view frustum corresponding to the camera view angle and a sun shadow map corresponding to the solar light source;
step S22, discretely processing the view frustum into a three-dimensional texture image;
step S23, calculating first illumination information corresponding to each voxel in the three-dimensional texture image according to the sun shadow map;
step S24, sampling is carried out according to the light emitted by the main viewpoint of the camera view angle, and second illumination information of voxels in the view line direction of the camera view angle is obtained;
in step S25, volume illumination information is calculated from the first illumination information and the second illumination information.
In the above steps S21 to S23, the view frustum of camera space is projected into a space represented by a three-dimensional texture, and each voxel of the three-dimensional texture stores its current spatial information. Whether each voxel in the three-dimensional texture space is occluded, i.e., whether it is lit, can be determined from the sun shadow map. Specifically, the world coordinates of a voxel (obtained in a fragment shader (Fragment Shader), or reconstructed from the voxel's depth) are multiplied by the view-projection matrix (viewProjectionMatrix) of the light source to obtain the voxel's uv texture coordinates in the shadow map, and the voxel's depth is compared with the depth recorded in the shadow map. If the depth recorded in the shadow map is greater than or equal to that of the voxel, the voxel is not in shadow and should be lit; otherwise it is occluded and should not be lit. In this way, the illumination information corresponding to each voxel can be obtained from the solar shadow map.
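A minimal CPU-side sketch of this test in Unity C# (in practice it runs in a shader; the names, the array stand-in for the shadow map, and the OpenGL-style depth convention are assumptions):

```csharp
using UnityEngine;

public static class ShadowTest
{
    // worldPos: voxel position; lightViewProj: the light's view-projection
    // matrix; shadowMap: depths as seen from the light, in [0,1].
    public static bool IsLit(Vector3 worldPos, Matrix4x4 lightViewProj, float[,] shadowMap)
    {
        // Transform to light clip space, then to [0,1] uv + depth.
        Vector4 clip = lightViewProj * new Vector4(worldPos.x, worldPos.y, worldPos.z, 1f);
        Vector3 ndc = new Vector3(clip.x, clip.y, clip.z) / clip.w;
        float u = ndc.x * 0.5f + 0.5f;
        float v = ndc.y * 0.5f + 0.5f;
        float depth = ndc.z * 0.5f + 0.5f;

        if (u < 0f || u > 1f || v < 0f || v > 1f) return true; // outside the map: assume lit

        int x = Mathf.Clamp((int)(u * shadowMap.GetLength(0)), 0, shadowMap.GetLength(0) - 1);
        int y = Mathf.Clamp((int)(v * shadowMap.GetLength(1)), 0, shadowMap.GetLength(1) - 1);

        // Recorded depth >= voxel depth means nothing sits between the voxel
        // and the light, so the voxel is lit.
        return shadowMap[x, y] >= depth;
    }
}
```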
In step S24, ray marching is performed in each direction starting from the camera's main viewpoint to obtain the illumination information along the current view direction. Specifically, starting from the main viewpoint, the ray is advanced by one point at a time; the brightness at each point is sampled, and the scattered brightness of all sample points passed is summed to obtain the color of the voxel. The brightness of the emitted light is inversely proportional to the distance from the light source, and the scattered brightness of light in the medium is accumulated iteratively.
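A minimal sketch of this accumulation loop, reusing the ShadowTest and CloudDensity helpers sketched above; the falloff and color choices are illustrative assumptions:

```csharp
using UnityEngine;

public static class VolumetricMarch
{
    public static Color March(Vector3 origin, Vector3 dir, Vector3 sunPos,
                              Matrix4x4 lightViewProj, float[,] shadowMap,
                              float stepSize, int maxSteps)
    {
        Color accumulated = Color.black;
        Vector3 p = origin;
        for (int i = 0; i < maxSteps; i++)
        {
            p += dir * stepSize; // advance one point along the ray
            if (!ShadowTest.IsLit(p, lightViewProj, shadowMap))
                continue; // occluded sample point: no in-scattering

            // Brightness inversely proportional to distance from the light
            // source (bare falloff; see the extra factors discussed below).
            float falloff = 1f / Mathf.Max(1f, Vector3.Distance(sunPos, p));
            float density = CloudDensity.Density(p);
            accumulated += Color.white * falloff * density * stepSize;
        }
        return accumulated;
    }
}
```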
In step S25, the first illumination information and the second illumination information of each voxel are combined to obtain the voxel's illumination result after occlusion of the sun's rays, forming the volumetric light effect.
In this embodiment, the illumination information of each voxel in the three-dimensional texture is calculated from the solar shadow map and combined with the per-voxel illumination information along the view direction obtained by ray marching, yielding the illumination of each voxel of the three-dimensional space after the sunlight is occluded. Volumetric light blocked by the cloud layer, i.e., God Rays, is thus formed, giving the viewer a sense of space and, in turn, a more realistic impression.
In addition, when the illumination information is computed with ray marching alone, the scattered light has the same intensity in every direction and shows no variation, which is not realistic enough; moreover, the volumetric light cannot be occluded. Therefore, optionally, when calculating the intensity of the scattered light, the intensity should not be attenuated only by the distance to the light source, but should also be multiplied by at least one of the following factors (a sketch of the first two follows this list):
(1) Scattering factor (the HG formula): the Henyey-Greenstein formula states that the brightness of light scattered in different directions differs, while the total scattered brightness equals the brightness of the light incident on the particle, i.e., energy is conserved.
(2) Light transmission ratio factor: according to the Beer-Lambert law, the ratio of transmitted light intensity to incident light intensity can be described in terms of the density of the medium and the distance travelled through it.
(3) Occlusion factor: for example, shadow mapping.
By adjusting these factors, the computed volumetric light shows more variation, further improving realism.
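A minimal sketch of the first two factors; the formulas are the standard Henyey-Greenstein phase function and Beer-Lambert transmittance, with parameter names of our own choosing:

```csharp
using UnityEngine;

public static class ScatterFactors
{
    // Henyey-Greenstein phase function. g in (-1, 1) controls anisotropy:
    // g > 0 favours forward scattering, g = 0 is isotropic. It integrates to 1
    // over the sphere, which is the energy-conservation property noted above.
    public static float HenyeyGreenstein(float cosTheta, float g)
    {
        float g2 = g * g;
        return (1f - g2) /
               (4f * Mathf.PI * Mathf.Pow(1f + g2 - 2f * g * cosTheta, 1.5f));
    }

    // Beer-Lambert transmittance: the fraction of light surviving a path of
    // the given length through a medium of the given extinction density.
    public static float BeerLambert(float density, float distance)
    {
        return Mathf.Exp(-density * distance);
    }
}
```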
In an alternative embodiment, in the case of cloud occlusion, the illumination information of each voxel may be further adjusted according to the shadow strength (ShadowStrength) of the cloud layer. Fig. 3 is a flowchart of a volume light rendering method according to another embodiment of the present application. As shown in fig. 3, the step S25 includes:
step S31, when cloud cover shielding exists in the direction of the solar light source, obtaining the shadow intensity corresponding to the cloud cover in the direction of the solar light source;
step S32, calculating third illumination information corresponding to the voxel according to the shadow intensity and the first illumination information;
in step S33, volume illumination information is calculated from the third illumination information and the second illumination information.
In this embodiment, the corresponding voxel values in the solar shadow map are adjusted according to the shadow strength (Shadow Strength) cast by the cloud layer in the sunlight direction. The shadow strength takes values in [0,1], where a value of 0 indicates no shadow. The resulting volumetric illumination information then exhibits the shadowing effect of the cloud layer on the sunlight.
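A minimal sketch of step S32 under that convention (names assumed):

```csharp
using UnityEngine;

public static class CloudShadow
{
    // Attenuate a voxel's lit contribution (the first illumination
    // information) by the cloud layer's shadow strength, where 0 means no
    // shadow and 1 means fully shadowed; the result corresponds to the third
    // illumination information of step S32.
    public static Color ApplyShadowStrength(Color firstIllumination, float shadowStrength)
    {
        float s = Mathf.Clamp01(shadowStrength);
        return firstIllumination * (1f - s);
    }
}
```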
In another alternative embodiment, when the view frustum is discretized in step S22, the volume parameters of the three-dimensional texture image need to be obtained; these parameters may take default values or may be determined from the screen resolution. Fig. 4 is a flowchart of a volume light rendering method according to another embodiment of the present application. As shown in fig. 4, step S22 includes:
step S41, determining the volume parameter corresponding to the three-dimensional texture image according to the screen resolution;
in step S42, the view frustum is discretized into a three-dimensional texture image corresponding to the volume parameters.
In step S41, the width, height, and depth values in the volume parameters may each be determined in a different manner. Optionally, step S41 includes: determining the width value and the height value in the volume parameters according to the screen resolution; and determining the depth value in the volume parameters with a preset resolution lower than the screen resolution.
The width and height are at a high resolution related to the screen resolution, such as 640 or 1280 pixels, while the depth can use a lower resolution, such as 64 or 128 pixels.
After the volume parameters corresponding to the three-dimensional texture image are determined, the sampling step length for ray marching can be determined from them. Step S24 then includes: determining a sampling step length according to the volume parameters; and sampling along the rays emitted from the main viewpoint of the camera view angle with that step length.
Alternatively, the sampling step size (Step size) may be a fixed step (Constant Step); for example, a sampling step size of 5.
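A minimal Unity C# sketch of both points: sizing a frustum-aligned 3D texture from the screen resolution with a lower preset depth resolution, and the two ways of choosing the step length (the texture format and defaults are assumptions):

```csharp
using UnityEngine;

public static class FroxelGrid
{
    public static RenderTexture Create(int screenWidth, int screenHeight, int depthSlices = 64)
    {
        var desc = new RenderTextureDescriptor(
            screenWidth, screenHeight, RenderTextureFormat.ARGBHalf, 0)
        {
            dimension = UnityEngine.Rendering.TextureDimension.Tex3D,
            volumeDepth = depthSlices, // e.g. 64 or 128, lower than the screen resolution
            enableRandomWrite = true,
        };
        var tex = new RenderTexture(desc);
        tex.Create();
        return tex;
    }

    // Step length derived from the volume parameters: one step per depth
    // slice between the near and far planes.
    public static float StepFromVolume(float near, float far, int depthSlices)
        => (far - near) / depthSlices;

    // Or simply a constant step, e.g. 5 world units.
    public const float ConstantStep = 5f;
}
```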
Fixed-step ray marching is very inefficient: the ray is extended by the same length every time, so the case where a geometry fills only part of the space cannot be taken into account, and for a shader, more wasted loop iterations mean lower performance. The step size would also have to be made longer if real-time volumetric rendering is to be achieved.
Instead, a distance function can be used. For example, when a ray is cast from its origin, a distance function, such as a sphere-hit function, is first called with the origin as the center to compute the shortest distance from that point to the surface. If the distance is smaller than a set error threshold, the ray is considered to have hit the object, and the distance value is returned. If nothing is hit, the point advances along the ray by the just-computed distance to obtain a new center, and the distance is computed again, until a hit is detected. If too many steps are taken, exceeding the set maximum step count, or the travelled distance exceeds the maximum distance, the ray is considered to hit no object.
With this distance estimation, the step length of ray marching shrinks as the ray approaches an object; the number of steps needed for a ray to hit the rendered volume is greatly reduced, and both ray-marching efficiency and shader performance are improved.
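A minimal sketch of this distance-estimated marching (sphere tracing) against a single sphere; the distance function and the constants are illustrative assumptions:

```csharp
using UnityEngine;

public static class SphereTracer
{
    const float HitEpsilon = 0.001f; // the "set error value"
    const float MaxDistance = 1000f;
    const int MaxSteps = 64;

    // Signed distance from p to the sphere's surface.
    static float SphereDistance(Vector3 p, Vector3 center, float radius)
        => Vector3.Distance(p, center) - radius;

    // Returns the distance along the ray to the hit, or -1 if nothing is hit.
    public static float Trace(Vector3 origin, Vector3 dir, Vector3 center, float radius)
    {
        float travelled = 0f;
        for (int i = 0; i < MaxSteps; i++)
        {
            Vector3 p = origin + dir * travelled;
            float d = SphereDistance(p, center, radius);
            if (d < HitEpsilon) return travelled; // hit within tolerance
            travelled += d;                       // safe to advance this far
            if (travelled > MaxDistance) break;   // ray escaped the scene
        }
        return -1f; // exceeded max steps or max distance: no hit
    }
}
```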
In the simulation of volumetric light, real physical laws need to be considered, the most important being the influence of atmospheric scattering on light. Computing scattering satisfies the virtual scene's demand for continuity in time and space, greatly improves the scene's visual expressiveness and realism, and provides a realistic ambient light effect. However, if the illumination information affected by atmospheric scattering is computed inside the ray-marching loop, the impact on computational efficiency is large. To obtain the influence of atmospheric scattering on the illumination information while avoiding this cost, the following method is adopted.
Fig. 5 is a flowchart of a volume light rendering method according to another embodiment of the present application. As shown in fig. 5, the step S14 includes:
step S51, atmospheric scattering sampling is carried out on the rendering target, and scattering light ray information corresponding to each pixel of the rendering target is obtained;
and step S52, rendering the volume illumination information according to the scattered light information.
In this embodiment, since the volumetric cloud computation in the HDRP already includes the rendering result of the atmospherically scattered scene, the RenderTarget corresponding to the volumetric cloud in the HDRP can simply be sampled to directly obtain the illumination information after atmospheric scattering.
Due to computation error and the difference between the volumetric cloud's RenderTarget and the screen space, sampling the sky color may accidentally hit an opaque object in the scene. To solve this inaccurate sampling, the atmospheric scattering sampling of the render target in step S51 includes: when an edge pixel of an opaque object is sampled, sampling the sky color adjacent to that edge pixel and using the sky color as the color of the edge pixel.
Optionally, opaque objects in the scene may be identified from the depth map corresponding to the current frame. The camera's rendering mode is set to depth mode (DepthTextureMode.Depth), after which the depth map can be read through Unity's built-in camera depth texture. The depth map stores nonlinearly distributed depth values in the range [0,1] derived from normalized device coordinates (NDC), and opaque objects are rendered black in it. Based on the depth values in the depth map, opaque objects in the picture can be identified.
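A minimal sketch of enabling that depth texture in Unity (the component name is our own):

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class DepthModeSetup : MonoBehaviour
{
    void OnEnable()
    {
        // Ask Unity to render this camera's depth texture; shaders can then
        // sample it as the built-in _CameraDepthTexture.
        GetComponent<Camera>().depthTextureMode |= DepthTextureMode.Depth;
    }
}
```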
Through the above steps S51 to S52, the opaque object is effectively outlined, and the sky color around the opaque object is used as the color of its edge pixels, which effectively avoids errors in the sampling result.
Fig. 6 is a flowchart of a volume light rendering method according to another embodiment of the present application. As shown in fig. 6, in another alternative embodiment, the step S21 of obtaining the sun shadow map corresponding to the sunlight source includes:
step S61, obtaining a camera depth texture map obtained from the camera view angle and a light source depth texture map obtained from the sun light source direction;
and step S62, carrying out shadow collection calculation in the screen space according to the camera depth texture mapping and the light source depth texture mapping to obtain a sun shadow mapping.
In step S61, a depth camera is first created at the current camera, producing a depth texture map as observed from the current camera; a depth camera is then created at the solar light source, producing a depth texture map as observed from the solar light source. In step S62, a shadow collection calculation (Shadow Collector) is performed once in screen space to obtain the sun shadow map, i.e., the pixels that lie in shadow under the sunlight.
Fig. 7 is a flowchart of a volume light rendering method according to another embodiment of the present application. As shown in fig. 7, the above-mentioned step S62 shadow collection process includes the steps of:
step S71, determining a first depth value of each pixel in the camera depth texture map and a corresponding world space coordinate thereof;
step S72, converting the world space coordinate of the pixel into a light source space coordinate corresponding to the light source depth texture map;
step S73, comparing a second depth value corresponding to the light source space coordinate in the light source depth texture map with the first depth value;
step S74, when the pixel is determined to be located in the shadow according to the comparison result between the first depth value and the second depth value, obtaining a sun shadow map according to the pixel value of the pixel located in the shadow.
In steps S71 to S74, the world coordinates of each pixel in world space are reconstructed from the depth information; the world-space coordinates of each pixel are transformed into light source space, and the pixel's corresponding depth value within the light source depth texture map is determined. The pixel's depth in the camera depth texture map is then compared with its depth in the light source depth texture map; if the former is greater, the light source cannot reach the pixel, and the pixel is in shadow. The resulting sun shadow map thus contains all areas in screen space that are shadowed with respect to the sun's rays.
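A minimal CPU-side sketch of this screen-space collection in Unity C# (arrays stand in for the two depth texture maps; the matrix and depth conventions are assumptions):

```csharp
using UnityEngine;

public static class ShadowCollector
{
    // cameraDepth / lightDepth: per-pixel depths in [0,1]; invCameraViewProj
    // reconstructs world positions from screen coordinates plus depth;
    // lightViewProj maps world positions into the light's depth map.
    public static bool[,] Collect(float[,] cameraDepth, float[,] lightDepth,
                                  Matrix4x4 invCameraViewProj, Matrix4x4 lightViewProj)
    {
        int w = cameraDepth.GetLength(0), h = cameraDepth.GetLength(1);
        var inShadow = new bool[w, h];
        for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            // 1. Reconstruct the world position from the first depth value.
            float d1 = cameraDepth[x, y];
            var ndc = new Vector3(2f * x / w - 1f, 2f * y / h - 1f, 2f * d1 - 1f);
            Vector4 world = invCameraViewProj * new Vector4(ndc.x, ndc.y, ndc.z, 1f);
            world /= world.w;

            // 2. Transform into light source space and fetch the second depth value.
            Vector4 lclip = lightViewProj * world;
            var lndc = new Vector3(lclip.x, lclip.y, lclip.z) / lclip.w;
            int lx = Mathf.Clamp((int)((lndc.x * 0.5f + 0.5f) * lightDepth.GetLength(0)),
                                 0, lightDepth.GetLength(0) - 1);
            int ly = Mathf.Clamp((int)((lndc.y * 0.5f + 0.5f) * lightDepth.GetLength(1)),
                                 0, lightDepth.GetLength(1) - 1);
            float d2 = lightDepth[lx, ly];
            float depthInLight = lndc.z * 0.5f + 0.5f;

            // 3. Farther from the light than the recorded depth => in shadow.
            inShadow[x, y] = depthInLight > d2;
        }
        return inShadow;
    }
}
```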
In another optional embodiment, the method further comprises:
receiving an editing operation on a coverage map of the volumetric cloud in a volumetric cloud editor;
and adjusting the volumetric cloud model according to the editing operation.
In this embodiment, the volumetric cloud editor provides a GameView window in which the user can edit the volumetric cloud's coverage map in real time. This not only adjusts the rendering result of the volumetric cloud; based on the coverage map, the shadow of the whole volumetric cloud and the corresponding volumetric light effect can also be adjusted.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application.
Fig. 8 is a block diagram of a volume light rendering apparatus provided in an embodiment of the present application, which may be implemented as part of or all of an electronic device through software, hardware, or a combination of the two. As shown in fig. 8, the volume light rendering apparatus includes:
a creation module 81 for creating a high definition rendering pipeline;
a first rendering module 82 for rendering a volumetric cloud model to a rendering target in a high-definition rendering pipeline such that cloud layers are displayed to a screen space, wherein the volumetric cloud model is used to represent the cloud layers in a virtual scene;
the calculating module 83 is configured to calculate volume illumination information corresponding to each pixel point in the screen space;
and a second rendering module 84, configured to render in the high-definition rendering pipeline according to the volume illumination information, so that the volume light corresponding to the cloud layer is displayed to the screen space.
Optionally, the calculating module 83 includes:
the acquisition submodule is used for acquiring a view frustum corresponding to the camera view angle and a solar shadow map corresponding to the solar light source;
the discrete processing submodule is used for discretely processing the view frustum into a three-dimensional texture image;
the first calculation submodule is used for calculating first illumination information corresponding to each voxel in the three-dimensional texture image according to the sun shadow map;
the light stepping submodule is used for sampling according to light emitted by a main viewpoint of a camera visual angle to obtain second illumination information of voxels in the visual line direction of the camera visual angle;
and the second calculation submodule is used for calculating the volume illumination information according to the first illumination information and the second illumination information.
Optionally, the second computing submodule is configured to, when it is determined that cloud shielding exists in the solar light source direction, obtain a shadow intensity corresponding to the cloud in the solar light source direction; calculating third illumination information corresponding to the voxel according to the shadow intensity and the first illumination information; and calculating the volume illumination information according to the third illumination information and the second illumination information.
Optionally, the discrete processing sub-module is configured to determine a volume parameter corresponding to the three-dimensional texture image according to the screen resolution, and discretely process the view frustum into a three-dimensional texture image corresponding to the volume parameter;
the light ray stepping submodule is used for determining a sampling step length according to the volume parameter; and sampling according to the light emitted from the main viewpoint of the camera view angle by sampling step length.
Optionally, the discrete processing sub-module is configured to determine a width value and a height value of the volume parameter according to the screen resolution; depth values in the volume parameter are determined with a preset resolution lower than the screen resolution.
Optionally, the second rendering module 84 includes:
the sampling submodule is used for performing atmospheric scattering sampling on the rendering target to obtain scattered light information corresponding to each pixel of the rendering target;
and the rendering submodule is used for rendering the volume illumination information according to the scattered light information.
Optionally, the sampling submodule is configured to, when an edge pixel of an opaque object is sampled, sample the sky color adjacent to the edge pixel and use the sky color as the color of the edge pixel.
Optionally, the obtaining sub-module is configured to obtain a camera depth texture map obtained from a camera view angle and a light source depth texture map obtained from a solar light source direction; and carrying out shadow collection calculation in a screen space according to the camera depth texture mapping and the light source depth texture mapping to obtain a sun shadow mapping.
Optionally, the obtaining sub-module is further configured to determine a first depth value of each pixel in the camera depth texture map and a world space coordinate corresponding to the first depth value; converting the world space coordinates of the pixels into light source space coordinates corresponding to the light source depth texture map; comparing a second depth value corresponding to the light source space coordinate in the light source depth texture map with the first depth value; and when the pixel is determined to be positioned in the shadow according to the comparison result of the first depth value and the second depth value, obtaining the sun shadow map according to the pixel value of the pixel positioned in the shadow.
An embodiment of the present application further provides an electronic device, as shown in fig. 9, the electronic device may include: the system comprises a processor 1501, a communication interface 1502, a memory 1503 and a communication bus 1504, wherein the processor 1501, the communication interface 1502 and the memory 1503 complete communication with each other through the communication bus 1504.
A memory 1503 for storing a computer program;
the processor 1501, when executing the computer program stored in the memory 1503, implements the steps of the method embodiments described above.
The communication bus mentioned for the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following steps of the above-described method embodiments:
creating a high-definition rendering pipeline;
rendering a volume cloud model to a rendering target in a high-definition rendering pipeline such that cloud layers are displayed to a screen space, wherein the volume cloud model is used to represent the cloud layers in a virtual scene;
calculating volume illumination information corresponding to each pixel point in a screen space;
and rendering in a high-definition rendering pipeline according to the volume illumination information, so that the volume light corresponding to the cloud layer is displayed to a screen space.
Optionally, calculating volume illumination information corresponding to each pixel point in the screen space includes:
acquiring a view frustum corresponding to a camera view angle and a sun shadow map corresponding to a solar light source;
discretely processing the view frustum into a three-dimensional texture image;
calculating first illumination information corresponding to each voxel in the three-dimensional texture image according to the sun shadow map;
sampling according to light rays emitted by a main viewpoint of a camera view angle to obtain second illumination information of voxels in the view direction of the camera view angle;
and calculating the volume illumination information according to the first illumination information and the second illumination information.
Optionally, calculating the volume illumination information according to the first illumination information and the second illumination information includes:
when cloud layer shielding exists in the direction of the solar light source, acquiring the shadow intensity corresponding to the cloud layer in the direction of the solar light source;
calculating third illumination information corresponding to the voxel according to the shadow intensity and the first illumination information;
and calculating the volume illumination information according to the third illumination information and the second illumination information.
Optionally, discretely processing the view frustum into a three-dimensional texture image, including:
determining a volume parameter corresponding to the three-dimensional texture image according to the screen resolution;
discretely processing the view frustum into a three-dimensional texture image corresponding to the volume parameter;
sampling light rays emitted according to a main viewpoint of a camera view angle, comprising:
determining a sampling step length according to the volume parameter;
and sampling according to the light emitted from the main viewpoint of the camera view angle by sampling step length.
Optionally, determining a volume parameter corresponding to the three-dimensional texture image according to the screen resolution includes:
determining a width value and a height value in the volume parameter according to the screen resolution;
depth values in the volume parameter are determined with a preset resolution lower than the screen resolution.
Optionally, rendering is performed in a high-definition rendering pipeline according to the volume illumination information, and includes:
performing atmospheric scattering sampling on the render target to obtain scattering light information corresponding to each pixel of the render target;
and rendering the volume illumination information according to the scattered light information.
Optionally, the performing atmospheric scattering sampling on the render target includes:
when sampling an edge pixel of an opaque object, sampling a sky color adjacent to the edge pixel;
the sky color is used as the color of the edge pixels.
Optionally, obtaining a sun shadow map corresponding to the solar light source includes:
acquiring a camera depth texture mapping obtained from a camera visual angle and a light source depth texture mapping obtained from a solar light source direction;
and carrying out shadow collection calculation in a screen space according to the camera depth texture mapping and the light source depth texture mapping to obtain a sun shadow mapping.
Optionally, performing shadow collection calculation in a screen space according to the camera depth texture map and the light source depth texture map to obtain a sun shadow map, including:
determining a first depth value of each pixel in the camera depth texture map and a corresponding world space coordinate thereof;
converting the world space coordinates of the pixels into light source space coordinates corresponding to the light source depth texture map;
comparing a second depth value corresponding to the light source space coordinate in the light source depth texture map with the first depth value;
and when the pixel is determined to be positioned in the shadow according to the comparison result of the first depth value and the second depth value, obtaining the sun shadow map according to the pixel value of the pixel positioned in the shadow.
It should be noted that, for the above-mentioned apparatus, electronic device and computer-readable storage medium embodiments, since they are basically similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
It is further noted that, herein, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. A method of volumetric light rendering, comprising:
creating a high-definition rendering pipeline;
rendering a volumetric cloud model to a rendering target in the high-definition rendering pipeline such that cloud layers are displayed to a screen space, wherein the volumetric cloud model is used to represent the cloud layers in a virtual scene;
calculating volume illumination information corresponding to each pixel point in the screen space;
rendering in the high-definition rendering pipeline according to the volume illumination information, so that the volume light corresponding to the cloud layer is displayed to the screen space.
2. The method of claim 1, wherein the calculating the volume illumination information corresponding to each pixel point in the screen space comprises:
acquiring a view frustum corresponding to a camera view angle and a sun shadow map corresponding to a solar light source;
discretely processing the view frustum into a three-dimensional texture image;
calculating first illumination information corresponding to each voxel in the three-dimensional texture image according to the sun shadow map;
sampling according to light rays emitted by a main viewpoint of the camera view angle to obtain second illumination information of the voxel in the view line direction of the camera view angle;
and calculating the volume illumination information according to the first illumination information and the second illumination information.
3. The method of claim 2, wherein said calculating the volumetric lighting information from the first lighting information and the second lighting information comprises:
when the cloud layer shielding exists in the direction of the solar light source, acquiring the shadow intensity corresponding to the cloud layer in the direction of the solar light source;
calculating third illumination information corresponding to the voxel according to the shadow intensity and the first illumination information;
and calculating the volume illumination information according to the third illumination information and the second illumination information.
4. The method of claim 2, wherein discretely processing the view frustum into a three-dimensional texture image comprises:
determining a volume parameter corresponding to the three-dimensional texture image according to the screen resolution;
discretely processing the view frustum into a three-dimensional texture image corresponding to the volume parameter;
the sampling according to the light emitted from the main viewpoint of the camera view angle includes:
determining a sampling step length according to the volume parameter;
and sampling according to the light emitted by the main viewpoint of the camera view angle by the sampling step length.
5. The method according to claim 4, wherein the determining the volume parameter corresponding to the three-dimensional texture image according to the screen resolution comprises:
determining a width value and a height value in the volume parameter according to the screen resolution;
determining depth values in the volume parameter with a preset resolution lower than the screen resolution.
6. The method of claim 2, wherein the rendering in the high-definition rendering pipeline according to the volumetric illumination information comprises:
performing atmospheric scattering sampling on the render target to obtain scattering light information corresponding to each pixel of the render target;
and rendering the volume illumination information according to the scattered light information.
7. The method of claim 6, wherein the performing atmospheric scattering sampling on the render target comprises:
when an edge pixel of an opaque object is sampled, sampling sky colors adjacent to the edge pixel;
using the sky color as a color of the edge pixel.
8. The method of claim 2, wherein the obtaining a sun shadow map corresponding to a sun light source comprises:
acquiring a camera depth texture map obtained from the camera view angle and a light source depth texture map obtained from the solar light source direction;
and carrying out shadow collection calculation in the screen space according to the camera depth texture mapping and the light source depth texture mapping to obtain the sun shadow mapping.
9. The method of claim 8, wherein said performing a shadow collection calculation in said screen space from said camera depth texture map and said light source depth texture map to obtain said solar shadow map comprises:
determining a first depth value of each pixel in the camera depth texture map and a world space coordinate corresponding to the first depth value;
converting the world space coordinates of the pixels into light source space coordinates corresponding to the light source depth texture map;
comparing a second depth value corresponding to the light source space coordinate in the light source depth texture map with the first depth value;
and when the pixel is determined to be positioned in the shadow according to the comparison result of the first depth value and the second depth value, obtaining the sun shadow map according to the pixel value of the pixel positioned in the shadow.
10. A volumetric light rendering apparatus, comprising:
a creation module to create a high definition rendering pipeline;
a first rendering module to render a volumetric cloud model to a rendering target in the high-definition rendering pipeline such that clouds are displayed to a screen space, wherein the volumetric cloud model is to represent the clouds in a virtual scene;
the calculation module is used for calculating volume illumination information corresponding to each pixel point in the screen space;
and the second rendering module is used for rendering in the high-definition rendering pipeline according to the volume illumination information, so that the volume light corresponding to the cloud layer is displayed to the screen space.
11. An electronic device, comprising: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor, when executing the computer program, implementing the method steps of any of claims 1-9.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method steps of any one of claims 1 to 9.
CN202010747145.3A 2020-07-29 2020-07-29 Volume light rendering method and device, electronic equipment and storage medium Active CN111968215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010747145.3A CN111968215B (en) 2020-07-29 2020-07-29 Volume light rendering method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111968215A 2020-11-20
CN111968215B (en) 2024-03-22

Family

ID=73363605

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010747145.3A Active CN111968215B (en) 2020-07-29 2020-07-29 Volume light rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111968215B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104143205A (en) * 2013-05-11 2014-11-12 哈尔滨点石仿真科技有限公司 Method for achieving real-time rendering of large-scale realistic volumetric cloud
CN104091363A (en) * 2014-07-09 2014-10-08 无锡梵天信息技术股份有限公司 Real-time size cloud computing method based on screen space
CN105869106A (en) * 2016-04-27 2016-08-17 中国电子科技集团公司第二十八研究所 Improved method for drawing three-dimensional entity cloud
CN109544674A (en) * 2018-11-21 2019-03-29 北京像素软件科技股份有限公司 A kind of volume light implementation method and device
KR20200082601A (en) * 2018-12-31 2020-07-08 한국전자통신연구원 Apparatus and method for rendering multi-layered volumetric clouds

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200900B (en) * 2020-12-02 2021-02-26 成都完美时空网络技术有限公司 Volume cloud rendering method and device, electronic equipment and storage medium
CN112465941A (en) * 2020-12-02 2021-03-09 成都完美时空网络技术有限公司 Volume cloud processing method and device, electronic equipment and storage medium
CN112465941B (en) * 2020-12-02 2023-04-28 成都完美时空网络技术有限公司 Volume cloud processing method and device, electronic equipment and storage medium
CN112200900A (en) * 2020-12-02 2021-01-08 成都完美时空网络技术有限公司 Volume cloud rendering method and device, electronic equipment and storage medium
WO2022116659A1 (en) * 2020-12-02 2022-06-09 成都完美时空网络技术有限公司 Volumetric cloud rendering method and apparatus, and program and readable medium
CN112439196B (en) * 2020-12-11 2021-11-23 完美世界(北京)软件科技发展有限公司 Game light rendering method, device, equipment and storage medium
CN112439196A (en) * 2020-12-11 2021-03-05 完美世界(北京)软件科技发展有限公司 Game light rendering method, device, equipment and storage medium
CN112691378A (en) * 2020-12-23 2021-04-23 完美世界(北京)软件科技发展有限公司 Image processing method, apparatus and readable medium
CN112669432A (en) * 2020-12-23 2021-04-16 北京像素软件科技股份有限公司 Volume cloud rendering method and device, electronic equipment and storage medium
CN112691378B (en) * 2020-12-23 2022-06-07 完美世界(北京)软件科技发展有限公司 Image processing method, apparatus and readable medium
CN113144613B (en) * 2021-05-08 2024-06-21 成都乘天游互娱网络科技有限公司 Model-based method for generating volume cloud
CN113144613A (en) * 2021-05-08 2021-07-23 成都乘天游互娱网络科技有限公司 Model-based volume cloud generation method
CN113421199A (en) * 2021-06-23 2021-09-21 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113421199B (en) * 2021-06-23 2024-03-12 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN113283543A (en) * 2021-06-24 2021-08-20 北京优锘科技有限公司 WebGL-based image projection fusion method, device, storage medium and equipment
CN113470161A (en) * 2021-06-30 2021-10-01 完美世界(北京)软件科技发展有限公司 Illumination determination method for volume cloud in virtual environment, related equipment and storage medium
CN113936097A (en) * 2021-09-30 2022-01-14 完美世界(北京)软件科技发展有限公司 Volume cloud rendering method and device and storage medium
CN113936097B (en) * 2021-09-30 2023-10-20 完美世界(北京)软件科技发展有限公司 Volume cloud rendering method, device and storage medium
CN114170359A (en) * 2021-11-03 2022-03-11 完美世界(北京)软件科技发展有限公司 Volume fog rendering method, device and equipment and storage medium
CN114998504B (en) * 2022-07-29 2022-11-15 杭州摩西科技发展有限公司 Two-dimensional image illumination rendering method, device and system and electronic device
CN114998504A (en) * 2022-07-29 2022-09-02 杭州摩西科技发展有限公司 Two-dimensional image illumination rendering method, device and system and electronic device
CN115830208A (en) * 2023-01-09 2023-03-21 腾讯科技(深圳)有限公司 Global illumination rendering method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111968215B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
CN111968215B (en) Volume light rendering method and device, electronic equipment and storage medium
CN111968216B (en) Volume cloud shadow rendering method and device, electronic equipment and storage medium
US11024077B2 (en) Global illumination calculation method and apparatus
CN111508052B (en) Rendering method and device of three-dimensional grid body
CN107341853B (en) Virtual-real fusion method and system for super-large virtual scene and dynamic screen shooting
JP5531093B2 (en) How to add shadows to objects in computer graphics
US20050041024A1 (en) Method and apparatus for real-time global illumination incorporating stream processor based hybrid ray tracing
CN108805971B (en) Ambient light shielding method
CN113674389B (en) Scene rendering method and device, electronic equipment and storage medium
US11232628B1 (en) Method for processing image data to provide for soft shadow effects using shadow depth information
CN114419240B (en) Illumination rendering method and device, computer equipment and storage medium
US8854392B2 (en) Circular scratch shader
CN111968214B (en) Volume cloud rendering method and device, electronic equipment and storage medium
Widmer et al. An adaptive acceleration structure for screen-space ray tracing
US20230230311A1 (en) Rendering Method and Apparatus, and Device
CN113436343A (en) Picture generation method and device for virtual studio, medium and electronic equipment
JP2020198066A (en) Systems and methods for augmented reality applications
US20230394740A1 (en) Method and system providing temporary texture application to enhance 3d modeling
CA3199390A1 (en) Systems and methods for rendering virtual objects using editable light-source parameter estimation
CN115713584A (en) Method, system, device and storage medium for rendering volume cloud based on directed distance field
US20230274493A1 (en) Direct volume rendering apparatus
KR20230013099A (en) Geometry-aware augmented reality effects using real-time depth maps
US20230090732A1 (en) System and method for real-time ray tracing in a 3d environment
CN118172459A (en) Oblique photography model rendering method and rendering system
CN113658318A (en) Data processing method and system, training data generation method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant