CN111968215B - Volume light rendering method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111968215B
CN111968215B
Authority
CN
China
Prior art keywords
volume
rendering
illumination information
light
light source
Prior art date
Legal status
Active
Application number
CN202010747145.3A
Other languages
Chinese (zh)
Other versions
CN111968215A (en)
Inventor
彭通
周陶生
王鹏
徐丹
Current Assignee
Perfect World Beijing Software Technology Development Co Ltd
Original Assignee
Perfect World Beijing Software Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Perfect World Beijing Software Technology Development Co Ltd filed Critical Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202010747145.3A
Publication of CN111968215A
Application granted
Publication of CN111968215B
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G06T15/04 Texture mapping
    • G06T15/50 Lighting effects
    • G06T15/60 Shadow generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

The application relates to a volume light rendering method and device, an electronic device and a storage medium. The method comprises the following steps: creating a high definition rendering pipeline; rendering a volume cloud model to a rendering target in the high definition rendering pipeline so that a cloud layer is displayed in screen space, wherein the volume cloud model is used to represent the cloud layer in the virtual scene; calculating volume illumination information corresponding to each pixel point in the screen space; and rendering in the high definition rendering pipeline according to the volume illumination information, so that the volume light corresponding to the cloud layer is displayed in the screen space. With this technical scheme, rendering of the volume cloud is realized in the HDRP, so that a volume cloud effect with a high degree of visual fidelity is produced in the scene, interaction with the volume light is added, and the texture of the picture is greatly improved. With the volume light effect added, the stereoscopic impression and layering of the image become richer, and the realism of the scene is improved.

Description

Volume light rendering method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular to a volume light rendering method and device, an electronic device, and a storage medium.
Background
When a beam of light passes through a colloid, a bright "path" can be observed in the colloid from a direction perpendicular to the incident light, owing to the scattering of the light by the colloid particles; this phenomenon is called the Tyndall effect. In real-time rendering, such an effect is usually referred to as volume light (Volumetric Light). When sunlight passes through gaps in clouds, light shafts, also known as God Rays, are formed. Compared with conventional in-game lighting, lighting with this effect gives a visual sense of space, and thus a more realistic feeling, to game players.
Therefore, how to simulate the volume light effect corresponding to a volume cloud is a technical problem to be solved in the prior art.
Disclosure of Invention
In order to solve the above technical problems, or at least partially solve them, embodiments of the present application provide a volume light rendering method and device, an electronic device, and a storage medium.
According to an aspect of the embodiments of the present application, there is provided a volumetric light rendering method, including:
creating a high definition rendering pipeline;
rendering a volumetric cloud model to a rendering target in the high definition rendering pipeline such that a cloud layer is displayed to a screen space, wherein the volumetric cloud model is used to represent the cloud layer in a virtual scene;
calculating volume illumination information corresponding to each pixel point in the screen space;
and rendering in the high-definition rendering pipeline according to the volume illumination information, so that the volume light corresponding to the cloud layer is displayed to the screen space.
Optionally, the calculating the volumetric illumination information corresponding to each pixel point in the screen space includes:
acquiring a view cone corresponding to a camera view angle and a sun shadow map corresponding to a sun light source;
performing discrete processing on the view cone into a three-dimensional texture image;
calculating first illumination information corresponding to each voxel in the three-dimensional texture image according to the sun shadow map;
sampling according to the light rays emitted by the main view point of the camera view angle to obtain second illumination information of the voxels in the direction of the view line of the camera view angle;
and calculating the volume illumination information according to the first illumination information and the second illumination information.
Optionally, the calculating the volume illumination information according to the first illumination information and the second illumination information includes:
when it is determined that the cloud layer shielding exists in the direction of the solar light source, the shadow intensity corresponding to the cloud layer in the direction of the solar light source is obtained;
calculating third illumination information corresponding to the voxel according to the shadow intensity and the first illumination information;
and calculating the volume illumination information according to the third illumination information and the second illumination information.
Optionally, the discrete processing of the view cone into a three-dimensional texture image includes:
determining a volume parameter corresponding to the three-dimensional texture image according to the screen resolution;
the view cone is processed into a three-dimensional texture image corresponding to the volume parameter in a discrete mode;
the sampling the light rays emitted from the main view point of the camera view angle comprises:
determining a sampling step according to the volume parameter;
and sampling according to the light rays emitted by the main view point of the camera view angle by the sampling step length.
Optionally, the determining, according to the screen resolution, the volume parameter corresponding to the three-dimensional texture image includes:
determining a width value and a height value in the volume parameter according to the screen resolution;
and determining the depth value in the volume parameter by adopting a preset resolution lower than the screen resolution.
Optionally, the rendering in the high-definition rendering pipeline according to the volume illumination information includes:
performing atmospheric scattering sampling on the rendering target to obtain scattered light information corresponding to each pixel of the rendering target;
and rendering the volume illumination information according to the scattered light information.
Optionally, the performing atmospheric scattering sampling on the rendering target includes:
when sampling edge pixels of an opaque object, sampling sky colors adjacent to the edge pixels;
the sky color is used as the color of the edge pixels.
Optionally, the obtaining a sun shadow map corresponding to the sun light source includes:
acquiring a camera depth texture map obtained from the camera view angle and a light source depth texture map obtained from the solar light source direction;
and performing shadow collection calculation in the screen space according to the camera depth texture map and the light source depth texture map to obtain the sun shadow map.
Optionally, the performing a shadow collection calculation in the screen space according to the camera depth texture map and the light source depth texture map to obtain the sun shadow map includes:
determining a first depth value of each pixel in the camera depth texture map and corresponding world space coordinates thereof;
converting world space coordinates of the pixels into light source space coordinates corresponding to the light source depth texture map;
comparing a second depth value corresponding to the light source space coordinate in the light source depth texture map with the first depth value;
and when the pixel is determined to be positioned in the shadow according to the comparison result of the first depth value and the second depth value, obtaining the sun shadow map according to the pixel value of the pixel positioned in the shadow.
According to another aspect of an embodiment of the present application, there is provided a volume light rendering device including:
a creation module for creating a high definition rendering pipeline;
a first rendering module for rendering a volumetric cloud model to a rendering target in the high definition rendering pipeline such that a cloud layer is displayed to a screen space, wherein the volumetric cloud model is used to represent the cloud layer in a virtual scene;
the calculation module is used for calculating the volume illumination information corresponding to each pixel point in the screen space;
and the second rendering module is used for rendering in the high-definition rendering pipeline according to the volume illumination information, so that the volume light corresponding to the cloud layer is displayed to the screen space.
According to another aspect of an embodiment of the present application, there is provided an electronic device including: the device comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the above-mentioned method steps when executing the computer program.
According to another aspect of the embodiments of the present application, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-mentioned method steps.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
the rendering of the volume cloud is realized in the HDRP, so that the volume cloud effect of high-level visual fidelity is generated in the scene, and the interaction with the volume light is increased in the process of simulating the volume cloud, so that the method plays a great role in improving the picture texture. The volume light effect is added, the stereoscopic impression and the hierarchy of the image are richer, and the real receptivity of the scene is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart of a method for rendering volume light according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for volumetric light rendering according to another embodiment of the present application;
FIG. 3 is a flow chart of a method for volumetric light rendering according to another embodiment of the present application;
FIG. 4 is a flow chart of a method for volumetric light rendering according to another embodiment of the present application;
FIG. 5 is a flow chart of a method for volumetric light rendering according to another embodiment of the present application;
FIG. 6 is a flow chart of a method for volumetric light rendering according to another embodiment of the present application;
FIG. 7 is a flow chart of a method for volumetric light rendering according to another embodiment of the present application;
fig. 8 is a block diagram of a volumetric light rendering device according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
A volume cloud (Volumetric Cloud) is a cloud in a game rendered with the graphics engine to simulate the semitransparent, irregular appearance of a real cloud.
To improve the visual performance of its engine, Unity has introduced a programmable rendering pipeline that provides advanced visual fidelity and is suitable for PC and console platforms: the High Definition Render Pipeline (HDRP for short). Relative to the conventional rendering pipeline, HDRP can be fully customized through C# scripts. At present, HDRP is still at a trial stage and lacks implementations of many specific rendering effects. In this application, rendering of a high-definition, realistic volume cloud and of the corresponding volume light is realized based on the HDRP.
The following first describes the volume-cloud-based volume light rendering method provided by the embodiments of the present invention.
Fig. 1 is a flowchart of a volumetric light rendering method according to an embodiment of the present application. As shown in fig. 1, the method comprises the steps of:
step S11, creating a high-definition rendering pipeline;
step S12, rendering a volume cloud model to a rendering target in a high-definition rendering pipeline so that a cloud layer is displayed in a screen space, wherein the volume cloud model is used for representing the cloud layer in a virtual scene;
step S13, calculating volume illumination information corresponding to each pixel point in the screen space;
and step S14, rendering is carried out in a high-definition rendering pipeline according to the volume illumination information, so that the volume light corresponding to the cloud layer is displayed in a screen space.
In this embodiment, rendering of the volume cloud is realized in the HDRP, so that a volume cloud effect with a high degree of visual fidelity is produced in the scene, and in the process of simulating the volume cloud, interaction with the volume light is added, which greatly improves the texture of the picture. With the volume light effect added, the stereoscopic impression and layering of the image become richer, and the realism of the scene is improved.
The above steps S11 to S14 are described in detail below.
In step S11, there are two ways of creating an HDRP project: upgrading an existing project to an HDRP project, or creating a new HDRP project. In this embodiment, either method can be adopted.
In step S12, a volume cloud model is first added to the HDRP. The specific operation includes: after turning on Volumetric Lighting, adding a volume cloud model under the Volume framework.
Optionally, the volume cloud model in this embodiment is a model obtained in advance by simulation using a cloud simulation technique. Cloud simulation methods include, but are not limited to, the following:
(1) Cloud modeling techniques based on physical methods, such as particle systems, bubble modeling, or voxel modeling. For example, a cellular automaton algorithm is used to simulate the physical evolution of the volume cloud;
(2) Cloud simulation techniques based on existing empirical models, such as texture mapping methods or noise function methods. For example, a three-dimensional volume cloud model is constructed with a Perlin noise function; after a time dimension is added, the generation and disappearance of particles are controlled according to the number of frames the program has run, realizing the physical evolution of the volume cloud (a sketch of this approach follows this list).
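As a hedged illustration of the noise-function approach, the following C# sketch builds a time-varying density field from simple value noise; ValueNoise is only a stand-in for the Perlin noise named above, and every name and constant is illustrative rather than taken from the disclosure.

```csharp
using System;
using System.Numerics;

// Minimal sketch: a noise-based cloud density field with a time dimension.
// ValueNoise is an illustrative stand-in for Perlin noise; all constants
// are tunable assumptions, not values from the patent.
static class CloudDensity
{
    // Hash a 3D lattice point to a pseudo-random value in [0, 1).
    static float Hash(int x, int y, int z)
    {
        unchecked
        {
            uint h = (uint)x * 374761393u + (uint)y * 668265263u + (uint)z * 2246822519u;
            h = (h ^ (h >> 13)) * 1274126177u;
            return (h ^ (h >> 16)) / 4294967296f;
        }
    }

    static float Smooth(float t) => t * t * (3f - 2f * t);

    // Trilinearly interpolated lattice noise in [0, 1).
    static float ValueNoise(Vector3 p)
    {
        int x0 = (int)MathF.Floor(p.X), y0 = (int)MathF.Floor(p.Y), z0 = (int)MathF.Floor(p.Z);
        float fx = Smooth(p.X - x0), fy = Smooth(p.Y - y0), fz = Smooth(p.Z - z0);
        float v = 0f;
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                for (int k = 0; k < 2; k++)
                {
                    float w = (i == 0 ? 1f - fx : fx) * (j == 0 ? 1f - fy : fy) * (k == 0 ? 1f - fz : fz);
                    v += w * Hash(x0 + i, y0 + j, z0 + k);
                }
        return v;
    }

    // Several octaves of noise; time drifts the field so the cloud evolves
    // with the number of frames the program has run.
    public static float Density(Vector3 p, float time)
    {
        Vector3 q = p + new Vector3(time * 0.05f, 0f, time * 0.02f); // wind drift
        float d = 0f, amp = 0.5f, freq = 1f;
        for (int o = 0; o < 4; o++)
        {
            d += amp * ValueNoise(q * freq);
            amp *= 0.5f;
            freq *= 2f;
        }
        return MathF.Max(0f, d - 0.4f); // coverage threshold carves the cloud shape
    }
}
```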
The volume cloud model is then rendered to a rendering target in the HDRP, so that the rendered cloud layer is displayed on the screen. A rendering target (render target) is a video buffer used for rendering pixels. In this step, the volume cloud model may be rendered to the default rendering target, i.e. the back buffer, which is physically a block of video memory containing the information to be drawn in the next frame. A new rendering target may also be created, for example using a RenderTarget2D class, reserving a new region of video memory for drawing the volume cloud. Alternatively, the parts of the image may each be drawn into different rendering targets and then combined to form the final back-buffer data. The graphics card draws the pixels of the scene by reading the data in the rendering target, for example through an Effect class, thereby displaying the cloud layer and its shadows on the screen.
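A minimal Unity-style sketch of rendering the cloud into its own target and copying it to the output (the RenderTarget2D and Effect classes mentioned above are XNA-style counterparts; the camera and component names here are assumptions, and the plain copy stands in for a real blended composite):

```csharp
using UnityEngine;

// Sketch: draw the volume cloud with its own camera into a dedicated
// RenderTexture, then copy it over the output. "cloudCamera" is assumed;
// a real composite would blend with a material rather than plain-copy.
public class CloudRenderTarget : MonoBehaviour
{
    public Camera cloudCamera; // camera that sees only the volume cloud layer
    RenderTexture cloudTarget;

    void OnEnable()
    {
        // Reserve a region of video memory for drawing the volume cloud.
        cloudTarget = new RenderTexture(Screen.width, Screen.height, 24);
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        cloudCamera.targetTexture = cloudTarget;
        cloudCamera.Render();               // draw the cloud into its own target
        cloudCamera.targetTexture = null;

        Graphics.Blit(src, dst);            // the rest of the scene
        Graphics.Blit(cloudTarget, dst);    // plain copy of the cloud layer on top
    }

    void OnDisable()
    {
        if (cloudTarget != null) cloudTarget.Release();
    }
}
```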
In step S13, the HDRP provides a Volumetric Lighting component. After the "Volume" option that controls global effects is checked in the HDRP, a Density Volume is created to enclose the whole region to be wrapped, the volumetric fog function is enabled in the scene settings, and a light source is added inside the wrapped region, so that a volume light effect can be added to the scene.
In step S14, volume light is rendered according to the calculated volume illumination information, and a volume light effect corresponding to the cloud layer is displayed in the scene.
The process of calculating the volume illumination information in step S13 above is described in detail below. In this embodiment, the volume illumination is implemented based on the sun shadow map (Shadow Map) of the sun light source and on ray stepping (Ray Marching).
Fig. 2 is a flowchart of a volumetric light rendering method according to another embodiment of the present application. As shown in fig. 2, the step S13 includes the steps of:
Step S21, acquiring a view cone corresponding to a camera view angle and a sun shadow map corresponding to a sun light source;
Step S22, performing discrete processing on the view cone into a three-dimensional texture image;
step S23, calculating first illumination information corresponding to each voxel in the three-dimensional texture image according to the sun shadow map;
step S24, sampling according to the light rays emitted by the main view point of the camera view angle, and obtaining second illumination information of voxels in the view direction of the camera view angle;
step S25, calculating volume illumination information according to the first illumination information and the second illumination information.
In the above steps S21 to S23, the view cone of the camera space is projected into the space represented by a three-dimensional texture, and each voxel in the three-dimensional texture stores its current spatial information. Based on the sun shadow map, it can be determined whether each voxel in the three-dimensional texture space is occluded, i.e. whether it is illuminated. Specifically, the world coordinates of a voxel point (obtained in a fragment shader, or by back-projecting the voxel point's depth) are multiplied by the view-projection matrix (viewProjectionMatrix) of the light source to obtain the voxel point's uv texture coordinates in the shadow map, and its depth is compared with the depth recorded in the shadow map: if the depth recorded in the shadow map is greater than or equal to that of the point, the point is not in shadow and should be illuminated; otherwise, the point is occluded and should not be illuminated. In this way, the illumination information corresponding to each voxel can be obtained based on the sun shadow map.
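A minimal sketch of this per-voxel shadow test, assuming the light source's view-projection matrix and a shadow-map sampler are available; the delegate signature and the OpenGL-style depth range are assumptions, not part of the disclosure.

```csharp
using System;
using System.Numerics;

// Sketch of the per-voxel shadow test described above.
static class VoxelShadowTest
{
    // worldPos: world coordinates of the voxel point.
    // lightViewProj: view-projection matrix (viewProjectionMatrix) of the light source.
    // sampleShadowMap: returns the depth recorded by the shadow map at (u, v).
    public static bool IsLit(Vector3 worldPos, Matrix4x4 lightViewProj,
                             Func<float, float, float> sampleShadowMap)
    {
        // Transform the voxel's world position into the light's clip space.
        Vector4 clip = Vector4.Transform(new Vector4(worldPos, 1f), lightViewProj);
        Vector3 ndc = new Vector3(clip.X, clip.Y, clip.Z) / clip.W;

        // Map NDC to uv texture coordinates and a comparable depth in [0, 1]
        // (an OpenGL-style [-1, 1] NDC range is assumed; conventions vary per API).
        float u = ndc.X * 0.5f + 0.5f;
        float v = ndc.Y * 0.5f + 0.5f;
        float voxelDepth = ndc.Z * 0.5f + 0.5f;

        // Shadow-map depth >= voxel depth: nothing closer occludes it, so it is lit.
        return sampleShadowMap(u, v) >= voxelDepth;
    }
}
```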
In step S24, Ray Marching is performed in each direction through the main viewpoint of the camera to obtain the illumination information in the current camera sight direction. Specifically, starting from the main viewpoint, the ray is advanced one point at a time, the brightness at each point is sampled, and the scattered brightness at all the sample points passed is summed to obtain the color of the voxel; the emitted brightness is inversely proportional to the square of the distance to the light source, and the scattered brightness is accumulated iteratively through the medium.
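A simplified sketch of such a march along one view ray; the isLit and density callbacks are illustrative stand-ins for the shadow test above and a cloud density field, and the step length and count are assumptions.

```csharp
using System;
using System.Numerics;

// Sketch of the march along one ray through the camera's main viewpoint.
static class RayMarchSketch
{
    public static Vector3 March(Vector3 origin, Vector3 dir, Vector3 lightPos,
                                Vector3 lightColor, float stepLen, int steps,
                                Func<Vector3, bool> isLit, Func<Vector3, float> density)
    {
        Vector3 color = Vector3.Zero;
        for (int i = 0; i < steps; i++)
        {
            Vector3 p = origin + dir * (stepLen * (i + 1)); // advance one point at a time
            if (!isLit(p)) continue;                        // occluded samples add nothing

            // Brightness falls off with the square of the distance to the light source.
            float r2 = (lightPos - p).LengthSquared();
            color += lightColor * (density(p) * stepLen / MathF.Max(r2, 1e-4f));
        }
        return color; // sum of the scattered brightness over all passed sample points
    }
}
```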
In the step S25, the first illumination information and the second illumination information of each voxel are combined, so that the illumination result of the voxel after the sun rays are blocked can be obtained, and a volumetric light effect is formed.
In this embodiment, the illumination information of each voxel point in the three-dimensional texture is calculated based on the sun shadow map and combined with the illumination information of each voxel in the sight direction obtained by Ray Marching, so as to obtain the illumination of the voxels in three-dimensional space after the sun's rays are occluded. This forms the volume light occluded by the cloud layer, i.e. the God Ray, which gives the viewer a visual sense of space and a more realistic perception.
In addition, in the basic Ray Marching computation of illumination information, the scattered light intensity is the same in all directions and never varies, which is not realistic enough; moreover, the volume light is never occluded. Therefore, optionally, when calculating the scattered light intensity, the intensity should not be attenuated only by the distance to the light source, but should also be multiplied by at least one of the following factors:
(1) Scattering factor (the Henyey-Greenstein (HG) phase function): the scattered intensity should differ in each direction, and the sum of the scattered intensities should equal the intensity of the light beam hitting the particle, i.e. energy should be conserved.
(2) Transmittance factor: the ratio of transmitted to incident light intensity can be described by the Beer-Lambert law in terms of the density of the medium and the distance to the light source.
(3) Shadow factor: a shadow algorithm such as Shadow Mapping.
Adjusted by these factors, the computed volume light shows more variation, and the realism is further improved (see the sketch below).
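The first two factors have standard closed forms; the following sketch writes out the Henyey-Greenstein phase function and Beer-Lambert transmittance. The parameters g, sigma and the shadow term come from outside this snippet and are assumptions.

```csharp
using System;

// The closed-form scattering factors. cosTheta is the cosine of the angle
// between the view ray and the light direction; g (anisotropy) and sigma
// (extinction coefficient) are free parameters.
static class ScatterFactors
{
    // (1) Henyey-Greenstein phase function: direction-dependent scattering
    // that integrates to 1 over the sphere, i.e. energy is conserved.
    public static float PhaseHG(float cosTheta, float g)
    {
        float g2 = g * g;
        float denom = 1f + g2 - 2f * g * cosTheta;
        return (1f - g2) / (4f * MathF.PI * MathF.Pow(denom, 1.5f));
    }

    // (2) Beer-Lambert law: ratio of transmitted to incident intensity as a
    // function of the medium's density (sigma) and the distance travelled.
    public static float Transmittance(float sigma, float distance)
        => MathF.Exp(-sigma * distance);

    // Per-sample contribution with all three factors; 'shadow' in [0, 1] is
    // factor (3), e.g. the result of a Shadow Mapping lookup.
    public static float ScatteredIntensity(float intensity, float cosTheta,
                                           float g, float sigma, float distance, float shadow)
        => intensity * PhaseHG(cosTheta, g) * Transmittance(sigma, distance) * shadow;
}
```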
In an alternative embodiment, the illumination information of each voxel may be further adjusted according to the shadow intensity (shadow strength) of the cloud in case of cloud occlusion. Fig. 3 is a flowchart of a volumetric light rendering method according to another embodiment of the present application. As shown in fig. 3, the step S25 includes:
step S31, when it is determined that cloud cover exists in the direction of the solar light source, the shadow intensity corresponding to the cloud cover in the direction of the solar light source is obtained;
step S32, calculating third illumination information corresponding to the voxels according to the shadow intensity and the first illumination information;
and step S33, calculating volume illumination information according to the third illumination information and the second illumination information.
In this embodiment, the corresponding voxel values in the sun shadow map are adjusted according to the shadow strength cast by the cloud layer in the sunlight direction. The value range of the shadow strength is [0,1]; a value of 0 indicates no shadow. The volume illumination information finally obtained can thus show the shadowing effect of the cloud layer on sunlight.
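The text does not fix the exact blend between the shadow strength and the first illumination information, so the linear attenuation below is only one plausible reading.

```csharp
static class CloudShadow
{
    // shadowStrength in [0, 1]; 0 means no shadow, so the light passes unchanged.
    // The linear blend is an assumed form; the patent does not give the formula.
    public static float ApplyCloudShadow(float firstIllumination, float shadowStrength)
        => firstIllumination * (1f - shadowStrength);
}
```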
In another alternative embodiment, the discrete processing of the view cone in step S22 requires a volume parameter for the three-dimensional texture image; the volume parameter may be a default value or may be determined according to the screen resolution. Fig. 4 is a flowchart of a volumetric light rendering method according to another embodiment of the present application. As shown in fig. 4, step S22 includes:
step S41, determining volume parameters corresponding to the three-dimensional texture image according to the screen resolution;
and S42, performing discrete processing on the view cone into a three-dimensional texture image corresponding to the volume parameter.
In step S41, the width value, the height value, and the depth value in the volume parameter may be determined in different manners, respectively. Optionally, step S41 includes: determining a width value and a height value in the volume parameter according to the screen resolution; depth values in the volume parameter are determined using a preset resolution lower than the screen resolution.
Here the width and height values use a high resolution associated with the screen resolution, such as 640 or 1280 pixels, while the depth value may use a lower resolution, such as 64 or 128 pixels (a sketch follows).
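A small sketch of this choice of volume parameters; the default depth of 64 is one of the example values above, not a prescribed constant.

```csharp
// Width/height follow the screen resolution; depth uses a lower, fixed
// resolution along the view direction.
readonly struct VolumeParams
{
    public readonly int Width, Height, Depth;

    public VolumeParams(int screenWidth, int screenHeight, int depth = 64)
    {
        Width = screenWidth;   // e.g. 1280
        Height = screenHeight; // e.g. 640
        Depth = depth;         // much coarser than the screen resolution
    }
}
```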
After the volume parameters corresponding to the three-dimensional texture image are determined, the sampling step length for Ray Marching can be determined according to the volume parameters. Step S24 then includes: determining a sampling step length according to the volume parameters; and sampling, with this sampling step length, along the rays emitted from the main viewpoint of the camera view angle.
Optionally, the sampling step size (Step Size) may be a fixed step (Constant Step), for example a step size of 5.
Ray Marching with a fixed step is, however, very inefficient: the ray is lengthened by the same amount every time, regardless of how the geometry fills the volume, and for a shader an increase in the number of steps means a decrease in performance. If real-time volume rendering is to be achieved, the step size has to be increased.
For example, a ray is emitted from its origin, and a distance function, such as a sphere-hit function, is first called with that point as the center to compute the shortest distance from the point to the sphere surface. If this distance is smaller than a set error threshold, the ray is considered to have hit the object, and the distance value is returned. If no hit is determined, the ray advances along its direction by the distance just computed to obtain a new center, and the distance is computed again, continuing until a hit is determined. If too many steps are taken, exceeding the set maximum number of steps, or the marched distance exceeds the maximum distance, the ray is considered unable to hit any object.
With such distance estimation, the step length of Ray Marching shrinks as the ray approaches objects, which greatly reduces the number of steps required for a ray to hit the rendered volume and improves both the efficiency of Ray Marching and the performance of the shader.
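A runnable sketch of this distance-function march, with a sphere signed-distance function standing in for the sphere-hit function named above; the thresholds and limits are illustrative.

```csharp
using System;
using System.Numerics;

// Sketch of the adaptive march: each step advances by the shortest distance
// to the surface, so the step length shrinks as the ray approaches geometry.
static class SphereTracing
{
    // Signed distance from p to a sphere surface (stand-in for the sphere-hit function).
    static float SphereSdf(Vector3 p, Vector3 center, float radius)
        => (p - center).Length() - radius;

    // Returns the hit distance along the ray, or null if no hit is found.
    public static float? Trace(Vector3 origin, Vector3 dir, Vector3 center, float radius,
                               int maxSteps = 128, float maxDist = 1000f, float eps = 1e-3f)
    {
        float t = 0f;
        for (int i = 0; i < maxSteps; i++)
        {
            float d = SphereSdf(origin + dir * t, center, radius);
            if (d < eps) return t;   // within the error threshold: the object is hit
            t += d;                  // advance by the distance just computed
            if (t > maxDist) break;  // marched past the maximum distance
        }
        return null;                 // step or distance budget exceeded: no hit
    }
}
```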
In the simulation of volume light, real physical laws need to be considered, and the most important point is the influence of atmospheric scattering on light. Scattering computation can satisfy the continuous time-space requirements of a virtual scene, greatly improve the visual expressiveness and realism of the scene, and provide convincing ambient light effects. However, if the illumination affected by atmospheric scattering were computed inside the Ray Marching pass, the computational efficiency would be greatly reduced. In order to account for the influence of atmospheric scattering on the illumination information without hurting computational efficiency, the following method is adopted.
Fig. 5 is a flowchart of a volumetric light rendering method according to another embodiment of the present application. As shown in fig. 5, the step S14 includes:
step S51, performing atmospheric scattering sampling on the rendering target to obtain scattered light information corresponding to each pixel of the rendering target;
and step S52, rendering the volume illumination information according to the scattered light information.
In this embodiment, since the computation of the volume cloud in the HDRP already includes the rendering result of the atmospherically scattered scene, the RenderTarget corresponding to the volume cloud in the HDRP can simply be sampled, so that the illumination information after atmospheric scattering is obtained directly.
Due to calculation errors and the difference in size between the volume cloud's corresponding RenderTarget and the screen space, the sky may be sampled onto opaque objects in the scene when the sky color is sampled. To solve this problem of inaccurate sampling, the atmospheric scattering sampling of the rendering target in step S51 includes: when sampling edge pixels of an opaque object, sampling the sky color adjacent to the edge pixels, and using the sky color as the color of the edge pixels.
Alternatively, opaque objects in the scene may be identified through the depth map corresponding to the current picture. The camera's rendering mode is set to depth mode (Depth), for example via the camera.depthTextureMode property, after which the depth texture can be sampled in shaders through the Unity built-in variable _CameraDepthTexture. The depth map stores non-linearly distributed depth values in the range [0,1], which are derived from normalized device coordinates (Normalized Device Coordinates, NDC). In the depth map, opaque objects are rendered in black. Based on the depth values in the depth map, the opaque objects in the picture can be identified.
Through the above steps S51 to S52, opaque objects are outlined and the surrounding sky color is used as the color of their edge pixels, which effectively avoids errors in the sampling result (a sketch of this edge fix follows).
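A hedged CPU-side sketch of the edge fix; in practice this would run in a shader, and the 8-neighborhood search and the sky-depth threshold are assumptions.

```csharp
using System.Numerics;

// CPU-side sketch of the edge fix. colors/depths are row-major screen
// buffers; treating depth >= skyDepth as "sky" is an assumed convention.
static class EdgeSkyFix
{
    public static Vector3 SampleColor(Vector3[] colors, float[] depths,
                                      int w, int h, int x, int y, float skyDepth = 0.999f)
    {
        int idx = y * w + x;
        if (depths[idx] < skyDepth) // an opaque object lies under this pixel
        {
            // Search the 8-neighborhood for an adjacent sky pixel and take its color.
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    int n = ny * w + nx;
                    if (depths[n] >= skyDepth) return colors[n]; // adjacent sky color
                }
        }
        return colors[idx];
    }
}
```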
Fig. 6 is a flowchart of a volumetric light rendering method according to another embodiment of the present application. In another alternative embodiment, as shown in fig. 6, the step S21 of obtaining a sun shadow map corresponding to the sun light source includes:
step S61, obtaining a camera depth texture map obtained from a camera view angle and a light source depth texture map obtained from a solar light source direction;
step S62, shadow collection calculation is carried out in a screen space according to the camera depth texture map and the light source depth texture map, and a sun shadow map is obtained.
In step S61, a depth camera is first created at the current camera, yielding the depth texture map observed from the current camera; a depth camera is then created at the sun light source, yielding the depth texture map observed from the sun light source. In step S62, a shadow collection calculation (Shadows Collector) is performed in screen space to obtain the sun shadow map, i.e. the pixels that are in shadow under sunlight.
Fig. 7 is a flowchart of a volumetric light rendering method according to another embodiment of the present application. As shown in fig. 7, the shadow collection process of step S62 described above includes the steps of:
step S71, determining a first depth value of each pixel in the camera depth texture map and corresponding world space coordinates thereof;
step S72, converting world space coordinates of the pixels into light source space coordinates corresponding to the light source depth texture map;
step S73, comparing a second depth value corresponding to the light source space coordinate in the light source depth texture map with the first depth value;
step S74, when the pixel is determined to be located in the shadow according to the comparison result of the first depth value and the second depth value, the sun shadow map is obtained according to the pixel value of the pixel located in the shadow.
In steps S71 to S74, the world coordinates of each pixel are reconstructed from the depth information. After the world-space coordinates of a pixel are converted into the light source space, the depth value corresponding to that pixel inside the light source depth texture map is determined, and the pixel's depth value is compared with the depth value in the light source depth texture map: if the pixel's depth value is larger than the depth value in the light source depth texture map, the pixel cannot be illuminated by the light source and is in shadow. The resulting sun shadow map therefore contains all the areas of the screen space that are in shadow relative to the sun.
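A minimal sketch of this collection pass for a single pixel, assuming the inverse camera view-projection matrix is available to reconstruct world coordinates from depth and that both maps use an OpenGL-style depth range; the helper names are illustrative.

```csharp
using System;
using System.Numerics;

// Sketch of the shadow collection test for one screen pixel.
static class ShadowCollector
{
    // u, v: screen uv of the pixel; cameraDepth: its first depth value.
    // sampleLightDepth: lookup into the light source depth texture map.
    public static bool InShadow(float u, float v, float cameraDepth,
                                Matrix4x4 invCameraViewProj, Matrix4x4 lightViewProj,
                                Func<float, float, float> sampleLightDepth)
    {
        // Reconstruct the pixel's world-space coordinates from its depth.
        Vector4 ndc = new Vector4(u * 2f - 1f, v * 2f - 1f, cameraDepth * 2f - 1f, 1f);
        Vector4 world = Vector4.Transform(ndc, invCameraViewProj);
        world /= world.W;

        // Convert the world-space coordinates into the light source space.
        Vector4 lclip = Vector4.Transform(world, lightViewProj);
        Vector3 lndc = new Vector3(lclip.X, lclip.Y, lclip.Z) / lclip.W;
        float lu = lndc.X * 0.5f + 0.5f, lv = lndc.Y * 0.5f + 0.5f;
        float depthInLight = lndc.Z * 0.5f + 0.5f; // the pixel's depth, re-expressed
                                                   // in light space (the "first depth value")

        // Deeper than the recorded second depth value: something occludes the pixel.
        return depthInLight > sampleLightDepth(lu, lv);
    }
}
```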
In another alternative embodiment, the method further comprises:
receiving an editing operation of the volume cloud overlay map in the volume cloud editor;
and adjusting the whole cloud model according to the editing operation.
In this embodiment, the volume cloud editor provides a GameView window, and a user can edit the Coverage map of the volume cloud in real time in the window, so that not only can the rendering result of the volume cloud be adjusted, but also the shadow of the whole volume cloud and the corresponding volume light effect can be adjusted based on the Coverage map.
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application.
Fig. 8 is a block diagram of a volumetric light rendering device according to an embodiment of the present application, where the volumetric light rendering device may be implemented as part or all of an electronic device by software, hardware, or a combination of both. As shown in fig. 8, the volume light rendering device includes:
a creation module 81 for creating a high definition rendering pipeline;
a first rendering module 82 for rendering a volumetric cloud model to a rendering target in a high definition rendering pipeline such that a cloud layer is displayed to a screen space, wherein the volumetric cloud model is used to represent the cloud layer in the virtual scene;
a calculating module 83, configured to calculate volumetric illumination information corresponding to each pixel point in the screen space;
the second rendering module 84 is configured to render in the high-definition rendering pipeline according to the volume illumination information, so that the volume light corresponding to the cloud layer is displayed in the screen space.
Optionally, the calculating module 83 includes:
the acquisition sub-module is used for acquiring a view cone corresponding to a camera view angle and a sun shadow map corresponding to a sun light source;
the discrete processing sub-module is used for performing discrete processing on the view cone into a three-dimensional texture image;
the first computing sub-module is used for computing first illumination information corresponding to each voxel in the three-dimensional texture image according to the sun shadow map;
the light stepping sub-module is used for sampling according to the light emitted by the main view point of the camera view angle to obtain the second illumination information of the voxels in the view direction of the camera view angle;
and the second computing sub-module is used for computing the volume illumination information according to the first illumination information and the second illumination information.
Optionally, the second calculation sub-module is configured to obtain, when it is determined that there is a cloud cover in the direction of the solar light source, a shadow intensity corresponding to the cloud cover in the direction of the solar light source; calculating third illumination information corresponding to the voxels according to the shadow intensity and the first illumination information; and calculating volume illumination information according to the third illumination information and the second illumination information.
Optionally, the discrete processing sub-module is used for determining a volume parameter corresponding to the three-dimensional texture image according to the screen resolution; the view cone is processed into a three-dimensional texture image corresponding to the volume parameter in a discrete mode;
the light stepping sub-module is used for determining a sampling step length according to the volume parameter; and sampling according to the light rays emitted by the main view point of the camera view angle by sampling step sizes.
Optionally, the discrete processing sub-module is used for determining a width value and a height value in the volume parameter according to the screen resolution; depth values in the volume parameter are determined using a preset resolution lower than the screen resolution.
Optionally, the second rendering module 84 includes:
the sampling sub-module, used for performing atmospheric scattering sampling on the rendering target to obtain scattered light information corresponding to each pixel of the rendering target;
and the rendering sub-module is used for rendering the volume illumination information according to the scattered light information.
Optionally, the sampling sub-module is used for sampling the sky color adjacent to an edge pixel when sampling edge pixels of an opaque object, and using the sky color as the color of the edge pixels.
Optionally, the obtaining submodule is used for obtaining a camera depth texture map obtained from a camera view angle and a light source depth texture map obtained from a solar light source direction; and performing shadow collection calculation in a screen space according to the camera depth texture map and the light source depth texture map to obtain a sun shadow map.
Optionally, the obtaining sub-module is further configured to determine a first depth value of each pixel in the camera depth texture map and a world space coordinate corresponding to the first depth value; converting world space coordinates of the pixels into light source space coordinates corresponding to the light source depth texture map; comparing a second depth value corresponding to the light source space coordinate in the light source depth texture map with the first depth value; when the pixel is determined to be positioned in the shadow according to the comparison result of the first depth value and the second depth value, the sun shadow map is obtained according to the pixel value of the pixel positioned in the shadow.
The embodiment of the application further provides an electronic device, as shown in fig. 9, where the electronic device may include: the device comprises a processor 1501, a communication interface 1502, a memory 1503 and a communication bus 1504, wherein the processor 1501, the communication interface 1502 and the memory 1503 are in communication with each other through the communication bus 1504.
A memory 1503 for storing a computer program;
the processor 1501, when executing the computer program stored in the memory 1503, implements the steps of the method embodiments described below.
The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figures, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The Memory may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The present application also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following steps of the above-described method embodiments:
creating a high definition rendering pipeline;
rendering a volumetric cloud model to a rendering target in a high definition rendering pipeline such that a cloud layer is displayed to a screen space, wherein the volumetric cloud model is used to represent the cloud layer in the virtual scene;
calculating volume illumination information corresponding to each pixel point in the screen space;
and rendering in a high-definition rendering pipeline according to the volume illumination information, so that the volume light corresponding to the cloud layer is displayed to a screen space.
Optionally, calculating the volume illumination information corresponding to each pixel point in the screen space includes:
acquiring a view cone corresponding to a camera view angle and a sun shadow map corresponding to a sun light source;
performing discrete processing on the view cone into a three-dimensional texture image;
calculating first illumination information corresponding to each voxel in the three-dimensional texture image according to the sun shadow map;
sampling according to the light rays emitted by the main view point of the camera view angle to obtain second illumination information of voxels in the view direction of the camera view angle;
and calculating volume illumination information according to the first illumination information and the second illumination information.
Optionally, calculating the volume illumination information according to the first illumination information and the second illumination information includes:
when it is determined that cloud cover exists in the direction of the solar light source, the shadow intensity corresponding to the cloud cover in the direction of the solar light source is obtained;
calculating third illumination information corresponding to the voxels according to the shadow intensity and the first illumination information;
and calculating volume illumination information according to the third illumination information and the second illumination information.
Optionally, the discrete processing of the view cone into a three-dimensional texture image includes:
determining a volume parameter corresponding to the three-dimensional texture image according to the screen resolution;
the view cone is processed into a three-dimensional texture image corresponding to the volume parameter in a discrete mode;
sampling according to light rays emitted from a main viewpoint of a camera view angle, comprising:
determining a sampling step length according to the volume parameter;
and sampling according to the light rays emitted by the main view point of the camera view angle by sampling step sizes.
Optionally, determining the volume parameter corresponding to the three-dimensional texture image according to the screen resolution includes:
determining a width value and a height value in the volume parameter according to the screen resolution;
depth values in the volume parameter are determined using a preset resolution lower than the screen resolution.
Optionally, rendering in a high definition rendering pipeline according to the volumetric illumination information includes:
performing atmospheric scattering sampling on the rendering target to obtain scattered light information corresponding to each pixel of the rendering target;
and rendering the volume illumination information according to the scattered light information.
Optionally, performing atmospheric scattering sampling on the rendering target includes:
when the edge pixels of the opaque object are sampled, sampling the sky colors adjacent to the edge pixels;
the sky color is used as the color of the edge pixels.
Optionally, obtaining a sun shadow map corresponding to the sun light source includes:
acquiring a camera depth texture map obtained from a camera view angle and a light source depth texture map obtained from a solar light source direction;
and performing shadow collection calculation in a screen space according to the camera depth texture map and the light source depth texture map to obtain a sun shadow map.
Optionally, performing shadow collection calculation in a screen space according to the camera depth texture map and the light source depth texture map to obtain a solar shadow map, including:
determining a first depth value of each pixel in the camera depth texture map and corresponding world space coordinates thereof;
converting world space coordinates of the pixels into light source space coordinates corresponding to the light source depth texture map;
comparing a second depth value corresponding to the light source space coordinate in the light source depth texture map with the first depth value;
when the pixel is determined to be positioned in the shadow according to the comparison result of the first depth value and the second depth value, the sun shadow map is obtained according to the pixel value of the pixel positioned in the shadow.
It should be noted that, with respect to the apparatus, electronic device, and computer-readable storage medium embodiments described above, since they are substantially similar to the method embodiments, the description is relatively simple, and reference should be made to the description of the method embodiments for relevant points.
It is further noted that relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

1. A method of volumetric light rendering, comprising:
creating a high definition rendering pipeline;
rendering a volumetric cloud model to a rendering target in the high definition rendering pipeline such that a cloud layer is displayed to a screen space, wherein the volumetric cloud model is used to represent the cloud layer in a virtual scene;
calculating volume illumination information corresponding to each pixel point in the screen space;
rendering in the high-definition rendering pipeline according to the volume illumination information, so that volume light corresponding to the cloud layer is displayed in the screen space;
the calculating the volume illumination information corresponding to each pixel point in the screen space comprises the following steps:
acquiring a view cone corresponding to a camera view angle and a sun shadow map corresponding to a sun light source;
performing discrete processing on the view cone into a three-dimensional texture image;
calculating first illumination information corresponding to each voxel in the three-dimensional texture image according to the sun shadow map;
sampling according to the light rays emitted by the main view point of the camera view angle to obtain second illumination information of the voxels in the direction of the view line of the camera view angle;
and calculating the volume illumination information according to the first illumination information and the second illumination information.
2. The method of claim 1, wherein the calculating the volumetric illumination information from the first illumination information and the second illumination information comprises:
when it is determined that the cloud layer shielding exists in the direction of the solar light source, the shadow intensity corresponding to the cloud layer in the direction of the solar light source is obtained;
calculating third illumination information corresponding to the voxel according to the shadow intensity and the first illumination information;
and calculating the volume illumination information according to the third illumination information and the second illumination information.
3. The method of claim 1, wherein the discretely processing the view cone into three-dimensional texture images comprises:
determining a volume parameter corresponding to the three-dimensional texture image according to the screen resolution;
the view cone is processed into a three-dimensional texture image corresponding to the volume parameter in a discrete mode;
the sampling the light rays emitted from the main view point of the camera view angle comprises:
determining a sampling step according to the volume parameter;
and sampling according to the light rays emitted by the main view point of the camera view angle by the sampling step length.
4. A method according to claim 3, wherein said determining the corresponding volume parameter of the three-dimensional texture image according to the screen resolution comprises:
determining a width value and a height value in the volume parameter according to the screen resolution;
and determining the depth value in the volume parameter by adopting a preset resolution lower than the screen resolution.
5. The method of claim 1, wherein the rendering in the high definition rendering pipeline according to the volumetric illumination information comprises:
performing atmospheric scattering sampling on the rendering target to obtain scattered light information corresponding to each pixel of the rendering target;
and rendering the volume illumination information according to the scattered light information.
6. The method of claim 5, wherein the atmospheric scattering sampling of the render target comprises:
when sampling edge pixels of an opaque object, sampling sky colors adjacent to the edge pixels;
the sky color is used as the color of the edge pixels.
7. The method according to claim 1, wherein the obtaining a sun shadow map corresponding to a sun light source comprises:
acquiring a camera depth texture map obtained from the camera view angle and a light source depth texture map obtained from the solar light source direction;
and performing shadow collection calculation in the screen space according to the camera depth texture map and the light source depth texture map to obtain the sun shadow map.
8. The method of claim 7, wherein performing a shadow collection calculation in the screen space based on the camera depth texture map and the light source depth texture map to obtain the solar shadow map comprises:
determining a first depth value of each pixel in the camera depth texture map and corresponding world space coordinates thereof;
converting world space coordinates of the pixels into light source space coordinates corresponding to the light source depth texture map;
comparing a second depth value corresponding to the light source space coordinate in the light source depth texture map with the first depth value;
and when the pixel is determined to be positioned in the shadow according to the comparison result of the first depth value and the second depth value, obtaining the sun shadow map according to the pixel value of the pixel positioned in the shadow.
9. A volume light rendering device, comprising:
a creation module for creating a high definition rendering pipeline;
a first rendering module for rendering a volumetric cloud model to a rendering target in the high definition rendering pipeline such that a cloud layer is displayed to a screen space, wherein the volumetric cloud model is used to represent the cloud layer in a virtual scene;
a calculation module for calculating volume illumination information corresponding to each pixel point in the screen space;
a second rendering module for rendering in the high definition rendering pipeline according to the volume illumination information, so that volume light corresponding to the cloud layer is displayed in the screen space;
wherein the calculation module comprises:
an acquisition sub-module for acquiring a view cone corresponding to a camera view angle and a sun shadow map corresponding to a sun light source;
a discretization sub-module for discretizing the view cone into a three-dimensional texture image;
a first computing sub-module for computing first illumination information corresponding to each voxel in the three-dimensional texture image according to the sun shadow map;
a ray marching sub-module for sampling along rays emitted from the viewpoint of the camera view angle to obtain second illumination information of voxels in the view direction of the camera view angle;
and a second computing sub-module for computing the volume illumination information according to the first illumination information and the second illumination information.
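For claim 9, the interaction of the sub-modules can be illustrated with a froxel-style sketch: the view cone is discretized into an (nz, ny, nx) three-dimensional texture, each voxel is lit from the sun shadow map (first illumination information), and a ray march along the view direction accumulates in-scattering per screen pixel (second illumination information). The uniform density, the `sun_visibility` callback, and the unit step length are assumptions:

    import numpy as np

    def volume_illumination(nz, ny, nx, sun_visibility, density=0.02):
        """Combine per-voxel sun lighting with a view-ray march."""
        # First illumination: sun visibility per voxel of the 3-D texture.
        froxels = np.empty((nz, ny, nx), dtype=np.float32)
        for z in range(nz):
            for y in range(ny):
                for x in range(nx):
                    froxels[z, y, x] = sun_visibility(z, y, x)  # 1 lit, 0 shadowed
        # Second illumination: march front to back along each camera ray.
        transmittance = np.ones((ny, nx), dtype=np.float32)
        scattered = np.zeros((ny, nx), dtype=np.float32)
        for z in range(nz):
            scattered += transmittance * froxels[z] * density   # in-scattering
            transmittance *= np.exp(-density)                   # extinction
        return scattered                                        # per-pixel volume light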
10. An electronic device, comprising: a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
and the processor is configured to carry out the method steps of any one of claims 1-8 when executing the computer program.
11. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method steps of any one of claims 1-8.
CN202010747145.3A 2020-07-29 2020-07-29 Volume light rendering method and device, electronic equipment and storage medium Active CN111968215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010747145.3A CN111968215B (en) 2020-07-29 2020-07-29 Volume light rendering method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111968215A CN111968215A (en) 2020-11-20
CN111968215B true CN111968215B (en) 2024-03-22

Family

ID=73363605

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010747145.3A Active CN111968215B (en) 2020-07-29 2020-07-29 Volume light rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111968215B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200900B (en) * 2020-12-02 2021-02-26 成都完美时空网络技术有限公司 Volume cloud rendering method and device, electronic equipment and storage medium
CN112465941B (en) * 2020-12-02 2023-04-28 成都完美时空网络技术有限公司 Volume cloud processing method and device, electronic equipment and storage medium
CN112439196B (en) * 2020-12-11 2021-11-23 完美世界(北京)软件科技发展有限公司 Game light rendering method, device, equipment and storage medium
CN112691378B (en) * 2020-12-23 2022-06-07 完美世界(北京)软件科技发展有限公司 Image processing method, apparatus and readable medium
CN112669432A (en) * 2020-12-23 2021-04-16 北京像素软件科技股份有限公司 Volume cloud rendering method and device, electronic equipment and storage medium
CN113144613B (en) * 2021-05-08 2024-06-21 成都乘天游互娱网络科技有限公司 Model-based method for generating volume cloud
CN113421199B (en) * 2021-06-23 2024-03-12 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN113283543B (en) * 2021-06-24 2022-04-15 北京优锘科技有限公司 WebGL-based image projection fusion method, device, storage medium and equipment
CN113470161B (en) * 2021-06-30 2022-06-07 完美世界(北京)软件科技发展有限公司 Illumination determination method for volume cloud in virtual environment, related equipment and storage medium
CN113936097B (en) * 2021-09-30 2023-10-20 完美世界(北京)软件科技发展有限公司 Volume cloud rendering method, device and storage medium
CN114998504B (en) * 2022-07-29 2022-11-15 杭州摩西科技发展有限公司 Two-dimensional image illumination rendering method, device and system and electronic device
CN115830208B (en) * 2023-01-09 2023-05-09 腾讯科技(深圳)有限公司 Global illumination rendering method, device, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091363A (en) * 2014-07-09 2014-10-08 无锡梵天信息技术股份有限公司 Real-time size cloud computing method based on screen space
CN104143205A (en) * 2013-05-11 2014-11-12 哈尔滨点石仿真科技有限公司 Method for achieving real-time rendering of large-scale realistic volumetric cloud
CN105869106A (en) * 2016-04-27 2016-08-17 中国电子科技集团公司第二十八研究所 Improved method for drawing three-dimensional entity cloud
CN109544674A (en) * 2018-11-21 2019-03-29 北京像素软件科技股份有限公司 A kind of volume light implementation method and device
KR20200082601A (en) * 2018-12-31 2020-07-08 한국전자통신연구원 Apparatus and method for rendering multi-layered volumetric clouds

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant