WO2022116659A1 - Volumetric cloud rendering method, apparatus, program and readable medium - Google Patents

Volumetric cloud rendering method, apparatus, program and readable medium

Info

Publication number
WO2022116659A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
rendering
information
cloud
layer
Prior art date
Application number
PCT/CN2021/121097
Other languages
English (en)
French (fr)
Inventor
申晨
Original Assignee
成都完美时空网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 成都完美时空网络技术有限公司
Publication of WO2022116659A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/55Radiosity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/60Shadow generation

Definitions

  • the present application relates to the technical field of image rendering, and in particular, to a volumetric cloud rendering method, apparatus, program, and readable medium.
  • Volume clouds are an important part of outdoor scenes in games. In most real-time rendering systems for small-scale 3D scenes, or for scenes whose viewpoint is near the ground, volume clouds are usually implemented by ray marching or by parallax mapping.
  • The Ray Marching method marches a ray forward in equal-length steps, samples a 3D noise map at each step, and superimposes the results of the multiple samples to generate a volume cloud.
  • The parallax mapping method uses a height map to compute the offset sampling position for each pixel, thereby tricking the human eye into perceiving volume.
  • The Ray Marching method produces a very realistic effect, but its performance overhead is very large: the ray intersection position has to be computed at every step, and a 3D noise map then has to be sampled. Moreover, the shape of the volume cloud is determined by the shape of the 3D noise map, so customizing a specific shape requires a specific noise map; when there are many cloud shapes in a scene, many different noise maps are needed.
  • Parallax mapping, by contrast, has a low performance overhead, and specific algorithms can be used to improve the accuracy of the computed offset. However, it is ultimately a way of deceiving the eye: the sense of volume is apparent only when the viewing direction makes a small angle with the model surface, and when the view is perpendicular to the surface the computed offset is 0 and there is no sense of volume at all. In addition, the illusion breaks down very noticeably at the edges of the model.
  • According to one aspect of the embodiments of the present application, a volume cloud rendering method is provided, including:
  • drawing at least one layer of mesh model outward from the original mesh model of the volume cloud along the vertex normal direction;
  • screening the pixels of the mesh models based on the noise threshold corresponding to each layer of mesh model to obtain a drawing model;
  • calculating the illumination information corresponding to the drawing model according to illumination parameters; and
  • rendering the drawing model according to the illumination information to obtain a volume cloud to be displayed.
  • a volumetric cloud rendering apparatus including:
  • a drawing module, configured to draw at least one layer of mesh model outward from the original mesh model of the volume cloud along the vertex normal direction;
  • a screening module configured to screen the pixel points of the grid model based on the noise threshold corresponding to the grid model of each layer to obtain a drawing model
  • a calculation module configured to calculate the illumination information corresponding to the rendering model according to the illumination parameters
  • a processing module configured to render the rendering model according to the lighting information to obtain a volume cloud to be displayed.
  • a computer device/apparatus/system comprising a memory, a processor, and a computer program/instructions stored in the memory, wherein the processor, when executing the computer program/instructions, implements the steps of the method of the above first aspect.
  • a computer-readable medium on which computer programs/instructions are stored, which, when executed by a processor, implement the steps of the method of the first aspect.
  • a computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the method of the first aspect above.
  • The beneficial effects of the present invention are as follows: at least one additional layer of mesh model is drawn over the original mesh model, the pixel values obtained by sampling a preset noise map based on the mesh models are compared with the noise threshold set for each layer of mesh model, the pixels of each layer of mesh model are screened according to the comparison results, and the drawing model corresponding to the volume cloud is finally obtained. In this way, the shape of the volume cloud is determined by the mesh model rather than by the shape of a noise map; to change the shape of the volume cloud, it is only necessary to set the number of additional layers to draw and the noise thresholds used to screen pixels, so there is no need to pre-select a specific noise map. In addition, drawing the model several extra times reduces the number of times the noise map is sampled, further lowering the performance overhead of generating the volume cloud so that it can run smoothly on mobile devices such as phones.
  • Moreover, since the volume cloud is obtained by rendering a model rather than by simulating parallax to create a three-dimensional impression, the edge artifacts where the illusion breaks down are avoided, and the realism of the volume cloud effect is improved.
  • FIG. 1 is a flowchart of a volumetric cloud rendering method provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of drawing a grid model provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of a volumetric cloud rendering method provided by another embodiment of the present application.
  • FIG. 4 is a schematic diagram of a volume cloud model provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a volume cloud model provided by another embodiment of the present application.
  • FIG. 6 is a flowchart of a volumetric cloud rendering method provided by another embodiment of the present application.
  • FIG. 7 is a flowchart of a volumetric cloud rendering method provided by another embodiment of the present application.
  • FIG. 8 is a flowchart of a volumetric cloud rendering method provided by another embodiment of the present application.
  • FIG. 9 is a flowchart of a volumetric cloud rendering method provided by another embodiment of the present application.
  • FIG. 10 is a flowchart of a volumetric cloud rendering method provided by another embodiment of the present application.
  • FIG. 11 is a flowchart of a volumetric cloud rendering method provided by another embodiment of the present application.
  • FIG. 12 is a flowchart of a volumetric cloud rendering method provided by another embodiment of the present application.
  • FIG. 13 is a flowchart of a volumetric cloud rendering method provided by another embodiment of the present application.
  • FIG. 14 is a flowchart of a volumetric cloud rendering method provided by another embodiment of the present application.
  • FIG. 15 is a flowchart of a volumetric cloud rendering method provided by another embodiment of the present application.
  • FIG. 16 is a flowchart of a volumetric cloud rendering method provided by another embodiment of the present application.
  • FIG. 17 is a flowchart of a volumetric cloud rendering method provided by another embodiment of the present application.
  • FIG. 18 is a block diagram of a volumetric cloud rendering apparatus provided by an embodiment of the present application.
  • FIG. 19 schematically shows a block diagram of a computer apparatus/device/system for implementing the method according to the present invention.
  • FIG. 20 schematically shows a block diagram of a computer program product implementing the method according to the invention.
  • Volumetric clouds (Volumetric Clouds) in games use a graphics engine to simulate the translucent, irregular appearance of real clouds and fog.
  • FIG. 1 is a flowchart of a volume cloud rendering method provided by an embodiment of the present application. As shown in Figure 1, the method includes the following steps S11 to S14:
  • Step S11 draw at least one layer of mesh models outward from the original mesh model of the volume cloud according to the vertex normal direction.
  • As shown in FIG. 2, the original Mesh model 21 of the volume cloud is additionally drawn N times outward, equidistantly, along the vertex normal direction, where N is an integer greater than or equal to 1, to obtain the multi-layer mesh model 22.
  • step S12 the pixel points of the grid model are screened based on the noise threshold corresponding to the grid model of each layer to obtain a drawing model.
  • In this step, the preset noise map is sampled based on each layer of the mesh model, and the sampled pixel value of each pixel is compared with the preset noise threshold (Clip Value) to screen out the pixels that meet the requirement and obtain the drawing model.
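  • As a minimal fragment-shader sketch only (the texture and parameter names _NoiseTex and _ClipValue are illustrative assumptions, not taken from the application), this screening can be expressed with the HLSL clip intrinsic:
  • float noise = tex2D(_NoiseTex, uv).r;      // sample the preset noise map
  • clip(noise - _ClipValue);                  // discard pixels whose Clip Value exceeds the sampled noise value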
  • Step S13 calculating the illumination information corresponding to the rendering model according to the illumination parameters
  • step S14 the rendering model is rendered according to the lighting information to obtain a volume cloud to be displayed.
  • the rendering can be performed layer by layer starting from the innermost mesh model until the outermost mesh.
  • Through the above steps S11 to S14, N additional layers of mesh model are drawn over the original mesh model, the pixel values obtained by sampling the preset noise map based on the mesh models are compared with the noise threshold set for each layer of mesh model, the pixels of each layer of mesh model are screened according to the comparison results, and the drawing model corresponding to the volume cloud is finally obtained.
  • In this way, the shape of the volume cloud is determined by the mesh model rather than by the shape of a noise map; to change the shape of the volume cloud, it is only necessary to set the number of additional layers to draw and the noise thresholds used to screen pixels, so there is no need to pre-select a specific noise map. Drawing the model several extra times also reduces the number of times the noise map is sampled, further lowering the performance overhead of generating the volume cloud so that it can run smoothly on mobile devices such as phones.
  • Moreover, since the volume cloud is obtained by rendering a model rather than by simulating parallax to create a three-dimensional impression, the edge artifacts where the illusion breaks down are avoided, and the realism of the volume cloud effect is improved.
  • FIG. 3 is a flowchart of a volumetric cloud rendering method provided by another embodiment of the present application. As shown in FIG. 3, the above step S12 includes the following steps S21 to S23:
  • Step S21 obtaining the noise threshold corresponding to each layer of grid model
  • Step S22 sampling the preset noise map based on the grid model of each layer to obtain a noise value
  • Step S23 Screening pixels whose noise threshold is less than or equal to the noise value for each layer of grid model to obtain a drawing model.
  • As shown in FIG. 2, the curve 23 represents the noise values obtained by sampling the preset noise map based on the mesh models, and each layer of the mesh model 22 has its own corresponding Clip Value. Pixels whose Clip Value is greater than the noise value are discarded, i.e. the dotted-line part in FIG. 2; only pixels whose Clip Value is less than or equal to the noise value are kept to obtain the drawing model, i.e. the solid-line part in FIG. 2.
  • In the above embodiment, the Clip Value can be calculated from a preset linear noise function, for example the linear function y = kx + b (where k and b are constants and k ≠ 0), in which y represents the Clip Value and x represents the pixel coordinate. However, if the noise function is linear, the edges of the final volumetric cloud model become sharp, as shown in FIG. 4, and the volumetric cloud looks less realistic. To improve the realism of the displayed effect, the Clip Value can therefore be made nonlinear.
  • the above step S21 includes the following steps A1 to A3:
  • Step A1 obtain the noise function corresponding to each layer of grid model, and the noise function is a linear function with the coordinates of the pixel points as variables;
  • Step A2 obtain the noise boundary value corresponding to the pixel point of each layer of grid model according to the noise function
  • Step A3 Perform exponentiation on the noise boundary value to obtain a noise threshold.
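  • A small sketch of this exponentiation, assuming the linear noise function y = kx + b mentioned above and a hypothetical exponent parameter _ClipPower (not named in the application), could be:
  • float boundary = k * x + b;                              // linear noise boundary value for this layer
  • float clipValue = pow(saturate(boundary), _ClipPower);   // the power operation makes the Clip Value nonlinear, smoothing the cloud edge as in FIG. 5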
  • Through steps A1 to A3, the Clip Value is exponentiated and thus made nonlinear; as shown in FIG. 5, the edges of the screened volume cloud model become smooth, which improves the realism of the volume cloud effect. The drawing model obtained by additionally drawing the original mesh model N times and screening based on the noise values needs its vertices to be generated from the vertices of the original mesh model. The vertices can be generated in the following two ways: (1) creating vertices with a geometry shader, or (2) rendering with the GPU-Instance technique.
  • FIG. 6 is a flowchart of a volume cloud rendering method provided by another embodiment of the present application. As shown in Figure 6, before the above step S13, the method further includes the following steps:
  • Step S31 inputting the vertex coordinates of the original mesh model as the first input parameter to the first shader in the graphics processor
  • step S32 the vertex coordinates of the drawing model are obtained through the first shader with the first input parameter.
  • the first shader is a geometry shader.
  • the geometry shader adds new vertices based on the original mesh model. Since the operation of creating vertices by the geometry shader is performed in the graphics processor (Graphics Processing Unit, GPU), it does not occupy CPU performance overhead.
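  • Purely as an illustrative sketch (the VertexOutput struct, the shell count, the _LayerOffset parameter and the viewProj matrix are assumptions, not taken from the application), such a geometry shader might look like:
  • struct VertexOutput { float4 positionCS : SV_POSITION; float3 positionWS : TEXCOORD0; float3 normalWS : TEXCOORD1; };
  • float _LayerOffset;   // assumed distance between successive shells
  • float4x4 viewProj;    // assumed view-projection matrix bound by the pipeline
  • [maxvertexcount(12)]
  • void geom(triangle VertexOutput tri[3], inout TriangleStream<VertexOutput> stream) {
  •     for (int layer = 0; layer <= 3; layer++) {                   // original triangle plus 3 offset shells
  •         for (int i = 0; i < 3; i++) {
  •             VertexOutput v = tri[i];
  •             v.positionWS += v.normalWS * layer * _LayerOffset;   // push the shell outward along the vertex normal
  •             v.positionCS = mul(viewProj, float4(v.positionWS, 1.0));
  •             stream.Append(v);
  •         }
  •         stream.RestartStrip();
  •     }
  • }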
  • However, the vertex buffer output of a geometry shader is limited in size, for example to no more than 1024 floating point numbers (float), i.e. there is a limit on the number of output vertices.
  • most mobile devices do not support geometry shaders, making volumetric clouds impossible to render on mobile.
  • FIG. 7 is a flowchart of a volumetric cloud rendering method provided by another embodiment of the present application. As shown in FIG. 7 , in step S14, the rendering model is rendered according to the lighting information, including the following steps:
  • Step S41 buffering the vertex data of the original mesh model into the video memory
  • Step S42 after sorting and batching the drawing commands corresponding to the grid models of each layer, adding the obtained batching commands to the command buffer;
  • Step S43 the graphics processor reads the batch command from the command buffer, and executes the rendering operation based on the batch command and the vertex data of the original mesh model.
  • the overhead generated in the graphics rendering process includes the overhead of executing on the CPU and the overhead of executing on the GPU.
  • The overhead executed on the CPU mainly includes the following three categories: first, the overhead of the driver submitting rendering commands; second, the overhead of state switching caused by the driver submitting state commands; and third, other driver overhead for loading or synchronizing data caused by API calls.
  • Batch merging (merging, in a reasonable way, the draw data of multiple renderables with the same render state into one batch) and instanced rendering (drawing many renderables with similar geometry through a single DrawInstance call, passing their differences to the render command as arrays) can significantly reduce the first category of overhead, while sorting the renderables so that those with the same state are rendered consecutively reduces state switching and thus the second category. Since the mesh models of each layer of the volume cloud are identical, the repeatedly issued draw commands (DrawCall) can be batched and the same multi-layer mesh model rendered in one DrawCall batch, which lowers CPU overhead; the extra sorting and batching work on the CPU is negligible compared with the overall volume cloud rendering time.
  • the transfer of material attribute information from the CPU to the GPU can be implemented in the following manner.
  • FIG. 8 is a flowchart of a volume cloud rendering method provided by another embodiment of the present application. As shown in FIG. 8 , in step S14, the rendering model is rendered according to the lighting information, which further includes the following steps:
  • Step S51 according to the noise threshold corresponding to each layer of grid model and the offset of each layer of grid model relative to the original grid model, generate a material property block;
  • Step S52 inputting the material property block into the second shader in the image processor as the second input parameter
  • the above step S43 includes:
  • Step S53 the second shader with the second input parameter performs the rendering of the volume cloud according to the batch command and the vertex data of the original mesh model.
  • In this embodiment, since the materials of the mesh models of each layer are identical and differ only in the offset relative to the original mesh model and the Clip Value, the per-layer offset and Clip Value can be packed into a MaterialPropertyBlock and passed to the shader in the GPU when the material attribute information is transferred. Using a material property block reduces the time spent operating on materials; together with the GPU-Instance technique above, it further improves performance by avoiding the overhead of the entity objects themselves, reducing DrawCalls, and lowering CPU and memory overhead.
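  • On the shader side, a minimal sketch of receiving such per-instance properties (assuming Unity's GPU-instancing macros; the property names are illustrative) could be:
  • UNITY_INSTANCING_BUFFER_START(Props)
  •     UNITY_DEFINE_INSTANCED_PROP(float, _ClipValue)      // per-layer noise threshold
  •     UNITY_DEFINE_INSTANCED_PROP(float, _LayerOffset)    // per-layer offset from the original mesh model
  • UNITY_INSTANCING_BUFFER_END(Props)
  • // read in the shader with UNITY_ACCESS_INSTANCED_PROP(Props, _ClipValue)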
  • the sunlight can be used as the main light source, and the illumination information corresponding to the volume cloud can be calculated based on various illumination parameters.
  • the Lambert model can be used to calculate the lighting information.
  • FIG. 9 is a flowchart of a volume cloud rendering method provided by another embodiment of the present application. As shown in Figure 9, step S13 includes:
  • Step S61 Calculate the first diffuse reflection information corresponding to each pixel point according to the normal vector and the illumination direction vector of each pixel point of the rendering model.
  • The first diffuse reflection information may be the color intensity coefficient nl (NdotL) corresponding to the pixel, as given in the description:
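  • float nl = max(0.0, dot(N, L));    // as given in the description; equivalently nl = saturate(dot(N, L))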
  • nl represents the first diffuse reflection information
  • N represents the normal vector
  • L represents the illumination direction vector
  • dot() represents the dot product calculation
  • NdotL represents the dot product result of N and L.
  • the saturate function is the same as the max function in calculating the unit vector dot product, but the saturate function is more efficient.
  • the function of saturate(x) is that if the value of x is less than 0, the return value is 0. If the value of x is greater than 1, the return value is 1. If x is between 0 and 1, the value of x is returned directly.
  • Step S62 using the first diffuse reflection information as a lighting parameter
  • Step S63 Calculate the pixel color corresponding to each pixel point based on the illumination parameter to obtain illumination information.
  • With the lighting information calculated by the Lambert model, the lighting effect on the backlit side of the volume cloud is not ideal; therefore, lighting information calculated by the HalfLambert model can be used instead.
  • FIG. 10 is a flowchart of a volume cloud rendering method provided by another embodiment of the present application. As shown in Figure 10, step S13 also includes:
  • Step S71, performing a half-Lambert calculation on the first diffuse reflection information to obtain a half-Lambert illumination parameter;
  • HalfLambertnl represents the half-Lambert illumination parameter related to nl.
  • Step S72 obtaining the noise threshold corresponding to each layer of grid model
  • Step S73, fitting the second diffuse reflection information of each pixel according to the noise threshold and the half-Lambert illumination parameter;
  • smoothnl represents the second diffuse reflection information, which is the smooth NdotL parameter after exponentiation operation
  • ClipValue represents the noise threshold of the mesh model
  • pow() represents the exponentiation operation.
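  • As given in the description, these two quantities are computed as:
  • float HalfLambertnl = dot(N, L) * 0.5 + 0.5;
  • float Smoothnl = saturate(pow(HalfLambertnl, 2 - ClipValue));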
  • Step S74 using the second diffuse reflection information as a lighting parameter.
  • Calculating the half-Lambert illumination parameter in step S71 above enhances the diffuse reflection on the object surface, in particular the lighting effect on the backlit side of the volume cloud, and improves the realism of the volume cloud's visual appearance.
  • In addition, fitting the noise threshold of each layer of the mesh model into the diffuse reflection information in step S73 increases the brightness of the convex parts of the volume cloud, further improving the realism of the volume cloud's visual effect.
  • the Subsurface Scattering (SSS) parameter is added when calculating the volume cloud illumination information.
  • FIG. 11 is a flowchart of a volume cloud rendering method provided by another embodiment of the present application. As shown in Figure 11, step S13 also includes:
  • Step S81 calculating the backward subsurface scattering information of each pixel point according to the backlight subsurface scattering parameter and the observer's line of sight direction vector;
  • float3 backLitDirection = -(lightDirection + (1 - backSSSRange) * N);
  • float backsss = saturate(dot(viewDirection, backLitDirection));
  • backsss = saturate(pow(backsss, 2 + ClipValue * 2) * 1.5);
  • backsss represents the intensity information of the backlight SSS light
  • backLitDirection represents the backlight direction vector of the SSS light
  • lightDirection represents the light direction vector
  • backSSSRange represents the scattering range of the backlight SSS
  • viewDirection represents the observer's line of sight direction vector
  • ClipValue represents the noise threshold of the mesh model.
  • Step S82 calculating forward subsurface scattering information of each pixel point according to the light subsurface scattering parameter and the observer's line-of-sight direction vector;
  • float frontsss = saturate(dot(viewDirection, frontLitDirection));
  • frontsss represents the intensity information of the light SSS light
  • frontLitDirection represents the light direction vector of the SSS light
  • Step S83 acquiring the influence factor corresponding to the forward subsurface scattering information.
  • Step S84 according to the product of the forward subsurface scattering information and the influence factor, and the backward subsurface scattering information, obtain the total subsurface scattering information;
  • sss represents the total subsurface scattering information
  • FrontSSSIntensity represents the sensitivity (impact factor) of forward SSS illumination.
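  • The combination, as given in the description, is:
  • float sss = saturate(backsss + FrontSSSIntensity * frontsss);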
  • step S85 the total subsurface scattering information is used as the illumination parameter.
  • The backlight SSS information added in step S81 above increases the translucency of the volume cloud when it is backlit, and the toward-light SSS information added in step S82 adds the effect of photons entering the cloud from the front, scattering inside the cloud, and then exiting from the front.
  • Optionally, since the toward-light SSS information has little effect on the appearance of the volume cloud, the influence factor FrontSSSIntensity mentioned above can be set to 0, i.e. the toward-light SSS information is not considered when calculating the illumination information of the volume cloud.
  • FIG. 12 is a flowchart of a volume cloud rendering method provided by another embodiment of the present application. As shown in Figure 12, step S13 also includes:
  • Step S91 sampling the shadow texture according to the defined light source shadow to obtain shadow parameters
  • Step S92 performing attenuation calculation on the shadow parameter as the distance from the camera increases, to obtain shadow information corresponding to each pixel of the drawing model;
  • step S93 the shadow information is used as a lighting parameter.
  • allowing the volume cloud to receive shadows can be achieved in the following ways:
  • float shadowAttenuation;
  • #if defined(_MAIN_LIGHT_SHADOWS)
  •     shadowAttenuation = MainLightRealtimeShadow(i.shadowCoord);
  • #else
  •     shadowAttenuation = 1;
  • #endif
  • ShadowAttenuation is used to represent the value obtained after the real-time shadow texture is sampled by the shadow position of the main light source, as shadow information.
  • PositionWS represents the position coordinates of the pixel (fragment) in the world space
  • _worldSpaceCameraPos represents the coordinates of the camera in the world space
  • distance() is the shader function for computing a distance; here it is used to calculate the distance between the pixel and the camera.
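  • The attenuation corresponds to the following expression from the description, where shadowAttenuation is the start input of the lerp, 1 is the target input, the pixel-to-camera distance drives the interpolation speed, and the result is clamped to [0, 1]:
  • float shadow = saturate(lerp(shadowAttenuation, 1, (distance(PositionWS.xyz, _worldSpaceCameraPos.xyz) - 100) * 0.1));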
  • the volume cloud receives shadows, and the shadows attenuate as the distance from the camera increases, so as to further improve the authenticity of the volume cloud effect.
  • FIG. 13 is a flowchart of a volume cloud rendering method provided by another embodiment of the present application. As shown in Figure 13, step S13 also includes:
  • Step S101 calculating the first specular reflection information of each pixel point according to the surface normal vector of the drawing model and the observer's line of sight direction vector;
  • nv represents the first specular reflection information, which is the dot product result of the normal vector N and the observer's line of sight direction vector viewDirection(V), namely NdotV; viewDir.xyz represents the xyz component of the observer's line of sight direction vector.
  • Step S102 according to the noise threshold and the first specular reflection information, fit the second specular reflection information of each pixel point;
  • smoothnv represents the second specular reflection information, which is the smooth nv parameter after exponentiation.
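  • As given in the description:
  • float nv = saturate(dot(N, viewDirection.xyz));
  • float smoothnv = saturate(pow(nv, 2 - ClipValue));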
  • Step 103 using the first specular reflection information and the second specular reflection information as lighting parameters.
  • Optionally, all of the above information can be used as lighting parameters to calculate the total lighting parameter finalLit; as given in the description: float finalLit = saturate(smoothnv * 0.5 + lerp(1, shadow, nl) * saturate(smoothnl + sss) * (1 - nv * 0.5)).
  • FIG. 14 is a flowchart of a volume cloud rendering method provided by another embodiment of the present application. As shown in Figure 14, step S13 includes:
  • Step S111 acquiring ambient light parameters and main light source parameters
  • the ambient light parameters may include ambient light colors obtained through spherical harmonic illumination sampling.
  • the main light source parameters may include the main light source color.
  • Step S112 Calculate the pixel color corresponding to each pixel point based on the illumination parameter, the ambient light parameter and the main light source parameter to obtain illumination information.
  • SH represents the ambient light color sampled by spherical harmonic lighting
  • _AmbientContrast represents the influence factor (contrast) of the ambient light color
  • _DarkColor.rgb represents the color of the innermost dark part of the cloud
  • _Color.rgb represents the color of the outermost bright part of the cloud
  • MainLightColor represents the main light source color.
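  • The corresponding expressions given in the description (written here with the _DarkColor/_Color parameter names defined above) are:
  • float3 SH = SampleSH(i, N) * _AmbientContrast;
  • float4 finalColor = float4(lerp(_DarkColor.rgb + SH, _Color.rgb, finalLit), 1) * MainLightColor * 0.8;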
  • In the lighting calculation of the volume cloud, a variety of lighting parameters can thus be provided, so that the lighting effect of the volume cloud can be adjusted at any time and the realism of the displayed volume cloud improved.
  • In games, there are usually objects that travel through volumetric clouds, such as people, aircraft, spaceships, birds, dragons, and so on. To obtain a more realistic effect, the volume cloud also needs to be translucently blended with the objects located in the cloud.
  • FIG. 15 is a flowchart of a volume cloud rendering method provided by another embodiment of the present application. As shown in Figure 15, step S14 includes:
  • Step S121, performing edge detection according to the depth value of each pixel before rendering and the depth value of the volume cloud;
  • Step S122, determining the object to be blended that overlaps with the volume cloud according to the edge detection result;
  • Step S123, translucently blending the object to be blended with the volume cloud, and obtaining the volume cloud to be displayed based on the translucent blending result.
  • The object may be only partially located in the volume cloud; therefore, the part of the object lying inside the volume cloud needs to be determined for translucent blending. Since the volume cloud itself has a certain translucency, the blended part of the object that lies inside the volume cloud appears faintly through the cloud, which further improves the realism of the displayed volume cloud and object.
  • Specifically, the translucent blending of the volume cloud and the object can be implemented either in the post-processing stage after the volume cloud rendering is completed, or during the rendering stage of the volume cloud. The ways of implementing translucent blending in these two stages are described in detail below.
  • FIG. 16 is a flowchart of a volume cloud rendering method provided by another embodiment of the present application. As shown in FIG. 16 , in step S14, after rendering the rendering model according to the lighting information, and before obtaining the volume cloud to be displayed, step S123 includes:
  • Step S131, determining the coincident pixels of the object to be blended and the volume cloud;
  • Step S132 sampling to obtain the first color buffer value and the first depth buffer value before the overlapping pixel point rendering, and the second color buffer value and the second depth buffer value after the overlapping pixel point rendering;
  • Step S133, using the first color buffer value as the start-position input parameter of an interpolation calculator, the second color buffer value as the target-position input parameter of the interpolation calculator, and the difference between the first depth buffer value and the second depth buffer value as the interpolation-speed input parameter of the interpolation calculator, and taking the linear interpolation result computed by the interpolation calculator as the final pixel color of the coincident pixel;
  • step S134 the volume cloud to be displayed is obtained based on the final pixel color of the overlapping pixel points.
  • In the post-processing stage, the color buffer maps and depth buffer maps before and after volume cloud rendering can be obtained from the rendering pipeline: the first depth buffer value ZBuffer1 and the second depth buffer value ZBuffer2 of the coincident pixels are sampled from the two depth buffer maps, and the first color buffer value ColorBuffer1 and the second color buffer value ColorBuffer2 of the coincident pixels are sampled from the two color buffer maps. In the translucent blending process, two passes of the rendering pipeline, a Copy Color Pass and a Copy Depth Pass, are invoked to perform the color copy and depth copy that yield these color buffer values and depth buffer values.
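  • The final pixel color FinalColor of a coincident pixel is then computed, as given in the description, by:
  • FinalColor = lerp(ColorBuffer1, ColorBuffer2, ZBuffer1 - ZBuffer2);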
  • FIG. 17 is a flowchart of a volume cloud rendering method provided by another embodiment of the present application.
  • As shown in FIG. 17, in step S14, in the process of rendering the drawing model according to the lighting information, the above step S123 includes:
  • Step S141, determining the coincident pixels of the object to be blended and the volume cloud;
  • Step S142 sampling to obtain the color buffer value and the depth buffer value before the overlapping pixel point rendering, and the current color value and the current depth value of the overlapping pixel point;
  • Step S143, using the difference between the depth buffer value and the current depth value as the source blend factor, the color buffer value as the source color, and the current color value as the target color, performing the blending operation, and taking the blended pixel color as the final pixel color of the coincident pixel;
  • FinalColor represents the final pixel color
  • ColorBuffer represents the color buffer value
  • Z represents the current depth value
  • Zbuffer represents the depth buffer value
  • Color represents the current color value
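  • For example, using the blend given in the description:
  • FinalColor = ColorBuffer * (Z - Zbuffer) + Color * (1 - Z + Zbuffer);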
  • Step S144 rendering the drawing model based on the final pixel color of the overlapping pixel points to obtain a volume cloud to be displayed.
  • the Alpha Blend method can be used for translucent blending.
  • the specific calculation method is not limited to the above formula.
  • Other Alpha Blend formulas can be used, which will not be repeated here.
  • In addition, if translucent blending is done in the rendering stage and the mesh models are still rendered layer by layer from the inside out, overdraw (Over Draw) may occur: when the mesh model of the current layer is rendered, its inner-layer mesh models are repeatedly alpha-blended, which produces a large amount of extra overhead and a poorer display result. The rendering order of the volume cloud therefore needs to be reversed, i.e. the mesh models are rendered layer by layer from the outside in. Accordingly, rendering the drawing model based on the final pixel color of the coincident pixels includes: rendering each mesh model of the drawing model layer by layer in order from the outside to the inside. In this way, Over Draw can be effectively avoided, extra overhead reduced, and the final display effect improved.
  • FIG. 18 is a block diagram of a volumetric cloud rendering apparatus provided by an embodiment of the present application, and the apparatus may be implemented as part or all of an electronic device through software, hardware, or a combination of the two.
  • the volume cloud rendering device includes:
  • the drawing module 1 is configured to draw at least one layer of mesh model outward from the original mesh model of the volume cloud along the vertex normal vector;
  • the screening module 2 is used to screen the pixel points of the grid model based on the noise threshold corresponding to each layer of the grid model to obtain a drawing model;
  • the calculation module 3 is used for calculating the illumination information corresponding to the drawing model according to the illumination parameters
  • the processing module 4 is configured to render the rendering model according to the lighting information, and obtain the volume cloud to be displayed.
  • the screening module 2 is used to obtain the noise threshold corresponding to the grid model of each layer; to sample a preset noise map based on the grid model of each layer to obtain a noise value; to the grid model of each layer The model selects pixels whose noise threshold is less than or equal to the noise value to obtain the drawing model.
  • the screening module 2 is configured to obtain the noise function corresponding to each layer of the mesh model, the noise function being a linear function with the pixel coordinates as variables; obtain, according to the noise function, the noise boundary value corresponding to the pixels of each layer of the mesh model; and exponentiate the noise boundary value to obtain the noise threshold.
  • the device further includes:
  • an input module configured to input the vertex coordinates of the original mesh model as a first input parameter to a first shader in the graphics processor before calculating the illumination information corresponding to the rendering model according to the illumination parameters;
  • a first shader configured to obtain vertex coordinates of the drawing model according to the first input parameter.
  • the processing module 4 is used to buffer the vertex data of the original mesh model into the video memory; after sorting and batching the drawing commands corresponding to the mesh model of each layer, the obtained batching commands are obtained. adding to a command buffer; the graphics processor reads the batch command from the command buffer, and performs a rendering operation based on the batch command and vertex data of the original mesh model.
  • the processing module 4 is further configured to generate a material property block according to the noise threshold corresponding to the grid model of each layer and the offset of the grid model of each layer relative to the original grid model;
  • the material property block is input to the second shader in the image processor as a second input parameter;
  • the apparatus further includes: a second shader
  • the second shader is used for rendering the volume cloud according to the second input parameter, the batch command and the vertex data of the original mesh model.
  • the calculation module 3 is used to calculate the first diffuse reflection information corresponding to each pixel point according to the normal vector and the illumination direction vector of each pixel point of the drawing model; and calculating the pixel color corresponding to each pixel point based on the illumination parameter to obtain the illumination information.
  • the calculation module 3 is further configured to perform a half-Lambert calculation on the first diffuse reflection information to obtain a half-Lambert illumination parameter; obtain the noise threshold corresponding to each layer of the mesh model; fit the second diffuse reflection information of each pixel according to the noise threshold and the half-Lambert illumination parameter; and use the second diffuse reflection information as the illumination parameter.
  • the calculation module 3 is further configured to calculate the backward subsurface scattering information of each pixel according to the backlight subsurface scattering parameter and the observer's line-of-sight direction vector; calculate the forward subsurface scattering information of each pixel according to the toward-light subsurface scattering parameter and the observer's line-of-sight direction vector; obtain the influence factor corresponding to the forward subsurface scattering information; obtain the total subsurface scattering information according to the product of the forward subsurface scattering information and the influence factor, together with the backward subsurface scattering information; and use the total subsurface scattering information as the illumination parameter.
  • the calculation module 3 is also used to sample the shadow texture according to the defined light source shadow to obtain shadow parameters; perform attenuation calculation on the shadow parameters as the distance from the camera increases to obtain each pixel of the drawing model. Shadow information corresponding to the point; use the shadow information as the lighting parameter.
  • the calculation module 3 is further configured to calculate the first specular reflection information of each of the pixel points according to the surface normal vector of the drawing model and the observer's line of sight direction vector; according to the noise threshold and the first specular reflection information; The specular reflection information is fitted to obtain the second specular reflection information of each pixel point; the first specular reflection information and the second specular reflection information are used as the illumination parameters.
  • the calculation module 3 is configured to acquire ambient light parameters and main light source parameters; calculate the pixel color corresponding to each pixel point based on the illumination parameters, ambient light parameters and main light source parameters to obtain the illumination information.
  • the processing module 4 is configured to perform edge detection according to the depth value of each pixel before rendering and the depth value of the volume cloud; determine the object to be blended that overlaps with the volume cloud according to the edge detection result; translucently blend the object to be blended with the volume cloud; and obtain the volume cloud to be displayed based on the translucent blending result.
  • the processing module 4 includes:
  • the blending sub-module is configured to: after the drawing model is rendered according to the lighting information and before the volume cloud to be displayed is obtained, determine the coincident pixels of the object to be blended and the volume cloud; sample the first color buffer value and the first depth buffer value of the coincident pixels before rendering, and the second color buffer value and the second depth buffer value of the coincident pixels after rendering; use the first color buffer value as the start-position input parameter of an interpolation calculator, the second color buffer value as the target-position input parameter of the interpolation calculator, and the difference between the first depth buffer value and the second depth buffer value as the interpolation-speed input parameter of the interpolation calculator, and take the linear interpolation result computed by the interpolation calculator as the final pixel color of the coincident pixels; and obtain the volume cloud to be displayed based on the final pixel color of the coincident pixels.
  • the processing module 4 includes:
  • a rendering sub-module, configured to: in the process of rendering the drawing model according to the lighting information, determine the coincident pixels of the object to be blended and the volume cloud; sample the color buffer value and the depth buffer value of the coincident pixels before rendering, and the current color value and the current depth value of the coincident pixels; use the difference between the depth buffer value and the current depth value as the source blend factor, the color buffer value as the source color, and the current color value as the target color, perform the blending operation, and take the blended pixel color as the final pixel color of the coincident pixels; and render the drawing model based on the final pixel color of the coincident pixels to obtain the volume cloud to be displayed.
  • a rendering sub-module configured to render each grid model of the drawing model layer by layer in an order from the outside to the inside.
  • Various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in the volumetric cloud rendering apparatus according to the embodiment of the present invention.
  • the present invention can also be implemented as a program/instruction (eg, computer program/instruction and computer program product) for an apparatus or apparatus for performing some or all of the methods described herein.
  • Such programs/instructions implementing the present invention may be stored on a computer-readable medium, or may exist in the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
  • Computer-readable media includes both persistent and non-permanent, removable and non-removable media, and storage of information may be implemented by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cartridges, disk storage, quantum memory, graphene-based storage media or other magnetic storage devices or any other non-transmission media can be used to store information that can be accessed by computing devices.
  • FIG. 19 schematically shows a computer apparatus/device/system that can implement the volumetric cloud rendering method according to the present invention, the computer apparatus/device/system comprising a processor 410 and a computer-readable medium in the form of a memory 420 .
  • Memory 420 is an example of a computer-readable medium having storage space 430 for storing computer programs/instructions 431 .
  • When the computer program/instructions 431 are executed by the processor 410, the various steps in the volume cloud rendering method described above may be implemented.
  • Figure 20 schematically shows a block diagram of a computer program product implementing the method according to the invention.
  • The computer program product includes computer programs/instructions 510 that, when executed by a processor such as the processor 410 shown in FIG. 19, can implement the various steps of the volume cloud rendering method described above.

Abstract

A volume cloud rendering method, apparatus, program and readable medium. The method includes: drawing at least one layer of mesh model outward from the original mesh model of a volume cloud along the vertex normal vector (S11); screening the pixels of the mesh models based on the noise threshold corresponding to each layer of mesh model to obtain a drawing model (S12); calculating the illumination information corresponding to the drawing model according to illumination parameters (S13); and rendering the drawing model according to the illumination information to obtain the volume cloud to be displayed (S14). In this technical solution, the shape of the volume cloud is determined by the mesh model rather than by the shape of a noise map; to change the shape of the volume cloud, it is only necessary to set the number of additional layers to draw and the noise thresholds used to screen pixels, so there is no need to pre-select a specific noise map. Reducing the number of times the noise map is sampled also further lowers the performance overhead of generating the volume cloud, so that the volume cloud can run smoothly on mobile devices.

Description

一种体积云渲染方法、装置、程序和可读介质
交叉引用
本申请要求于2020年12月2日提交的申请号为202011388910.3、发明名称为“一种体积云渲染方法、装置、电子设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像渲染技术领域,尤其涉及一种体积云渲染方法、装置、程序和可读介质。
背景技术
体积云是游戏户外场景中一个重要的组成部分,在大多数的小规模三维场景或者视点位于地面附近的场景的实时绘制系统中,体积云通常采用光线步进(Ray Marching)或视差贴图方式绘制实现。
Ray Marching方式是通过将射线等长的向前拓展,采样3D噪声图,将多次采样结果叠加,生成体积云。
视差贴图方式是通过一张高度图,计算像素点应该偏移的采样位置,进而欺骗人的眼睛,产生体积感。
Ray Marching方式的特点就是效果非常真实,但是随之而来的就是性能开销非常大,需要每次计算射线交点的位置,再去采样一张3D噪声图。而且体积云的形状需要通过3D噪声图的形状来决定,因而定制特定的形状就需要特定的噪声图。当场景里云的形状非常多的时候,就需要很多不同的噪声图。
而视差贴图的特点是性能开销比较低,可以通过特定的算法来提高计算出来的偏移精度。但是,终究是欺骗眼睛的做法,当视线与模型表面有比较小的夹角的时候,体积感才会比较明显,当视线垂直于模型表面的时候,计算得出的偏移量为0,不会有体积感。而且在模型的边缘会有很强烈的穿帮现象。
因此,急需一种效果真实且性能开销小的体积云渲染方法。
发明内容
本发明提出以下技术方案以克服或者至少部分地解决或者减缓上述问题:
根据本申请实施例的一个方面,提供了一种体积云渲染方法,包括:
将体积云的原网格模型按照顶点法线方向向外绘制至少一层网格模型;
基于每层所述网格模型对应的噪声阈值对所述网格模型的像素点进行筛选,得到绘制模型;
根据光照参数计算所述绘制模型对应的光照信息;
根据所述光照信息对所述绘制模型进行渲染,得到待显示体积云。
根据本申请实施例的另一个方面,提供了一种体积云渲染装置,包括:
绘制模块,用于将体积云的原网格模型按照顶点法线方向向外绘制至少一层网格模型;
筛选模块,用于基于每层所述网格模型对应的噪声阈值对所述网格模型的像素点进行筛选,得到绘制模型;
计算模块,用于根据光照参数计算所述绘制模型对应的光照信息;
处理模块,用于根据所述光照信息对所述绘制模型进行渲染,得到待显示体积云。
根据本发明的又一个方面,提供了一种计算机装置/设备/系统,包括存储器、处理器及存储在存储器上的计算机程序/指令,所述处理器执行所述计算机程序/指令时实现上述第一方面所述方法的步骤。
根据本发明的再一个方面,提供了一种计算机可读介质,其上存储有计算机程序/指令,所述计算机程序/指令被处理器执行时实现上述第一方面所述方法的步骤。
根据本发明的再一个方面,提供了一种计算机程序产品,包括计算机程序/指令,所述计算机程序/指令被处理器执行时实现上述第一方面所述方法的步骤。
本发明的有益效果为:通过对原网格模型额外绘制至少一层网格模型,基于网格模型对预设噪声图进行采样得到的像素值,与每层网格模型设定的噪声阈值的比较结果,对各层网络模型的像素点进行筛选,最终得到体积云对应的绘制模型。这样,使得体积云的形状基于网格模型确定,而不是通过噪声图的形状来确定,如果想要改变体积云的形状,仅需要设定额外绘制的层数以及筛选像素点的噪声阈值即可,则无需预先选取特定的噪声图。另外,通过对模型的多次额外绘制,减少对噪声图的采用次数,也进一步降低了生成体积云的性能开销,使得体积云可以流畅地运行在移动端设备如手机上。再者,由于体积云是基于对模型的渲染得到,而不是通过模拟视差给人以立体感,因此,避免了体积云边缘处穿帮现象的产生,提高了体积云效果的真实性。
附图说明
通过阅读下文优选实施方式的详细描述,本发明的上述及各种其他的优点和益处对于本领域普通技术人员将变得清楚明了。在附图中:
图1为本申请实施例提供的一种体积云渲染方法的流程图;
图2为本申请实施例提供的绘制网格模型的示意图;
图3为本申请另一实施例提供的一种体积云渲染方法的流程图;
图4为本申请实施例提供的体积云模型的示意图;
图5为本申请另一实施例提供的体积云模型的示意图;
图6为本申请另一实施例提供的一种体积云渲染方法的流程图;
图7为本申请另一实施例提供的一种体积云渲染方法的流程图;
图8为本申请另一实施例提供的一种体积云渲染方法的流程图;
图9为本申请另一实施例提供的一种体积云渲染方法的流程图;
图10为本申请另一实施例提供的一种体积云渲染方法的流程图;
图11为本申请另一实施例提供的一种体积云渲染方法的流程图;
图12为本申请另一实施例提供的一种体积云渲染方法的流程图;
图13为本申请另一实施例提供的一种体积云渲染方法的流程图;
图14为本申请另一实施例提供的一种体积云渲染方法的流程图;
图15为本申请另一实施例提供的一种体积云渲染方法的流程图;
图16为本申请另一实施例提供的一种体积云渲染方法的流程图;
图17为本申请另一实施例提供的一种体积云渲染方法的流程图;
图18为本申请实施例提供的一种体积云渲染装置的框图;
图19示意性地示出了用于实现根据本发明的方法的计算机装置/设备/系统的框图;以及
图20示意性地示出了实现根据本发明的方法的计算机程序产品的框图。
具体实施例
下面结合附图和具体的实施方式对本发明作进一步的描述。以下描述仅为说明本发明的基本原理而并非对其进行限制。
体积云(Volumetric Clouds),一般称为容积云,在游戏中的体积云就是使用图像引擎来模拟真实云雾半透明、无规则的表现效果。
目前,随着移动端游戏的发展,考虑到移动端如手机等性能限制,需要移动端游戏在保证效果真实性的前提下,尽可能较低性能开销,尤其是渲染阶段的性能开销。
下面首先对本发明实施例所提供的一种体积云渲染方法进行介绍。
图1为本申请实施例提供的一种体积云渲染方法的流程图。如图1所示,该方法包括以下步骤S11至步骤S14:
步骤S11,将体积云的原网格模型按照顶点法线方向向外绘制至少一层网格模型。
如图2所示,将体积云的原Mesh模型21按照顶点法线方向,向外等距地额外绘制N次,N为大于或等于1的整数,得到多层网格模型22。
步骤S12,基于每层网格模型对应的噪声阈值对网格模型的像素点进行筛选,得到绘制模型。
基于各层网格模型对预设的噪声图进行采样,将采样到的各个像素点的像素值与预设的噪声阈值(Clip Value)进行比较,以筛选出符合要求的像素点,得到绘制模型。
步骤S13,根据光照参数计算绘制模型对应的光照信息;
步骤S14,根据光照信息对绘制模型进行渲染,得到待显示体积云。
在该步骤中对绘制模型的渲染过程中,可以从最内层网格模型开始逐层渲染,直到最外层网格。
通过上述步骤S11至步骤S14,通过对原网格模型额外绘制N层网格模型,基 于网格模型对预设噪声图进行采样得到的像素值,与每层网格模型设定的噪声阈值的比较结果,对各层网络模型的像素点进行筛选,最终得到体积云对应的绘制模型。这样,使得体积云的形状基于网格模型确定,而不是通过噪声图的形状来确定,如果想要改变体积云的形状,仅需要设定额外绘制的层数以及筛选像素点的噪声阈值即可,则无需预先选取特定的噪声图。另外,通过对模型的多次额外绘制,减少对噪声图的采用次数,也进一步降低了生成体积云的性能开销,使得体积云可以流畅地运行在移动端设备如手机上。再者,由于体积云是基于对模型的渲染得到,而不是通过模拟视差给人以立体感,因此,避免了体积云边缘处穿帮现象的产生,提高了体积云效果的真实性。
下面对上述各个步骤进行具体说明。
图3为本申请另一实施例提供的一种体积云渲染方法的流程图。如图3所示,上述步骤S12包括以下步骤S21至步骤S23:
步骤S21,获取每层网格模型对应的噪声阈值;
步骤S22,基于每层网格模型对预设噪声图进行采样,得到噪声值;
步骤S23,对每层网格模型筛选噪声阈值小于或等于噪声值的像素点,得到绘制模型。
如图2所示,曲线23表示基于网络模型对预设噪声图采样得到的噪声值,各层网络模型22均设有其对应的Clip Value。将Clip Value大于噪声值的像素点抛弃,即图2中的虚线部分;仅保留Clip Value小于或等于噪声值的像素点,得到绘制模型,即图2中的实线部分。
上述实施例中,Clip Value可以基于预设线性噪声函数计算得到,如一次函数y=kx+b(k,b是常数,k≠0),y表示Clip Value,x表示像素点坐标。但是,如果噪声函数为线性,则会导致最终的体积云模型的边缘变尖,如图4所示,体积云效果真实度较差。
为了提高显示效果的真实性,可以对Clip Value进行非线性化。可选的,上述步骤S21包括以下步骤A1至步骤A3:
步骤A1,获取每层网格模型对应的噪声函数,噪声函数为以像素点的坐标为变量的线性函数;
步骤A2,根据噪声函数得到每层网格模型像素点对应的噪声边界值;
步骤A3,对噪声边界值进行幂运算,得到噪声阈值。
通过步骤A1至步骤A3,对对Clip Value进行幂运算,使得对Clip Value非线性化,这样,如图5所示,筛选后的体积云模型边缘变得平滑,提高体积云效果的真实度。
在上述实施例中,经过对原网格模型额外绘制N次并基于噪声值筛选而得到的绘制模型,需要基于原网格模型的顶点生成绘制模型的顶点。生成顶点的方式可以有以下两种,具体如下:
(1)通过几何着色器创建顶点。
图6为本申请另一实施例提供的一种体积云渲染方法的流程图。如图6所示,上述步骤S13之前,该方法还包括以下步骤:
步骤S31,将原网格模型的顶点坐标作为第一输入参数输入图形处理器中的第一着色器;
步骤S32,通过带第一输入参数的第一着色器,得到绘制模型的顶点坐标。
其中,第一着色器为几何着色器。
通过步骤S31和步骤S32,由几何着色器基于原网格模型新增顶点。由于几何着色器创建顶点的操作在图形处理器(Graphics Processing Unit,GPU)中进行,不占用CPU性能开销。
但是,几何着色器的顶点缓冲器输出是由大小限制的,如不可超过1024个浮点数(float),即对输出顶点的数量有限制。另外,大部分移动端设备不支持几何着色器,使得体积云无法在移动端渲染。
(2)通过GPU-Instance技术进行渲染
图7为本申请另一实施例提供的一种体积云渲染方法的流程图。如图7所示,步骤S14中,根据光照信息对绘制模型进行渲染,包括以下步骤:
步骤S41,将原网格模型的顶点数据缓存至显存中;
步骤S42,将每层网格模型对应的绘制命令进行排序并合批后,将得到的合批命令添加至命令缓冲区;
步骤S43,由图形处理器从命令缓冲区读取合批命令,基于合批命令及原网格模型的顶点数据执行渲染操作。
其中,图形渲染过程中所产生的开销包括在CPU上执行的开销及在GPU上执行的开销。其中,CPU上执行的开销主要包括以下三类:第一类,驱动提交渲染命令的开销;第二类,驱动提交状态命令导致的状态命令切换的开销;以及第三类,其他由于API被调用导致加载或是同步数据的驱动开销。
通过批次合并(即将合理的方式将渲染状态相同多个可渲染物的Draw绘制数据合并到一批绘制),以及实例渲染(即将诸多几何数据近似的可渲染物通过一次DrawInstance函数绘制,而将这些可渲染物的区别通过数组传入渲染命令中),可以显著降低第一类开销。通过对可渲染物进行有效的排序,将状态相同的部分的可渲染物尽可能依次渲染,从而减少状态的切换,可以较明显减少第二类开销。因此在执行渲染之前,可以通过上述两种方式对数据进行预处理,可以有效降低CPU在图形渲染过程中的性能开销。
上述步骤S41至步骤S43,由于体积云各层网格模型相同,因此,将多次调用的绘制命令(DrawCall)合批,通过一个DrawCall批渲染相同的多层网格模型。这样,通过减少DrawCall数量可以降低CPU性能开销。另外,由于体积云渲染的整体过程耗时相对较多,因此,CPU上额外增加的排序合批操作耗时可忽略不计,不会对整体过程产生明显性能影响。
可选的,在采用GPU-Instance技术进行渲染的过程中,CPU向GPU传递材质属性信息可以通过以下方式实现。
图8为本申请另一实施例提供的一种体积云渲染方法的流程图。如图8所示,步骤S14中,根据光照信息对绘制模型进行渲染,还包括以下步骤:
步骤S51,根据每层网格模型对应的噪声阈值及每层网格模型相对于原网格模 型的偏移量,生成材质属性块;
步骤S52,将材质属性块作为第二输入参数输入图像处理器中的第二着色器;
上述步骤S43,包括:
步骤S53,由带第二输入参数的第二着色器,根据合批命令及原网格模型的顶点数据进行体积云的渲染。
本实施例中,由于各层网格模型的材质相同,区别仅在于相对于原网格模型的偏移量及Clip Value,因此,在传递材质属性信息是,可将每一层的偏移量及Clip Value,包装到材质属性块MaterialPropertyBlock,传递给GPU中的着色器。通过使用材质属性块,可以降低操作材质的耗时,提高材质操作的速度;另外,配合上述GPU-Instance技术,可以进一步的提高性能,省去实体对象本身的开销,减少DrawCall,降低CPU开销及内存开销。
本实施例中,由于体积云受太阳光影响很大,可以将太阳光作为主光源,基于多种光照参数计算体积云对应的光照信息。
首先,可以采用Lambert模型计算光照信息。
图9为本申请另一实施例提供的一种体积云渲染方法的流程图。如图9所示,步骤S13包括:
步骤S61,根据绘制模型各像素点的法线向量及光照方向向量计算各像素点对应的第一漫反射信息。
其中,第一漫反射信息可以为像素点对应的颜色强度系数的nl(NdotL),
float nl=max(0.0,dot(N,L)),或者,nl=saturate(dot(N,L));
其中,nl表示第一漫反射信息,N表示法线向量,L表示光照方向向量,dot()表示点积计算,NdotL表示N与L的点积结果。saturate函数在计算单位向量点积时与max函数的结果一致,但saturate函数效率更高一些。saturate(x)的作用是如果x取值小于0,则返回值为0。如果x取值大于1,则返回值为1。若x在0到1之间,则直接返回x的值。
步骤S62,将第一漫反射信息作为光照参数;
步骤S63,基于光照参数计算各像素点对应的像素颜色,得到光照信息。
通过Lambert模型计算的光照信息,体积云背光面的光照效果不理想,因此,可采用HalfLambert模型计算的光照信息。
图10为本申请另一实施例提供的一种体积云渲染方法的流程图。如图10所示,步骤S13还包括:
步骤S71,对第一漫反射信息进行半兰博计算,得到半兰博光照参数;
float HalfLambertnl=dot(N,L)*0.5+0.5;
其中,HalfLambertnl表示与nl相关的半兰博光照参数。
步骤S72,获取每层网格模型对应的噪声阈值;
步骤S73,根据噪声阈值及半兰博光照参数,拟合得到各像素点的第二漫反射信息;
float Smoothnl=saturate(pow(HalfLambertnl,2-ClipValue));
其中,smoothnl表示第二漫反射信息,为经过幂运算的平滑NdotL参数, ClipValue表示网格模型的噪声阈值,pow()表示幂运算。
步骤S74,将第二漫反射信息作为光照参数。
通过上述步骤S71,计算半兰博光照参数,以提高物体表面的漫反射光,尤其是可以提高体积云背光面的光照效果,提升体积云视觉效果的真实度。另外,通过上述步骤S73,将每层网格模型的噪声阈值拟合到漫反射信息中,可以使得体积云凸起的部分亮度增加,进一步提高体积云视觉效果的真实度。
另外,体积云的次表面散射情况对体积云视觉观感影响较大,因此,在计算体积云光照信息时添加次表面散射(Subsurface Scattering,SSS)参数。
图11为本申请另一实施例提供的一种体积云渲染方法的流程图。如图11所示,步骤S13还包括:
步骤S81,根据背光次表面散射参数及观察者视线方向向量计算各像素点的后向次表面散射信息;
float3 backLitDirection=-(lightDirection+(1-backSSSRange)*N);
float backsss=saturate(dot(viewDirection,backLitDirection));
backsss=saturate(pow(backsss,2+ClipValue*2)*1.5);
其中,backsss表示背光SSS光照的强度信息,backLitDirection表示SSS光照的背光方向向量,lightDirection表示光线方向向量,backSSSRange表示背光SSS的散射范围,viewDirection表示观察者视线方向向量,ClipValue表示网格模型的噪声阈值。
步骤S82,根据向光次表面散射参数及观察者视线方向向量计算各像素点的前向次表面散射信息;
float frontsss=saturate(dot(viewDirection,frontLitDirection));
其中,frontsss表示向光SSS光照的强度信息,frontLitDirection表示SSS光照的向光方向向量。
步骤S83,获取前向次表面散射信息对应的影响因子。
步骤S84,根据前向次表面散射信息与影响因子的乘积,以及后向次表面散射信息,得到总次表面散射信息;
float sss=saturate(backsss+FrontSSSIntensity*frontsss);
其中,sss表示总次表面散射信息,FrontSSSIntensity表示前向SSS光照的敏感度(影响因子)。
步骤S85,将总次表面散射信息作为光照参数。
通过上述步骤S81中增加背光SSS信息,增加背光时体积云的通透感,通过步骤S82增加向光SSS信息,增加光子正面射入云,在云的内部散射,再从正面射出的效果。
可选的,由于向光SSS信息对体积云的观感影响不大,因此可以将上述影响因子FrontSSSIntensity设为0,即在计算体积云的光照信息时不考虑向光SSS信息。
为了使得体积云的效果更加真实,还要让体积云接受阴影。
图12为本申请另一实施例提供的一种体积云渲染方法的流程图。如图12所示,步骤S13还包括:
步骤S91,根据定义的光源阴影,对阴影纹理进行采样,得到阴影参数;
步骤S92,将阴影参数随着与相机距离的增加进行衰减计算,得到绘制模型各像素点对应的阴影信息;
步骤S93,将阴影信息作为光照参数。
具体让体积云接收阴影可通过下述方式实现:
float shadowAttenuation;
#if defined(_MAIN_LIGHT_SHADOWS)
ShadowAttenuation=MainLightRealtimeShadow(i.shadowCoord);
#else
ShadowAttenuation=1;
#endif
float shadow=saturate(lerp(shadowAttenuation,1,(distance(PositionWS.xyz,_worldSpaceCameraPos.xyz)-100)*0.1));
其中,shadowAttenuation用于表示实时阴影纹理经主光源的阴影位置采样之后得到的值,作为阴影信息。PositionWS表示像素点(片元)在世界空间的位置坐标,_worldSpaceCameraPos表示该相机在世界空间中的坐标,distance()为着色器中求距离的函数,通过distance()函数计算像素点与相机之间的距离。
在计算阴影时,以shadowAttenuation作为差值计算器Lerp的起始位置输入参数,1作为差值计算器Lerp的目标位置输入参数,像素点与相机之间的距离作为差值计算器Lerp的插值速度输入参数,将差值计算结果归于[0,1]得到最终的阴影参数。
通过上述步骤S91至步骤S93,通过让体积云接收阴影,且阴影随着与相机距离的增加而衰减,进一步提高体积云效果的真实性。
图13为本申请另一实施例提供的一种体积云渲染方法的流程图。如图13所示,步骤S13还包括:
步骤S101,根据绘制模型的表面法线向量及观察者视线方向向量计算各像素点的第一镜面反射信息;
float nv=saturate(dot(N,viewDirection.xyz));
其中,nv表示第一镜面反射信息,为法线向量N和观察者视线方向向量viewDirection(V)的点乘结果,即NdotV;viewDir.xyz表示观察者视线方向向量的xyz分量。
步骤S102,根据噪声阈值及第一镜面反射信息,拟合得到各像素点的第二镜面反射信息;
float smoothnv=saturate(pow(nv,2-ClipValue));
其中,smoothnv表示第二镜面反射信息,为经过幂运算之后的平滑nv参数。
步骤103,将第一镜面反射信息和第二镜面反射信息作为光照参数。
可选的,可以使用上述所有信息作为光照参数来计算总光照参数finalLit,
float finalLit=
saturate(smoothnv*0.5+lerp(1,shadow,nl)*saturate(smoothnl+sss)*(1-nv*0.5))。
图14为本申请另一实施例提供的一种体积云渲染方法的流程图。如图14所示, 步骤S13包括:
步骤S111,获取环境光参数和主光源参数;
其中,环境光参数可以包括经球谐光照采样得到的环境光颜色。主光源参数可以包括主光源颜色。
步骤S112,基于光照参数、环境光参数及主光源参数计算各像素点对应的像素颜色,得到光照信息。
float3 SH=SampleSH(i,N)*_AmbientContrast;
float4 finalColor=
float4(lerp(DarkColor.rgb+SH,Color.rgb,finalLit),1)*MainLightColor*0.8;
其中,SH表示经球谐光照采样得到的环境光颜色,_AmbientContrast表示环境光颜色的影响因数(对比度),_DarkColor.rgb表示云最内层暗色部分的颜色,_Color.rgb表示云最外层亮色部分的颜色,MainLightColor表示主光源颜色。
在上述实施例中,在体积云的光照计算中,可以提供多种光照参数,随时调整体积云的光照效果,提高体积云显示的真实度。
在游戏中,通常会有在体积云中穿梭的物体,如人、飞行器、飞船、鸟类、龙等等。为了得到更加真实的效果,还需要将体积云与位于云中的物体做半透明混合。
图15为本申请另一实施例提供的一种体积云渲染方法的流程图。如图15所示,步骤S14包括:
步骤S121,根据渲染前各像素点的深度值及体积云的深度值,进行边缘检测;
步骤S122,根据边缘检测结果确定与体积云重合的待混合物体;
步骤S123,将待混合物体及体积云进行半透明混合,基于半透明混合结果得到待显示体积云。
其中,物体可能部分位于体积云中,因此,需要确定该物体位于体积云中的部分来进行半透明混合。由于体积云具有一定的半透明效果,因此半透明混合后的物体,位于体积云中的部分呈现若隐若现的效果,进一步提高体积云及物体显示效果的真实度。
具体地,体积云与物体的半透明混合可以在体积云渲染完成后的后效阶段实现,也可在体积云的渲染阶段实现。下面分别对这两个阶段实现半透明混合的方式进行详细说明。
(一)后效阶段的半透明混合
图16为本申请另一实施例提供的一种体积云渲染方法的流程图。如图16所示,步骤S14中,在根据光照信息对绘制模型进行渲染之后,得到待显示体积云之前,步骤S123包括:
步骤S131,确定待混合物体与体积云的重合像素点;
步骤S132,采样得到重合像素点渲染前的第一颜色缓冲值和第一深度缓冲值,以及重合像素点渲染后的第二颜色缓冲值和第二深度缓冲值;
步骤S133,将第一颜色缓冲值作为插值计算器的起始位置输入参数,第二颜色缓冲值作为插值计算器的目标位置输入参数,第一深度缓冲值与第二深度缓冲值的差值作为插值计算器的插值速度输入参数,得到插值计算器计算得到的线性插值结 果,作为重合像素点的最终像素颜色;
步骤S134,基于重合像素点的最终像素颜色得到待显示体积云。
在后效阶段,可以从渲染管线中获得体积云渲染前和渲染后的颜色缓冲贴图和深度缓冲贴图,从2张深度缓冲贴图中采样到重合像素点的第一深度缓冲值ZBuffer1和第二深度缓冲值ZBuffer2,从2张颜色缓冲贴图中采样得到重合像素点的第一颜色缓冲值ColorBuffer1和第二颜色缓冲值ColorBuffer2。
计算得到重合像素点的最终像素颜色FinalColor如下:
FinalColor=lerp(ColorBuffer1,ColorBuffer2,Zbuffer1–Zbuffer2)。
在半透明混合过程中,需要调用渲染管线的2个Pass进行颜色拷贝和深度拷贝:Copy Color Pass和Copy Depth Pass,通过颜色拷贝和深度拷贝得到颜色缓冲值和深度缓冲值。
(二)渲染阶段的半透明混合
图17为本申请另一实施例提供的一种体积云渲染方法的流程图。如图17所示,步骤S14中,在根据光照信息对绘制模型进行渲染的过程中,上述步骤S123包括:
步骤S141,确定待混合物体与体积云的重合像素点;
步骤S142,采样得到重合像素点渲染前的颜色缓冲值和深度缓冲值,以及重合像素点当前颜色值及当前深度值;
步骤S143,将深度缓冲值及当前深度值的差值作为源混合因子,将颜色缓冲值作为源颜色,将当前颜色值作为目标颜色,进行混合运算,将混合后的像素颜色作为重合像素点的最终像素颜色;
FinalColor=ColorBuffer×(Z-Zbuffer)+Color×(1-Z+Zbuffer);
其中,FinalColor表示最终像素颜色,ColorBuffer表示颜色缓冲值,Z表示当前深度值,Zbuffer表示深度缓冲值,Color表示当前颜色值。
步骤S144,基于重合像素点的最终像素颜色对绘制模型进行渲染,得到待显示体积云。
在渲染阶段,可以使用Alpha Blend方式进行半透明混合,具体计算方式并不限于上述公式,可以采用其他Alpha Blend公式,在此不再赘述。
另外,如果在渲染阶段做半透明混合,如果还从内向外逐层渲染网格模型,则可能出现过度绘制(Over Draw),即在对当前层的网格模型进行渲染时,其内层网格模型被重复Alpha Blend,产生大量的额外开销,显示效果也较差。因此,需要将体积云的渲染顺序反过来,即从外向内逐层渲染网格模型。因此,上述步骤S143中,基于重合像素点的最终像素颜色对绘制模型进行渲染,包括:按照从外向内的顺序将绘制模型的各网格模型进行逐层渲染。这样,可以有效避免Over Draw,减少额外开销,提升最终显示效果。
下述为本申请装置实施例,可以用于执行本申请方法实施例。
图18为本申请实施例提供的一种体积云渲染装置的框图,该装置可以通过软件、硬件或者两者的结合实现成为电子设备的部分或者全部。如图18所示,该体积云渲染装置包括:
绘制模块1,用于将体积云的原网格模型按照顶点法线向量向外绘制至少一层 网格模型;
筛选模块2,用于基于每层网格模型对应的噪声阈值对网格模型的像素点进行筛选,得到绘制模型;
计算模块3,用于根据光照参数计算绘制模型对应的光照信息;
处理模块4,用于根据光照信息对绘制模型进行渲染,得到待显示体积云。
可选的,筛选模块2,用于获取每层所述网格模型对应的噪声阈值;基于每层所述网格模型对预设噪声图进行采样,得到噪声值;对每层所述网格模型筛选所述噪声阈值小于或等于所述噪声值的像素点,得到所述绘制模型。
可选的,筛选模块2,用于获取每层所述网格模型对应的噪声函数,所述噪声函数为以所述像素点的坐标为变量的线性函数;根据所述噪声函数得到每层所述网格模型像素点对应的噪声边界值;对所述噪声边界值进行幂运算,得到所述噪声阈值。
可选的,该装置还包括:
输入模块,用于在根据光照参数计算所述绘制模型对应的光照信息之前,将所述原网格模型的顶点坐标作为第一输入参数输入图形处理器中的第一着色器;
第一着色器,用于根据所述第一输入参数得到所述绘制模型的顶点坐标。
可选的,处理模块4,用于将所述原网格模型的顶点数据缓存至显存中;将每层所述网格模型对应的绘制命令进行排序并合批后,将得到的合批命令添加至命令缓冲区;由图形处理器从所述命令缓冲区读取所述合批命令,基于所述合批命令及所述原网格模型的顶点数据执行渲染操作。
可选的,处理模块4,还用于根据每层所述网格模型对应的噪声阈值及每层所述网格模型相对于所述原网格模型的偏移量,生成材质属性块;将所述材质属性块作为第二输入参数输入所述图像处理器中的第二着色器;
该装置还包括:第二着色器;
第二着色器,用于根据所述第二输入参数、所述合批命令及所述原网格模型的顶点数据进行所述体积云的渲染。
可选的,计算模块3,用于根据所述绘制模型各像素点的法线向量及光照方向向量计算各所述像素点对应的第一漫反射信息;将所述第一漫反射信息作为所述光照参数;基于所述光照参数计算各所述像素点对应的像素颜色,得到所述光照信息。
可选的,计算模块3,还用于对所述第一漫反射信息进行半兰博计算,得到半兰博光照参数;获取每层所述网格模型对应的噪声阈值;根据所述噪声阈值及所述半兰博光照参数,拟合得到各所述像素点的第二漫反射信息;将所述第二漫反射信息作为所述光照参数。
可选的,计算模块3,还用于根据背光次表面散射参数及观察者视线方向向量计算各所述像素点的后向次表面散射信息;根据向光次表面散射参数及所述观察者视线方向向量计算各所述像素点的前向次表面散射信息;获取所述前向次表面散射信息对应的影响因子;根据所述前向次表面散射信息与所述影响因子的乘积,以及所述后向次表面散射信息,得到总次表面散射信息;将所述总次表面散射信息作为所述光照参数。
可选的,计算模块3,还用于根据定义的光源阴影,对阴影纹理进行采样,得到阴影参数;将所述阴影参数随着与相机距离的增加进行衰减计算,得到所述绘制模型各像素点对应的阴影信息;将所述阴影信息作为所述光照参数。
Optionally, the computing module 3 is further configured to: compute first specular reflection information for each pixel point according to the surface normal vector of the rendering model and the viewer's line-of-sight direction vector; fit second specular reflection information for each pixel point according to the noise threshold and the first specular reflection information; and take the first specular reflection information and the second specular reflection information as the lighting parameters.
Optionally, the computing module 3 is configured to: obtain an ambient light parameter and a main light source parameter; and compute the pixel color corresponding to each pixel point based on the lighting parameter, the ambient light parameter, and the main light source parameter, to obtain the lighting information.
Optionally, the processing module 4 is configured to: perform edge detection according to the depth value of each pixel point before rendering and the depth value of the volume cloud; determine, from the edge detection result, the object to be blended that overlaps the volume cloud; and perform translucent blending of the object to be blended and the volume cloud, obtaining the volume cloud to be displayed based on the translucent blending result.
Optionally, the processing module 4 includes:
a blending sub-module, configured to: after the rendering model is rendered according to the lighting information and before the volume cloud to be displayed is obtained, determine the overlapping pixel points between the object to be blended and the volume cloud; sample the first color buffer value and the first depth buffer value of the overlapping pixel points before rendering, and the second color buffer value and the second depth buffer value of the overlapping pixel points after rendering; take the first color buffer value as the start-position input parameter of the interpolator, the second color buffer value as the target-position input parameter of the interpolator, and the difference between the first depth buffer value and the second depth buffer value as the interpolation-speed input parameter of the interpolator, taking the linear interpolation result computed by the interpolator as the final pixel color of the overlapping pixel points; and obtain the volume cloud to be displayed based on the final pixel color of the overlapping pixel points.
Optionally, the processing module 4 includes:
a rendering sub-module, configured to: during the rendering of the rendering model according to the lighting information, determine the overlapping pixel points between the object to be blended and the volume cloud; sample the color buffer value and the depth buffer value of the overlapping pixel points before rendering, as well as the current color value and the current depth value of the overlapping pixel points; take the difference between the depth buffer value and the current depth value as the source blend factor, the color buffer value as the source color, and the current color value as the target color, perform the blending operation, and take the blended pixel color as the final pixel color of the overlapping pixel points; and render the rendering model based on the final pixel color of the overlapping pixel points, to obtain the volume cloud to be displayed.
Optionally, the rendering sub-module is configured to render each mesh model of the rendering model layer by layer in order from the outside in.
The components of the embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that, in practice, a microprocessor or a digital signal processor (DSP) may be used to implement some or all of the functions of some or all of the components of the volume cloud rendering apparatus according to the embodiments of the present invention. The present invention may also be implemented as programs/instructions (for example, computer programs/instructions and computer program products) of a device or apparatus for performing part or all of the method described herein. Such programs/instructions implementing the present invention may be stored on a computer-readable medium, or may exist in the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
The computer-readable medium includes permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media, other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
FIG. 19 schematically shows a computer apparatus/device/system that can implement the volume cloud rendering method according to the present invention. The computer apparatus/device/system includes a processor 410 and a computer-readable medium in the form of a memory 420. The memory 420 is an example of a computer-readable medium and has a storage space 430 for storing computer programs/instructions 431. When the computer programs/instructions 431 are executed by the processor 410, the steps of the volume cloud rendering method described above can be carried out.
FIG. 20 schematically shows a block diagram of a computer program product implementing the method according to the present invention. The computer program product includes computer programs/instructions 510 which, when executed by a processor such as the processor 410 shown in FIG. 19, can carry out the steps of the volume cloud rendering method described above.
Specific embodiments of this specification have been described above; together with other embodiments, they fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the accompanying drawings do not necessarily require the particular order shown, or a strictly sequential order, to achieve the desired results. In certain implementations, multitasking and parallel processing are also possible or may be advantageous.
It should also be noted that the terms "comprise", "include", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
It should be understood that the above embodiments are merely illustrative of the present invention and are not intended to limit it. Those skilled in the art may also implement the present invention in other ways without departing from its essential spirit and characteristics. The scope of the present invention shall be defined by the appended claims, and any modification, equivalent replacement, or improvement made within the spirit and principles of one or more embodiments of this specification shall be covered therein.

Claims (20)

  1. A volume cloud rendering method, characterized by comprising:
    drawing at least one layer of mesh model outward from an original mesh model of a volume cloud in the vertex normal direction;
    screening pixel points of the mesh models based on a noise threshold corresponding to each layer of the mesh model, to obtain a rendering model;
    computing lighting information corresponding to the rendering model according to lighting parameters;
    rendering the rendering model according to the lighting information, to obtain a volume cloud to be displayed.
  2. The method according to claim 1, characterized in that screening the pixel points of the mesh models based on the noise threshold corresponding to each layer of the mesh model to obtain the rendering model comprises:
    obtaining the noise threshold corresponding to each layer of the mesh model;
    sampling a preset noise map based on each layer of the mesh model to obtain noise values;
    screening, for each layer of the mesh model, the pixel points whose noise threshold is less than or equal to the noise value, to obtain the rendering model.
  3. The method according to claim 2, characterized in that obtaining the noise threshold corresponding to each layer of the mesh model comprises:
    obtaining a noise function corresponding to each layer of the mesh model, the noise function being a linear function of the coordinates of the pixel points;
    obtaining, from the noise function, a noise boundary value corresponding to the pixel points of each layer of the mesh model;
    applying a power operation to the noise boundary value to obtain the noise threshold.
  4. The method according to claim 1, characterized in that, before computing the lighting information corresponding to the rendering model according to the lighting parameters, the method further comprises:
    inputting vertex coordinates of the original mesh model, as first input parameters, into a first shader in a graphics processor;
    obtaining vertex coordinates of the rendering model through the first shader with the first input parameters.
  5. The method according to claim 2, characterized in that rendering the rendering model according to the lighting information comprises:
    caching vertex data of the original mesh model in video memory;
    sorting and batching draw commands corresponding to each layer of the mesh model, and adding the resulting batched command to a command buffer;
    reading, by a graphics processor, the batched command from the command buffer, and performing the rendering operation based on the batched command and the vertex data of the original mesh model.
  6. The method according to claim 5, characterized in that rendering the rendering model according to the lighting information further comprises:
    generating a material property block according to the noise threshold corresponding to each layer of the mesh model and an offset of each layer of the mesh model relative to the original mesh model;
    inputting the material property block, as second input parameters, into a second shader in the graphics processor;
    wherein reading, by the graphics processor, the batched command from the command buffer and performing the rendering operation based on the batched command and the vertex data of the original mesh model comprises:
    rendering, by the second shader with the second input parameters, the volume cloud according to the batched command and the vertex data of the original mesh model.
  7. The method according to claim 1, characterized in that computing the lighting information corresponding to the rendering model according to the lighting parameters comprises:
    computing first diffuse reflection information corresponding to each pixel point according to a normal vector of each pixel point of the rendering model and a lighting direction vector;
    taking the first diffuse reflection information as the lighting parameter;
    computing a pixel color corresponding to each pixel point based on the lighting parameter, to obtain the lighting information.
  8. The method according to claim 7, characterized in that computing the lighting information corresponding to the rendering model according to the lighting parameters further comprises:
    performing a half-Lambert computation on the first diffuse reflection information to obtain a half-Lambert lighting parameter;
    obtaining the noise threshold corresponding to each layer of the mesh model;
    fitting second diffuse reflection information for each pixel point according to the noise threshold and the half-Lambert lighting parameter;
    taking the second diffuse reflection information as the lighting parameter.
  9. The method according to claim 7, characterized in that computing the lighting information corresponding to the rendering model according to the lighting parameters further comprises:
    computing backward subsurface scattering information for each pixel point according to a backlit subsurface scattering parameter and a viewer line-of-sight direction vector;
    computing forward subsurface scattering information for each pixel point according to a toward-light subsurface scattering parameter and the viewer line-of-sight direction vector;
    obtaining an influence factor corresponding to the forward subsurface scattering information;
    obtaining total subsurface scattering information from the product of the forward subsurface scattering information and the influence factor, together with the backward subsurface scattering information;
    taking the total subsurface scattering information as the lighting parameter.
  10. The method according to claim 7, characterized in that computing the lighting information corresponding to the rendering model according to the lighting parameters further comprises:
    sampling a shadow texture according to a defined light-source shadow to obtain a shadow parameter;
    attenuating the shadow parameter as the distance to the camera increases, to obtain shadow information corresponding to each pixel point of the rendering model;
    taking the shadow information as the lighting parameter.
  11. The method according to claim 7, characterized in that computing the lighting information corresponding to the rendering model according to the lighting parameters further comprises:
    computing first specular reflection information for each pixel point according to a surface normal vector of the rendering model and a viewer line-of-sight direction vector;
    fitting second specular reflection information for each pixel point according to the noise threshold and the first specular reflection information;
    taking the first specular reflection information and the second specular reflection information as the lighting parameters.
  12. The method according to claim 1, characterized in that computing the lighting information corresponding to the rendering model according to the lighting parameters comprises:
    obtaining an ambient light parameter and a main light source parameter;
    computing a pixel color corresponding to each pixel point based on the lighting parameter, the ambient light parameter, and the main light source parameter, to obtain the lighting information.
  13. The method according to claim 1, characterized in that rendering the rendering model according to the lighting information to obtain the volume cloud to be displayed comprises:
    performing edge detection according to a depth value of each pixel point before rendering and a depth value of the volume cloud;
    determining, from the edge detection result, an object to be blended that overlaps the volume cloud;
    performing translucent blending of the object to be blended and the volume cloud, and obtaining the volume cloud to be displayed based on the translucent blending result.
  14. The method according to claim 13, characterized in that, after rendering the rendering model according to the lighting information and before obtaining the volume cloud to be displayed, performing the translucent blending of the object to be blended and the volume cloud and obtaining the volume cloud to be displayed based on the translucent blending result comprises:
    determining overlapping pixel points between the object to be blended and the volume cloud;
    sampling a first color buffer value and a first depth buffer value of the overlapping pixel points before rendering, and a second color buffer value and a second depth buffer value of the overlapping pixel points after rendering;
    taking the first color buffer value as a start-position input parameter of an interpolator, the second color buffer value as a target-position input parameter of the interpolator, and the difference between the first depth buffer value and the second depth buffer value as an interpolation-speed input parameter of the interpolator, and taking the linear interpolation result computed by the interpolator as a final pixel color of the overlapping pixel points;
    obtaining the volume cloud to be displayed based on the final pixel color of the overlapping pixel points.
  15. The method according to claim 13, characterized in that, during the rendering of the rendering model according to the lighting information, performing the translucent blending of the object to be blended and the volume cloud and obtaining the volume cloud to be displayed based on the translucent blending result comprises:
    determining overlapping pixel points between the object to be blended and the volume cloud;
    sampling a color buffer value and a depth buffer value of the overlapping pixel points before rendering, as well as a current color value and a current depth value of the overlapping pixel points;
    taking the difference between the depth buffer value and the current depth value as a source blend factor, the color buffer value as a source color, and the current color value as a target color, performing a blending operation, and taking the blended pixel color as a final pixel color of the overlapping pixel points;
    rendering the rendering model based on the final pixel color of the overlapping pixel points, to obtain the volume cloud to be displayed.
  16. The method according to claim 15, characterized in that rendering the rendering model based on the final pixel color of the overlapping pixel points comprises:
    rendering each mesh model of the rendering model layer by layer in order from the outside in.
  17. A volume cloud rendering apparatus, characterized by comprising:
    a drawing module, configured to draw at least one layer of mesh model outward from an original mesh model of a volume cloud in the vertex normal direction;
    a screening module, configured to screen pixel points of the mesh models based on a noise threshold corresponding to each layer of the mesh model, to obtain a rendering model;
    a computing module, configured to compute lighting information corresponding to the rendering model according to lighting parameters;
    a processing module, configured to render the rendering model according to the lighting information, to obtain a volume cloud to be displayed.
  18. A computer apparatus/device/system, comprising a memory, a processor, and a computer program/instructions stored on the memory, wherein the processor, when executing the computer program/instructions, implements the steps of the volume cloud rendering method according to any one of claims 1-16.
  19. A computer-readable medium having a computer program/instructions stored thereon, wherein the computer program/instructions, when executed by a processor, implement the steps of the volume cloud rendering method according to any one of claims 1-16.
  20. A computer program product comprising a computer program/instructions, wherein the computer program/instructions, when executed by a processor, implement the steps of the volume cloud rendering method according to any one of claims 1-16.
PCT/CN2021/121097 2020-12-02 2021-09-27 Volume cloud rendering method and apparatus, program, and readable medium WO2022116659A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011388910.3A CN112200900B (zh) 2020-12-02 2020-12-02 Volume cloud rendering method and apparatus, electronic device, and storage medium
CN202011388910.3 2020-12-02

Publications (1)

Publication Number Publication Date
WO2022116659A1 true WO2022116659A1 (zh) 2022-06-09

Family

ID=74033650

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/121097 WO2022116659A1 (zh) 2020-12-02 2021-09-27 Volume cloud rendering method and apparatus, program, and readable medium

Country Status (2)

Country Link
CN (1) CN112200900B (zh)
WO (1) WO2022116659A1 (zh)


Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200900B (zh) * 2020-12-02 2021-02-26 成都完美时空网络技术有限公司 一种体积云渲染方法、装置、电子设备及存储介质
CN113223131B (zh) * 2021-04-16 2022-05-31 完美世界(北京)软件科技发展有限公司 一种模型的渲染方法、装置、存储介质以及计算设备
CN113144613A (zh) * 2021-05-08 2021-07-23 成都乘天游互娱网络科技有限公司 基于模型的体积云生成的方法
CN113313798B (zh) * 2021-06-23 2022-05-03 完美世界(北京)软件科技发展有限公司 云图的制作方法及装置、存储介质、计算机设备
CN113470161B (zh) * 2021-06-30 2022-06-07 完美世界(北京)软件科技发展有限公司 虚拟环境中容积云的光照确定方法、相关设备及存储介质
CN113256779B (zh) * 2021-07-05 2021-11-19 广州中望龙腾软件股份有限公司 一种基于OpenGL指令的渲染运行方法及系统
CN113658315B (zh) * 2021-08-17 2023-09-29 广州光锥元信息科技有限公司 基于分形噪声的光影特效制作方法和装置
CN113936097B (zh) * 2021-09-30 2023-10-20 完美世界(北京)软件科技发展有限公司 体积云渲染方法、设备及存储介质
CN114332311B (zh) * 2021-12-05 2023-08-04 北京字跳网络技术有限公司 一种图像生成方法、装置、计算机设备及存储介质
TWI816433B (zh) * 2022-06-14 2023-09-21 英業達股份有限公司 渲染方法、三維繪圖軟體及三維繪圖系統


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103080984B (zh) * 2010-06-30 2017-04-12 巴里·林恩·詹金斯 确定从视区看去可见的网格多边形或所述网格多边形的分段的集合的方法及系统
KR20200082601A (ko) * 2018-12-31 2020-07-08 한국전자통신연구원 다층 볼륨 구름 렌더링 장치 및 방법
CN111145326B (zh) * 2019-12-26 2023-12-19 网易(杭州)网络有限公司 三维虚拟云模型的处理方法、存储介质、处理器及电子装置
CN111968216B (zh) * 2020-07-29 2024-03-22 完美世界(北京)软件科技发展有限公司 一种体积云阴影渲染方法、装置、电子设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170161909A1 (en) * 2015-12-03 2017-06-08 Bandai Namco Entertainment Inc. Image generation system, image generation method, and information storage medium
CN107481312A (zh) * 2016-06-08 2017-12-15 腾讯科技(深圳)有限公司 一种基于体绘制的图像渲染及装置
CN106570929A (zh) * 2016-11-07 2017-04-19 北京大学(天津滨海)新代信息技术研究院 一种动态体积云的构建与绘制方法
CN110827391A (zh) * 2019-11-12 2020-02-21 腾讯科技(深圳)有限公司 图像渲染方法、装置、设备及存储介质
CN111968215A (zh) * 2020-07-29 2020-11-20 完美世界(北京)软件科技发展有限公司 一种体积光渲染方法、装置、电子设备及存储介质
CN112200900A (zh) * 2020-12-02 2021-01-08 成都完美时空网络技术有限公司 一种体积云渲染方法、装置、电子设备及存储介质

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294251A (zh) * 2022-06-13 2022-11-04 无人智境(北京)技术有限公司 一种海量集装箱批量渲染方法和设备
CN116630486A (zh) * 2023-07-19 2023-08-22 山东锋士信息技术有限公司 一种基于Unity3D渲染的半自动化动画制作方法
CN116630486B (zh) * 2023-07-19 2023-11-07 山东锋士信息技术有限公司 一种基于Unity3D渲染的半自动化动画制作方法
CN117269940A (zh) * 2023-11-17 2023-12-22 北京易控智驾科技有限公司 点云数据生成方法、激光雷达的感知能力验证方法
CN117269940B (zh) * 2023-11-17 2024-03-15 北京易控智驾科技有限公司 点云数据生成方法、激光雷达的感知能力验证方法
CN117274473A (zh) * 2023-11-21 2023-12-22 北京渲光科技有限公司 一种多重散射实时渲染的方法、装置及电子设备
CN117274473B (zh) * 2023-11-21 2024-02-02 北京渲光科技有限公司 一种多重散射实时渲染的方法、装置及电子设备

Also Published As

Publication number Publication date
CN112200900A (zh) 2021-01-08
CN112200900B (zh) 2021-02-26


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21899693; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21899693; Country of ref document: EP; Kind code of ref document: A1)