CN112465941A - Volume cloud processing method and device, electronic equipment and storage medium

Publication number: CN112465941A (granted as CN112465941B)
Application number: CN202011402256.7A
Authority: CN (China)
Inventor: 申晨
Applicant and current assignee: Chengdu Perfect World Network Technology Co Ltd
Other languages: Chinese (zh)
Legal status: Active (granted)

Classifications

    • G06T 15/00: 3D [Three Dimensional] image rendering; G06T 15/005: General purpose rendering architectures
    • G06T 15/50: Lighting effects
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/807: Gliding or sliding on surfaces, e.g. using skis, skates or boards
    • A63F 2300/663: Rendering three dimensional images for simulating liquid objects, e.g. water, gas, fog, snow, clouds
    • A63F 2300/8017: Driving on land or water; Flying
    • G06T 2210/61: Scene description


Abstract

The application relates to a volume cloud processing method and apparatus, an electronic device and a storage medium. The method includes: obtaining a drawing model of a volume cloud in a scene to be displayed and illumination information corresponding to the drawing model; performing edge detection according to the depth value of each pixel point in the scene to be displayed before rendering and the depth value of the volume cloud; determining, according to the edge detection result, an object to be mixed that coincides with the volume cloud in the scene to be displayed; and performing semi-transparent mixing on the object to be mixed and the volume cloud, and obtaining the volume cloud to be displayed based on the semi-transparent mixing result and the illumination information. With this technical solution, the volume cloud and the objects located inside it are semi-transparently mixed, and because the volume cloud itself has a certain semi-transparent effect, the parts of the objects inside the volume cloud appear to fade in and out after mixing, which further improves the realism of the displayed volume cloud and objects.

Description

Volume cloud processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a volume cloud processing method and apparatus, an electronic device, and a storage medium.
Background
A volume cloud is an important component of outdoor game scenes. In today's three-dimensional games, players are no longer limited to activities on land; flying through the air is a common scenario. To make the natural environment feel more real while the player is flying, scenes of passing through a volume cloud need to be simulated.
The quality of the volume cloud simulation directly affects the fidelity of a three-dimensional game and the user's experience, so how to achieve a more realistic volume cloud effect is a technical problem to be solved in the prior art.
Disclosure of Invention
In order to solve the technical problem or at least partially solve the technical problem, embodiments of the present application provide a volume cloud processing method, an apparatus, an electronic device, and a storage medium.
According to an aspect of an embodiment of the present application, there is provided a volume cloud processing method, including:
obtaining a drawing model of volume cloud in a scene to be displayed and illumination information corresponding to the drawing model;
performing edge detection according to the depth value of each pixel point before rendering in the scene to be displayed and the depth value of the volume cloud;
determining an object to be mixed which is coincident with the volume cloud in the scene to be displayed according to an edge detection result;
and performing semi-transparent mixing on the object to be mixed and the volume cloud, and obtaining the volume cloud to be displayed based on a semi-transparent mixing result and the illumination information.
Optionally, performing semi-transparent mixing on the object to be mixed and the volume cloud, and obtaining the volume cloud to be displayed based on a semi-transparent mixing result and the illumination information, including:
rendering the drawing model according to the illumination information;
determining coincident pixel points of the object to be mixed and the volume cloud;
sampling to obtain a first color buffer value and a first depth buffer value before the rendering of the coincident pixel point, and a second color buffer value and a second depth buffer value after the rendering of the coincident pixel point;
taking the first color buffer value as an initial position input parameter of an interpolation calculator, taking the second color buffer value as a target position input parameter of the interpolation calculator, taking a difference value between the first depth buffer value and the second depth buffer value as an interpolation speed input parameter of the interpolation calculator, and obtaining a linear interpolation result calculated by the interpolation calculator as a final pixel color of the overlapped pixel point;
and obtaining the volume cloud to be displayed based on the final pixel color of the coincident pixel point.
Optionally, performing semi-transparent mixing on the object to be mixed and the volume cloud, and obtaining the volume cloud to be displayed based on a semi-transparent mixing result and the illumination information, including:
determining coincident pixel points of the object to be mixed and the volume cloud;
sampling to obtain a color buffer value and a depth buffer value of the coincident pixel point before rendering, and sampling a current color value and a current depth value of the coincident pixel point in the process of rendering the rendering model based on the illumination information;
taking the difference value between the depth buffer value and the current depth value as a source mixing factor, taking the color buffer value as a source color, taking the current color value as a target color, performing mixing operation, and taking the mixed pixel color as the final pixel color of the coincident pixel point;
rendering the drawing model based on the final pixel color of the coincident pixel point to obtain the volume cloud to be displayed.
Optionally, the drawing model is obtained by drawing at least one layer of mesh model outwards from an original mesh model of the volume cloud according to a vertex normal vector, and the drawing model is rendered based on a final pixel color of the coincident pixel point to obtain the volume cloud to be displayed, including:
and rendering each grid model of the drawing model layer by layer according to the sequence from outside to inside.
Optionally, the obtaining of the drawing model of the volume cloud in the scene to be displayed includes:
drawing at least one layer of mesh model outwards from the original mesh model of the volume cloud according to the vertex normal vector;
and screening the pixel points of the grid model based on the noise threshold value corresponding to each layer of the grid model to obtain the drawing model.
Optionally, the screening, based on the noise threshold corresponding to each layer of the mesh model, pixel points of the mesh model to obtain a drawing model includes:
acquiring a noise threshold corresponding to each layer of the grid model;
sampling a preset noise map based on each layer of the grid model to obtain a noise value;
and screening the pixel points of which the noise threshold is smaller than or equal to the noise value for each layer of the grid model to obtain the drawing model.
Optionally, the obtaining a noise threshold corresponding to each layer of the mesh model includes:
acquiring a noise function corresponding to each layer of the grid model, wherein the noise function is a linear function taking the coordinates of the pixel points as variables;
obtaining a noise boundary value corresponding to each layer of the grid model pixel points according to the noise function;
and performing power operation on the noise boundary value to obtain the noise threshold value.
According to another aspect of the embodiments of the present application, there is provided a volume cloud processing apparatus, including:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a drawing model of volume cloud in a scene to be displayed and illumination information corresponding to the drawing model;
the edge detection module is used for carrying out edge detection according to the depth value of each pixel point before rendering in the scene to be displayed and the depth value of the volume cloud;
the object determining module is used for determining an object to be mixed which is superposed with the volume cloud in the scene to be displayed according to an edge detection result;
and the semi-transparent mixing module is used for carrying out semi-transparent mixing on the object to be mixed and the volume cloud, and obtaining the volume cloud to be displayed based on a semi-transparent mixing result and the illumination information.
According to another aspect of an embodiment of the present application, there is provided an electronic device including: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the above method steps when executing the computer program.
According to another aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the above-mentioned method steps.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
the volume cloud and the object located inside the cloud are semi-transparently mixed; because the volume cloud itself has a certain semi-transparent effect, the part of the object located inside the volume cloud appears to fade in and out after mixing, which further improves the realism of the displayed volume cloud and object.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments or technical solutions in the prior art of the present invention, the drawings used in the description of the embodiments or prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
Fig. 1 is a flowchart of a volume cloud processing method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a volume cloud processing method according to another embodiment of the present application;
fig. 3 is a flowchart of a volume cloud processing method according to another embodiment of the present application;
fig. 4 is a flowchart of a volume cloud processing method according to another embodiment of the present application;
FIG. 5 is a schematic diagram of rendering a mesh model according to an embodiment of the present application;
fig. 6 is a flowchart of a volume cloud processing method according to another embodiment of the present application;
FIG. 7 is a schematic diagram of a volumetric cloud model provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a volumetric cloud model provided in another embodiment of the present application;
fig. 9 is a flowchart of a volume cloud processing method according to another embodiment of the present application;
fig. 10 is a flowchart of a volumetric cloud processing method according to another embodiment of the present application;
fig. 11 is a flowchart of a volume cloud processing method according to another embodiment of the present application;
fig. 12 is a flowchart of a volumetric cloud processing method according to another embodiment of the present application;
fig. 13 is a flowchart of a volumetric cloud processing method according to another embodiment of the present application;
fig. 14 is a flowchart of a volumetric cloud processing method according to another embodiment of the present application;
fig. 15 is a flowchart of a volumetric cloud processing method according to another embodiment of the present application;
fig. 16 is a flowchart of a volumetric cloud processing method according to another embodiment of the present application;
fig. 17 is a flowchart of a volumetric cloud processing method according to another embodiment of the present application;
fig. 18 is a block diagram of a volume cloud processing apparatus according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
A volume cloud, commonly referred to as a volumetric cloud, uses a graphics engine to simulate the translucent, irregular appearance of a real cloud.
Since a scene to be displayed that contains a volume cloud usually also contains objects moving through the cloud, such as characters, aircraft, airships, birds and dragons, part of such an object may be located inside the volume cloud. It is therefore necessary to determine the part of the object located inside the volume cloud and perform semi-transparent mixing on it, in order to improve the realism of the scene to be displayed.
First, a volume cloud processing method provided by an embodiment of the present invention is described below.
Fig. 1 is a flowchart of a volume cloud processing method according to an embodiment of the present disclosure. As shown in fig. 1, the method includes the following steps S11 to S14:
step S11, obtaining a drawing model of the volume cloud in the scene to be displayed and illumination information corresponding to the drawing model;
step S12, performing edge detection according to the depth value of each pixel point before rendering in the scene to be displayed and the depth value of the volume cloud;
step S13, determining an object to be mixed which is superposed with the volume cloud in the scene to be displayed according to the edge detection result;
and step S14, performing semi-transparent mixing on the object to be mixed and the volume cloud, and obtaining the volume cloud to be displayed based on the semi-transparent mixing result and the illumination information.
Through steps S11 to S14, the volume cloud and the object located in the cloud are semi-transparently mixed. Because the volume cloud has a certain semi-transparent effect, the part of the object located inside the volume cloud appears to fade in and out after mixing, which further improves the realism of the displayed volume cloud and object.
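For illustration of steps S12 and S13, a minimal HLSL-style sketch of a depth-based overlap test is given below. The texture and parameter names, the use of linear depth, and the idea of bounding the cloud by a per-pixel front depth and back depth are assumptions made for this sketch and are not taken from the disclosure.
Texture2D _SceneDepthTex;              // scene depth captured before the volume cloud is rendered (assumed name)
SamplerState sampler_SceneDepthTex;
// Returns 1 when the pre-render scene depth at uv lies between the cloud's front and back
// depth for this pixel, i.e. the pixel belongs to an object coinciding with the volume cloud.
float DetectOverlap(float2 uv, float cloudFrontDepth, float cloudBackDepth)
{
    float sceneDepth = _SceneDepthTex.Sample(sampler_SceneDepthTex, uv).r;  // linear depth assumed
    return step(cloudFrontDepth, sceneDepth) * step(sceneDepth, cloudBackDepth);
}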
Specifically, the semitransparent mixing of the volume cloud and the object may be implemented in a post-effect stage after the volume cloud rendering is completed, or in a rendering stage of the volume cloud. The manner in which these two stages achieve semi-transparent mixing is described in detail below.
(one) Translucent mixing in the post-effect stage
Fig. 2 is a flowchart of a volume cloud processing method according to another embodiment of the present disclosure. As shown in fig. 2, step S14 includes:
step S21, rendering the drawing model according to the illumination information;
step S22, determining coincident pixel points of the object to be mixed and the volume cloud;
step S23, sampling to obtain a first color buffer value and a first depth buffer value before rendering of the coincident pixel point, and a second color buffer value and a second depth buffer value after rendering of the coincident pixel point;
step S24, using the first color buffer value as the initial position input parameter of the interpolation calculator, the second color buffer value as the target position input parameter of the interpolation calculator, the difference value between the first depth buffer value and the second depth buffer value as the interpolation speed input parameter of the interpolation calculator, and obtaining the linear interpolation result calculated by the interpolation calculator as the final pixel color of the overlapped pixel point;
and step S25, obtaining the volume cloud to be displayed based on the final pixel color of the coincident pixel point.
In a post-effect stage, color buffer maps and depth buffer maps before volume cloud rendering and after volume cloud rendering can be obtained from a rendering pipeline, a first depth buffer value ZBuffer1 and a second depth buffer value ZBuffer2 of a coincident pixel point are sampled from 2 depth buffer maps, and a first color buffer value ColorBuffer1 and a second color buffer value ColorBuffer2 of the coincident pixel point are obtained by sampling from 2 color buffer maps.
The final pixel color FinalColor of the coincident pixel point is obtained by calculation as follows:
FinalColor = lerp(ColorBuffer1, ColorBuffer2, ZBuffer1 - ZBuffer2).
In the semi-transparent mixing process, two passes of the rendering pipeline need to be invoked for the color copy and the depth copy: a Copy Color Pass and a Copy Depth Pass, which obtain the color buffer values and the depth buffer values through the color copy and the depth copy, respectively.
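A minimal HLSL-style sketch of this post-effect pass is shown below, assuming the two copied color buffers and two copied depth buffers are bound as the textures named here; the names, the full-screen-pass structure, and the clamping of the interpolation factor to [0, 1] are assumptions for illustration.
Texture2D _ColorBuffer1;  SamplerState sampler_ColorBuffer1;  // color before volume cloud rendering (Copy Color Pass)
Texture2D _ColorBuffer2;  SamplerState sampler_ColorBuffer2;  // color after volume cloud rendering
Texture2D _DepthBuffer1;  SamplerState sampler_DepthBuffer1;  // depth before volume cloud rendering (Copy Depth Pass)
Texture2D _DepthBuffer2;  SamplerState sampler_DepthBuffer2;  // depth after volume cloud rendering
float4 PostEffectBlendFrag(float2 uv : TEXCOORD0) : SV_Target
{
    float4 color1 = _ColorBuffer1.Sample(sampler_ColorBuffer1, uv);
    float4 color2 = _ColorBuffer2.Sample(sampler_ColorBuffer2, uv);
    float  z1 = _DepthBuffer1.Sample(sampler_DepthBuffer1, uv).r;
    float  z2 = _DepthBuffer2.Sample(sampler_DepthBuffer2, uv).r;
    // FinalColor = lerp(ColorBuffer1, ColorBuffer2, ZBuffer1 - ZBuffer2)
    return lerp(color1, color2, saturate(z1 - z2));
}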
(two) Translucent mixing in the rendering stage
Fig. 3 is a flowchart of a volume cloud processing method according to another embodiment of the present application. As shown in fig. 3, step S14 includes:
step S31, determining coincident pixel points of the object to be mixed and the volume cloud;
step S32, obtaining a color buffer value and a depth buffer value of the coincident pixel point before rendering by sampling, and sampling the current color value and the current depth value of the coincident pixel point in the process of rendering the rendering model based on the illumination information;
step S33, taking the difference value between the depth buffer value and the current depth value as a source mixing factor, taking the color buffer value as a source color, taking the current color value as a target color, performing mixing operation, and taking the mixed pixel color as the final pixel color of the coincident pixel point;
FinalColor = ColorBuffer × (Z - ZBuffer) + Color × (1 - Z + ZBuffer);
where FinalColor represents the final pixel color, ColorBuffer represents the color buffer value, Z represents the current depth value, ZBuffer represents the depth buffer value, and Color represents the current color value.
And step S34, rendering the drawing model based on the final pixel color of the coincident pixel point to obtain the volume cloud to be displayed.
In the rendering stage, the Alpha Blend mode may be used for the semi-transparent mixing. The specific calculation is not limited to the above formula; other Alpha Blend formulas may also be used, which are not described here again.
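A minimal HLSL-style sketch of the fragment computation corresponding to the formula above follows; the texture names and the clamping of the source mixing factor are assumptions for illustration.
Texture2D _PreColorBuffer;  SamplerState sampler_PreColorBuffer;  // ColorBuffer: scene color before cloud rendering (assumed name)
Texture2D _PreDepthBuffer;  SamplerState sampler_PreDepthBuffer;  // ZBuffer: scene depth before cloud rendering (assumed name)
// cloudColor and cloudDepth are the current color value and current depth value of the fragment being rendered.
float4 RenderStageBlend(float2 screenUV, float4 cloudColor, float cloudDepth)
{
    float4 colorBuffer = _PreColorBuffer.Sample(sampler_PreColorBuffer, screenUV);
    float  zBuffer = _PreDepthBuffer.Sample(sampler_PreDepthBuffer, screenUV).r;
    // FinalColor = ColorBuffer * (Z - ZBuffer) + Color * (1 - Z + ZBuffer)
    float srcFactor = saturate(cloudDepth - zBuffer);   // source mixing factor
    return colorBuffer * srcFactor + cloudColor * (1.0 - srcFactor);
}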
Optionally, the drawing model is obtained by drawing at least one layer of mesh model outwards from the original mesh model of the volume cloud according to the vertex normal vector. When rendering the drawing model, the mesh models could be rendered layer by layer from the innermost mesh model to the outermost one. However, if semi-transparent mixing is performed in the rendering stage and the mesh models are rendered layer by layer from inside to outside, overdraw (Over Draw) occurs: when the mesh model of the current layer is rendered, the Alpha Blend with the inner-layer mesh models is repeated, which generates a large amount of extra overhead and degrades the display effect. Therefore, the rendering order of the volume cloud needs to be reversed, i.e., the mesh models are rendered layer by layer from the outside in. In step S34, rendering the drawing model based on the final pixel color of the coincident pixel point includes: rendering each mesh model of the drawing model layer by layer in order from outside to inside. This effectively avoids Over Draw, reduces the extra overhead, and improves the final display effect.
At present, with the development of mobile games and given the performance limitations of mobile devices such as mobile phones, a mobile game is required to keep its performance overhead as low as possible, especially in the rendering stage, while still ensuring realistic effects. Therefore, an embodiment of the present application also provides an implementation for reducing the performance overhead of volume cloud rendering.
Fig. 4 is a flowchart of a volume cloud processing method according to another embodiment of the present application. As shown in fig. 4, the method includes the following steps S41 to S42:
and step S41, drawing at least one layer of mesh model outwards from the original mesh model of the volume cloud according to the vertex normal vector.
As shown in fig. 5, the original Mesh model 21 of the volume cloud is additionally drawn outwards at equal intervals for N times according to the vertex normal vector, where N is an integer greater than or equal to 1, so as to obtain the multi-layer Mesh model 22.
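A minimal HLSL-style vertex-shader sketch of this additional drawing is given below, assuming each shell layer is drawn with its own per-layer offset uniform; the uniform and matrix names are assumptions for illustration.
cbuffer CloudShellParams
{
    float4x4 _ObjectToClip;   // combined model-view-projection matrix (assumed name)
    float    _LayerOffset;    // distance of this shell layer from the original mesh (assumed name)
};
float4 ShellVert(float3 positionOS : POSITION, float3 normalOS : NORMAL) : SV_Position
{
    // Push the original vertex outwards along its vertex normal by the layer offset.
    float3 shellPos = positionOS + normalize(normalOS) * _LayerOffset;
    return mul(_ObjectToClip, float4(shellPos, 1.0));
}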
And step S42, screening the pixel points of the grid model based on the noise threshold corresponding to each layer of grid model to obtain a drawing model.
A preset noise map is sampled based on each layer of grid model, and the sampled value of each pixel point is compared with a preset noise threshold (Clip Value) to screen out the pixel points that meet the requirement, thereby obtaining the drawing model.
Through steps S41 and S42, N layers of mesh models are additionally drawn on the basis of the original mesh model, the values sampled from the preset noise map for each mesh model are compared with the noise threshold set for that layer, and the pixel points of each layer of mesh model are screened, finally yielding the drawing model corresponding to the volume cloud. In this way, the shape of the volume cloud is determined by the mesh model rather than by the shape of a noise image; if the shape of the volume cloud needs to be changed, only the number of additionally drawn layers and the noise thresholds used for screening pixel points need to be adjusted, and no specific noise image needs to be selected in advance. In addition, drawing the model multiple times reduces the number of noise-map samples, which further reduces the performance overhead of generating the volume cloud, so that it can run smoothly on mobile devices such as mobile phones. Moreover, because the volume cloud is obtained by rendering a model rather than by simulating parallax to create an illusion of depth, the artifact of the volume cloud edge clipping through other geometry is avoided, and the realism of the volume cloud effect is improved.
Fig. 6 is a flowchart of a volume cloud processing method according to another embodiment of the present application. As shown in fig. 6, the above step S42 includes the following steps S51 to S53:
step S51, acquiring a noise threshold corresponding to each layer of grid model;
step S52, sampling a preset noise image based on each layer of grid model to obtain a noise value;
and step S53, screening pixel points with noise threshold values smaller than or equal to the noise value for each layer of grid model to obtain a drawing model.
As shown in fig. 5, curve 23 represents the noise value obtained by sampling the preset noise map based on the mesh model. Each layer of mesh model 22 is assigned its corresponding Clip Value; pixel points whose Clip Value is greater than the noise value are discarded (the dotted portions in fig. 5), and only pixel points whose Clip Value is less than or equal to the noise value are retained, yielding the drawing model (the solid portions in fig. 5).
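A minimal HLSL-style fragment sketch of this screening is shown below, assuming a single-channel noise map and a per-layer _ClipValue uniform (both names are assumptions); pixel points whose Clip Value exceeds the sampled noise value are discarded with clip().
Texture2D _NoiseMap;  SamplerState sampler_NoiseMap;   // preset noise map (assumed name)
cbuffer CloudClipParams
{
    float _ClipValue;   // noise threshold of the current shell layer (assumed name)
};
float4 ShellFrag(float2 uv : TEXCOORD0) : SV_Target
{
    float noiseValue = _NoiseMap.Sample(sampler_NoiseMap, uv).r;
    // Keep the pixel only when the Clip Value is less than or equal to the noise value.
    clip(noiseValue - _ClipValue);
    return float4(1.0, 1.0, 1.0, 1.0);   // placeholder; lighting is applied elsewhere
}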
In the above embodiment, the Clip Value may be calculated from a preset linear noise function, for example a linear function y = kx + b (where k and b are constants and k ≠ 0), in which y represents the Clip Value and x represents the pixel coordinate. However, if the noise function is linear, the edge of the final volume cloud model is sharp, as shown in fig. 7, and the volume cloud looks less realistic.
To improve the realism of the display effect, the Clip Value may be non-linearized. Optionally, the step S51 includes the following steps a1 to A3:
step A1, acquiring a noise function corresponding to each layer of grid model, wherein the noise function is a linear function taking the coordinates of pixel points as variables;
a2, obtaining a noise boundary value corresponding to each layer of grid model pixel points according to a noise function;
and step A3, performing power operation on the noise boundary value to obtain a noise threshold value.
Through steps A1 to A3, a power operation is performed on the Clip Value so that the Clip Value becomes non-linear; as a result, as shown in fig. 8, the edges of the screened volume cloud model become smooth, which improves the realism of the volume cloud effect.
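A small sketch of steps A1 to A3 follows, assuming the linear noise function y = kx + b is evaluated per pixel coordinate and the exponent of the power operation is passed in as a parameter; the function and parameter names are assumptions for illustration.
// Evaluate the linear noise function, then apply a power operation to obtain a non-linear threshold.
float ClipThreshold(float pixelCoord, float k, float b, float clipPower)
{
    float boundary = saturate(k * pixelCoord + b);   // noise boundary value y = kx + b, clamped to [0, 1] (assumption)
    return pow(boundary, clipPower);                 // power operation yields the non-linear noise threshold
}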
In the above embodiment, the drawing model is obtained by additionally drawing the original mesh model N times and filtering based on the noise value, so the vertices of the drawing model need to be generated from the vertices of the original mesh model. The vertices can be generated in the following two ways:
(1) Creating vertices with a geometry shader.
Fig. 9 is a flowchart of a volume cloud processing method according to another embodiment of the present application. As shown in fig. 9, after the step S42, the method further includes the following steps:
step S61, inputting the vertex coordinates of the original mesh model as a first input parameter into a first shader in the graphics processor;
in step S62, vertex coordinates of the rendering model are obtained by the first shader with the first input parameter.
Wherein, the first shader is a geometry shader.
Through steps S61 and S62, vertices are added by the geometry shader on the basis of the original mesh model. Because vertex creation in the geometry shader is performed on the Graphics Processing Unit (GPU), it does not consume CPU performance.
However, the vertex output of a geometry shader is limited in size, for example to no more than 1024 floating-point numbers (floats), i.e., there is a limit on the number of output vertices. In addition, most mobile devices do not support geometry shaders, so the volume cloud could not be rendered on the mobile end in this way.
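A minimal HLSL geometry-shader sketch of approach (1) is given below, emitting one offset copy of each input triangle per shell layer; the layer count, the offset step and all names are assumptions for illustration, and the total output is bounded by the geometry-shader output limit mentioned above.
struct GSInput  { float3 positionOS : POSITION; float3 normalOS : NORMAL; };
struct GSOutput { float4 positionCS : SV_Position; };
cbuffer GSShellParams
{
    float4x4 _ObjectToClip;   // assumed model-view-projection matrix
    float    _LayerStep;      // distance between adjacent shell layers (assumed)
};
[maxvertexcount(12)]          // 3 vertices x 4 layers; must respect the output size limit
void ShellGS(triangle GSInput input[3], inout TriangleStream<GSOutput> stream)
{
    for (int layer = 0; layer < 4; layer++)
    {
        for (int i = 0; i < 3; i++)
        {
            GSOutput o;
            float3 p = input[i].positionOS + normalize(input[i].normalOS) * (_LayerStep * layer);
            o.positionCS = mul(_ObjectToClip, float4(p, 1.0));
            stream.Append(o);
        }
        stream.RestartStrip();
    }
}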
(2) Rendering by GPU-Instance technology
Fig. 10 is a flowchart of a volume cloud processing method according to another embodiment of the present application. As shown in fig. 10, in step S14, rendering the rendering model according to the lighting information includes the following steps:
step S71, caching the vertex data of the original grid model into a video memory;
step S72, sorting and batching drawing commands corresponding to each layer of grid model, and adding the obtained batching commands to a command buffer area;
in step S73, the graphics processor reads the batch command from the command buffer, and performs the rendering operation based on the batch command and the vertex data of the original mesh model.
The overhead generated in the graphics rendering process includes overhead executed on a CPU and overhead executed on a GPU. The overhead executed on the CPU mainly includes the following three types: the first type, driving the overhead of submitting rendering commands; the second type, which drives the overhead of status command switching caused by submitting status commands; and a third class, other driver overheads that result in loading or synchronizing data because the API is called.
The first type of overhead can be significantly reduced by batch merging (i.e., merging the draw data of multiple renderable objects that share the same rendering state into one batch in a reasonable way) and by instanced rendering (i.e., drawing many renderable objects with nearly identical geometry through a single instanced draw call, passing their differences into the rendering command through arrays). By effectively sorting the renderable objects so that objects in the same state are rendered consecutively as far as possible, state switching is reduced, which significantly reduces the second type of overhead. Therefore, preprocessing the data in these two ways before rendering can effectively reduce the CPU performance overhead of the graphics rendering process.
In the above steps S71 to S73, since the mesh models of the volume cloud layers are identical, the draw commands (DrawCall) that would otherwise be issued multiple times are merged, and the identical multi-layer mesh models are rendered in one DrawCall batch. CPU performance overhead is thus reduced by reducing the number of DrawCalls. In addition, the whole volume cloud rendering process is relatively time-consuming, so the extra time spent on the CPU for the added sorting and batching operations is negligible and has no obvious performance impact on the overall process.
Optionally, in the process of rendering by using the GPU-Instance technology, the CPU may transmit the material attribute information to the GPU in the following manner.
Fig. 11 is a flowchart of a volume cloud processing method according to another embodiment of the present application. As shown in fig. 11, in step S14, rendering the rendering model according to the lighting information further includes the following steps:
step S81, generating a material attribute block according to the noise threshold corresponding to each layer of grid model and the offset of each layer of grid model relative to the original grid model;
step S82, inputting the material property block as a second input parameter into a second shader of the graphics processor;
the step S73 includes:
in step S83, a second shader with second input parameters performs volume cloud rendering according to the batching command and the vertex data of the original grid model.
In this embodiment, since the material of every layer of mesh model is the same and the only differences are the offset and Clip Value relative to the original mesh model, the offset and Clip Value of each layer can be packaged into a material property block and passed to the shader on the GPU when the material attribute information is transferred. Using a material property block reduces the time spent manipulating materials and speeds up material operations; in combination with GPU instancing, it further improves performance, saves the cost of entity objects, reduces DrawCalls, and lowers both CPU and memory overhead.
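On the shader side, a minimal HLSL-style sketch of consuming the per-layer offset and Clip Value per instance is shown below, using SV_InstanceID to index per-layer data uploaded by the CPU; the buffer layout and names are assumptions, and in a concrete engine the same role can be played by instanced material properties from the material property block.
struct LayerData
{
    float offset;      // offset of this shell layer relative to the original mesh
    float clipValue;   // noise threshold (Clip Value) of this shell layer
};
StructuredBuffer<LayerData> _LayerData;   // one entry per shell layer, filled by the CPU (assumed)
cbuffer CloudInstanceParams { float4x4 _ObjectToClip; };
struct VSOut { float4 positionCS : SV_Position; float clipValue : TEXCOORD1; };
VSOut InstancedShellVert(float3 positionOS : POSITION, float3 normalOS : NORMAL,
                         uint instanceID : SV_InstanceID)
{
    LayerData layer = _LayerData[instanceID];
    VSOut o;
    float3 shellPos = positionOS + normalize(normalOS) * layer.offset;
    o.positionCS = mul(_ObjectToClip, float4(shellPos, 1.0));
    o.clipValue  = layer.clipValue;   // forwarded to the fragment stage for the noise clip test
    return o;
}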
In this embodiment, since the volume cloud is greatly affected by sunlight, the sunlight may be used as a main light source, and illumination information corresponding to the volume cloud is calculated based on a plurality of illumination parameters. In step S11, the illumination information corresponding to the rendering model is calculated based on each illumination parameter.
First, illumination information may be calculated using a Lambert model.
Fig. 12 is a flowchart of a volume cloud processing method according to another embodiment of the present application. As shown in fig. 12, step S11 is to obtain illumination information corresponding to the rendering model, and includes:
and step S91, calculating first diffuse reflection information corresponding to each pixel point according to the normal vector and the illumination direction vector of each pixel point of the drawing model.
The first diffuse reflection information may be the color intensity coefficient nl (NdotL) corresponding to the pixel point:
float nl = max(0.0, dot(N, L)); or nl = saturate(dot(N, L));
where nl denotes the first diffuse reflection information, N denotes the normal vector, L denotes the illumination direction vector, dot() denotes the dot product, and NdotL denotes the dot product of N and L. When computing the dot product of unit vectors, the saturate function gives the same result as the max function but is more efficient: saturate(x) returns 0 if x is less than 0, returns 1 if x is greater than 1, and returns x directly if x is between 0 and 1.
Step S92, the first diffuse reflection information is used as an illumination parameter;
and step S93, calculating the pixel color corresponding to each pixel point based on the illumination parameters to obtain illumination information.
The illumination information calculated with the Lambert model gives an unsatisfactory lighting effect on the backlit side of the volume cloud, so illumination information calculated with the Half Lambert model can be used instead.
Fig. 13 is a flowchart of a volume cloud processing method according to another embodiment of the present application. As shown in fig. 13, step S11 further includes:
step S101, performing a half-Lambert calculation on the first diffuse reflection information to obtain a half-Lambert illumination parameter;
float HalfLambertnl=dot(N,L)*0.5+0.5;
where HalfLambertnl denotes the half-Lambert illumination parameter corresponding to nl.
Step S102, acquiring a noise threshold corresponding to each layer of grid model;
step S103, fitting according to the noise threshold and the half-Lambert illumination parameter to obtain second diffuse reflection information of each pixel point;
float smoothnl = saturate(pow(HalfLambertnl, 2 - ClipValue));
wherein smoothnl represents second diffuse reflection information, and is a smoothing NdotL parameter subjected to power operation. ClipValue represents the noise threshold of the mesh model, and pow () represents the power operation.
And step S104, taking the second diffuse reflection information as the illumination parameter.
Through step S101, the half-Lambert illumination parameter is calculated to improve the diffuse reflection on the object surface; in particular, the lighting effect on the backlit side of the volume cloud is improved, which increases the realism of the volume cloud's visual effect. In addition, through step S103, the noise threshold of each layer of mesh model is fitted into the diffuse reflection information, so that the brightness of the protruding parts of the volume cloud is increased, further improving the realism of the volume cloud's visual effect.
In addition, the sub-surface Scattering condition of the volume cloud has a large influence on the visual appearance of the volume cloud, so that a sub-surface Scattering (SSS) parameter is added when the volume cloud illumination information is calculated.
Fig. 14 is a flowchart of a volume cloud processing method according to another embodiment of the present application. As shown in fig. 14, step S11 further includes:
step S111, calculating backward sub-surface scattering information of each pixel point according to the backlight sub-surface scattering parameters and the observer sight direction vectors;
float3 backLitDirection=-(lightDirection+(1-backSSSRange)*N);
float backsss=saturate(dot(viewDirection,backLitDirection));
backsss=saturate(pow(backsss,2+ClipValue*2)*1.5);
where backsss represents the intensity of the backlight SSS illumination, backLitDirection represents the backlight direction vector of the SSS illumination, lightDirection represents the light direction vector, backSSSRange represents the scattering range of the backlight SSS, viewDirection represents the observer's view direction vector, and ClipValue represents the noise threshold of the mesh model.
Step S112, calculating forward sub-surface scattering information of each pixel point according to the light-oriented sub-surface scattering parameters and the observer sight direction vectors;
float frontsss=saturate(dot(viewDirection,frontLitDirection));
where frontsss represents the intensity of the forward (light-facing) SSS illumination, and frontLitDirection represents the light-facing direction vector of the SSS illumination.
And step S113, acquiring an influence factor corresponding to the forward sub-surface scattering information.
Step S114, obtaining total sub-surface scattering information according to the product of the forward sub-surface scattering information and the influence factor and the backward sub-surface scattering information;
float sss=saturate(backsss+FrontSSSIntensity*frontsss);
where sss denotes the total sub-surface scattering information and FrontSSSIntensity denotes the intensity (influence factor) of the forward SSS illumination.
And step S115, taking the total sub-surface scattering information as an illumination parameter.
Adding the backlight SSS information in step S111 increases the translucency of the volume cloud when it is backlit, and adding the forward (light-facing) SSS information adds the effect of photons entering the cloud from the lit side, scattering inside the cloud, and then exiting from the front.
Optionally, since the forward SSS information has little influence on the appearance of the volume cloud, the influence factor FrontSSSIntensity may be set to 0, i.e., the forward SSS information is not considered when the volume cloud lighting information is calculated.
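For reference, the sub-surface scattering terms above can be gathered into one helper function. Note that the disclosure does not define frontLitDirection; the symmetric definition used below is a hypothetical assumption for illustration only, and frontSSSIntensity can be set to 0 as described above.
float SubsurfaceScattering(float3 N, float3 lightDirection, float3 viewDirection,
                           float backSSSRange, float frontSSSRange,
                           float clipValue, float frontSSSIntensity)
{
    // Backlight SSS term, as given above.
    float3 backLitDirection = -(lightDirection + (1.0 - backSSSRange) * N);
    float backsss = saturate(dot(viewDirection, backLitDirection));
    backsss = saturate(pow(backsss, 2.0 + clipValue * 2.0) * 1.5);
    // Forward (light-facing) SSS term; frontLitDirection is a hypothetical definition, not given in the disclosure.
    float3 frontLitDirection = lightDirection + (1.0 - frontSSSRange) * N;
    float frontsss = saturate(dot(viewDirection, frontLitDirection));
    // Total SSS; set frontSSSIntensity to 0 to ignore the forward term.
    return saturate(backsss + frontSSSIntensity * frontsss);
}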
In order to make the effect of the volume cloud more realistic, the volume cloud is required to receive the shadow.
Fig. 15 is a flowchart of a volume cloud processing method according to another embodiment of the present application. As shown in fig. 15, step S11 further includes:
step S121, sampling shadow textures according to the defined light source shadow to obtain shadow parameters;
step S122, attenuation calculation is carried out on the shadow parameters along with the increase of the distance from the camera to obtain shadow information corresponding to each pixel point of the drawing model;
step S123, the shadow information is used as the illumination parameter.
Specifically, letting the volume cloud receive the shadow can be realized by:
float shadowAttenuation;
#if defined(_MAIN_LIGHT_SHADOWS)
shadowAttenuation = MainLightRealtimeShadow(i.shadowCoord);
#else
shadowAttenuation = 1;
#endif
float shadow = saturate(lerp(shadowAttenuation, 1, (distance(PositionWS.xyz, _WorldSpaceCameraPos.xyz) - 100) * 0.1));
Here shadowAttenuation represents the value obtained by sampling the real-time shadow texture at the shadow coordinate of the main light source, and this value is used as the shadow information. PositionWS represents the position of the pixel point (fragment) in world space, _WorldSpaceCameraPos represents the camera position in world space, and distance() is the shader function that computes the distance between two points; the distance between the pixel point and the camera is calculated with distance().
When calculating the shadow, shadowAttenuation is used as the initial position input parameter of the interpolation calculator Lerp, 1 is used as the target position input parameter, and the distance between the pixel point and the camera (scaled as above) is used as the interpolation speed input parameter; the interpolation result is clamped to [0, 1] to obtain the final shadow parameter.
Through the above steps S121 to S123, the volume cloud is made to receive shadows, and the shadows attenuate as the distance from the camera increases, which further improves the realism of the volume cloud effect.
Fig. 16 is a flowchart of a volume cloud processing method according to another embodiment of the present application. As shown in fig. 16, step S11 further includes:
step S131, calculating first specular reflection information of each pixel point according to the surface normal vector of the drawing model and the observer sight direction vector;
float nv = saturate(dot(N, viewDirection.xyz));
where nv represents the first specular reflection information, which is the dot product of the normal vector N and the observer's view direction (V), i.e., NdotV; .xyz denotes the xyz components of the observer's view direction vector.
Step S132, fitting according to the noise threshold and the first specular reflection information to obtain second specular reflection information of each pixel point;
float smoothnv = saturate(pow(nv, 2 - ClipValue));
wherein smoothnv represents second specular reflection information, and is a smoothing nv parameter after being subjected to power operation.
step S133, using the first specular reflection information and the second specular reflection information as the illumination parameters.
Optionally, the total illumination parameter finalLit may be calculated using all the above information as illumination parameters:
float finalLit = saturate(smoothnv * 0.5 + lerp(1, shadow, nl) * saturate(smoothnl + sss) * (1 - nv * 0.5));
Fig. 17 is a flowchart of a volume cloud processing method according to another embodiment of the present application. As shown in fig. 17, step S11 further includes:
step S141, obtaining an ambient light parameter and a main light source parameter;
The ambient light parameter may include an ambient light color sampled via spherical harmonic illumination, and the main light source parameter may include the main light source color.
And step S142, calculating the pixel color corresponding to each pixel point based on the illumination parameter, the ambient light parameter and the main light source parameter to obtain illumination information.
float3 SH = SampleSH(i, N) * _AmbientContrast;
float4 finalColor = float4(lerp(DarkColor.rgb + SH, Color.rgb, finalLit), 1) * MainLightColor * 0.8;
where SH denotes the ambient light color obtained by spherical harmonic sampling, _AmbientContrast denotes the influence factor (contrast) of the ambient light color, DarkColor denotes the base color of the volume cloud in dark (unlit) regions, Color denotes the base color in lit regions, and MainLightColor denotes the color of the main light source.
In this embodiment, various illumination parameters are available in the volume cloud lighting calculation, so the lighting effect of the volume cloud can be adjusted at any time, which improves the realism of the displayed volume cloud.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application.
Fig. 18 is a block diagram of a volume cloud processing apparatus provided in an embodiment of the present application, and the apparatus may be implemented as part or all of an electronic device through software, hardware, or a combination of the two. As shown in fig. 18, the volume cloud processing apparatus includes:
the system comprises an acquisition module 1, a display module and a display module, wherein the acquisition module is used for acquiring a drawing model of volume cloud in a scene to be displayed and illumination information corresponding to the drawing model;
the edge detection module 2 is used for performing edge detection according to the depth value of each pixel point in the scene to be displayed before rendering and the depth value of the volume cloud;
the object determining module 3 is configured to determine an object to be mixed, which coincides with the volume cloud, in the scene to be displayed according to an edge detection result;
and the semi-transparent mixing module 4 is used for carrying out semi-transparent mixing on the object to be mixed and the volume cloud, and obtaining the volume cloud to be displayed based on a semi-transparent mixing result and the illumination information.
Optionally, the semi-transparent mixing module 4 is configured to render the rendering model according to the illumination information; determining coincident pixel points of the object to be mixed and the volume cloud; sampling to obtain a first color buffer value and a first depth buffer value before the rendering of the coincident pixel point, and a second color buffer value and a second depth buffer value after the rendering of the coincident pixel point; taking the first color buffer value as an initial position input parameter of an interpolation calculator, taking the second color buffer value as a target position input parameter of the interpolation calculator, taking a difference value between the first depth buffer value and the second depth buffer value as an interpolation speed input parameter of the interpolation calculator, and obtaining a linear interpolation result calculated by the interpolation calculator as a final pixel color of the overlapped pixel point; and obtaining the volume cloud to be displayed based on the final pixel color of the coincident pixel point.
Optionally, the semitransparent mixing module 4 is configured to determine a coincident pixel point of the object to be mixed and the volume cloud; sampling to obtain a color buffer value and a depth buffer value of the coincident pixel point before rendering, and sampling a current color value and a current depth value of the coincident pixel point in the process of rendering the rendering model based on the illumination information; taking the difference value between the depth buffer value and the current depth value as a source mixing factor, taking the color buffer value as a source color, taking the current color value as a target color, performing mixing operation, and taking the mixed pixel color as the final pixel color of the coincident pixel point; rendering the drawing model based on the final pixel color of the coincident pixel point to obtain the volume cloud to be displayed.
Optionally, the rendering model is obtained by rendering at least one layer of mesh model outwards from an original mesh model of the volume cloud according to the vertex normal vector, and the semitransparent hybrid module 4 is configured to render each mesh model of the rendering model layer by layer according to an outward-inward order.
Optionally, the obtaining module 1 is configured to draw at least one layer of mesh model outwards from the original mesh model of the volume cloud according to the vertex normal vector; and screening the pixel points of the grid model based on the noise threshold value corresponding to each layer of the grid model to obtain the drawing model.
Optionally, the obtaining module 1 is configured to obtain a noise threshold corresponding to each layer of the mesh model; sampling a preset noise map based on each layer of the grid model to obtain a noise value; and screening the pixel points of which the noise threshold is smaller than or equal to the noise value for each layer of the grid model to obtain the drawing model.
Optionally, the obtaining module 1 is configured to obtain a noise function corresponding to each layer of the grid model, where the noise function is a linear function taking coordinates of the pixel points as variables; obtaining a noise boundary value corresponding to each layer of the grid model pixel points according to the noise function; and performing power operation on the noise boundary value to obtain the noise threshold value.
An embodiment of the present application further provides an electronic device, as shown in fig. 19, the electronic device may include: the system comprises a processor 1501, a communication interface 1502, a memory 1503 and a communication bus 1504, wherein the processor 1501, the communication interface 1502 and the memory 1503 complete communication with each other through the communication bus 1504.
A memory 1503 for storing a computer program;
The processor 1501, when executing the computer program stored in the memory 1503, implements the steps of the above method embodiments.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method embodiments.
It should be noted that, for the above-mentioned apparatus, electronic device and computer-readable storage medium embodiments, since they are basically similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
It is further noted that, herein, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A volume cloud processing method, comprising:
obtaining a drawing model of a volume cloud in a scene to be displayed and illumination information corresponding to the drawing model;
performing edge detection according to the depth value of each pixel point in the scene to be displayed before rendering and the depth value of the volume cloud;
determining, according to an edge detection result, an object to be mixed that coincides with the volume cloud in the scene to be displayed; and
performing semi-transparent mixing on the object to be mixed and the volume cloud, and obtaining the volume cloud to be displayed based on a semi-transparent mixing result and the illumination information.
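By way of illustration only, the following Python sketch shows one possible reading of the depth-based edge detection in claim 1: the depth of each pixel in the scene before rendering is compared with the depth of the volume cloud, and pixels whose depths nearly coincide are flagged as belonging to an object to be mixed. The function name, the tolerance value and the absolute-difference criterion are assumptions made for illustration, not features of this disclosure.

```python
import numpy as np

def detect_coincident_pixels(scene_depth, cloud_depth, tolerance=0.05):
    """Flag pixels whose pre-rendering scene depth is close to the volume
    cloud's depth, i.e. pixels where scene geometry coincides with the cloud.

    scene_depth, cloud_depth: 2-D numpy arrays of per-pixel depth values.
    tolerance: illustrative threshold controlling how small the depth gap
    must be for a pixel to count as part of the overlap region.
    """
    depth_gap = np.abs(scene_depth - cloud_depth)
    return depth_gap < tolerance  # boolean mask of pixels "to be mixed"
```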
2. The method of claim 1, wherein performing semi-transparent mixing on the object to be mixed and the volume cloud, and obtaining the volume cloud to be displayed based on a semi-transparent mixing result and the illumination information comprises:
rendering the drawing model according to the illumination information;
determining coincident pixel points of the object to be mixed and the volume cloud;
sampling a first color buffer value and a first depth buffer value of the coincident pixel point before rendering, and a second color buffer value and a second depth buffer value of the coincident pixel point after rendering;
taking the first color buffer value as the initial-position input parameter of an interpolation calculator, the second color buffer value as the target-position input parameter of the interpolation calculator, and the difference between the first depth buffer value and the second depth buffer value as the interpolation-speed input parameter of the interpolation calculator, and taking the linear interpolation result calculated by the interpolation calculator as the final pixel color of the coincident pixel point; and
obtaining the volume cloud to be displayed based on the final pixel color of the coincident pixel point.
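As a non-limiting sketch of claim 2, the linear interpolation can be pictured as follows: the pre-rendering color buffer value is the start of the interpolation, the post-rendering value is the target, and the depth-buffer difference controls how far the pixel moves between the two. Clamping the factor to [0, 1] and taking its absolute value are assumptions of this sketch, as are all names.

```python
import numpy as np

def final_pixel_color(color_before, color_after, depth_before, depth_after):
    """Linear interpolation between the pre-rendering color buffer value
    (initial position) and the post-rendering value (target position),
    driven by the depth-buffer difference.

    color_before, color_after: floats or numpy arrays (e.g. RGB triples).
    """
    t = np.clip(np.abs(depth_before - depth_after), 0.0, 1.0)
    return color_before + (color_after - color_before) * t
```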
3. The method of claim 1, wherein performing semi-transparent mixing on the object to be mixed and the volume cloud, and obtaining the volume cloud to be displayed based on a semi-transparent mixing result and the illumination information comprises:
determining coincident pixel points of the object to be mixed and the volume cloud;
sampling a color buffer value and a depth buffer value of the coincident pixel point before rendering, and sampling a current color value and a current depth value of the coincident pixel point in the process of rendering the drawing model based on the illumination information;
taking the difference between the depth buffer value and the current depth value as a source mixing factor, the color buffer value as a source color and the current color value as a target color, performing a mixing operation, and taking the mixed pixel color as the final pixel color of the coincident pixel point; and
rendering the drawing model based on the final pixel color of the coincident pixel point to obtain the volume cloud to be displayed.
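A non-limiting sketch of the mixing operation in claim 3: the depth gap between the stored depth buffer value and the depth sampled while drawing the cloud supplies the source mixing factor, the stored color is the source color and the in-flight color is the target color, combined in the usual source * factor + target * (1 - factor) form. Clamping the factor is an assumption of this sketch.

```python
import numpy as np

def mix_coincident_pixel(buffer_color, current_color, buffer_depth, current_depth):
    """Blend the pre-rendering buffer color with the color sampled while
    drawing the cloud, using the depth difference as the source factor."""
    src_factor = np.clip(buffer_depth - current_depth, 0.0, 1.0)
    return buffer_color * src_factor + current_color * (1.0 - src_factor)
```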
4. The method according to claim 3, wherein the drawing model is obtained by drawing at least one layer of mesh model outward from an original mesh model of the volume cloud according to vertex normal vectors, and the rendering the drawing model based on the final pixel color of the coincident pixel point to obtain the volume cloud to be displayed comprises:
rendering each mesh model of the drawing model layer by layer in order from outside to inside.
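Claim 4 fixes only the traversal order: the shell layers of the drawing model are rendered from the outermost layer inward. A trivial sketch, assuming the layer list is ordered from the original mesh outward and render_layer is a hypothetical callback that submits one layer for rendering:

```python
def render_drawing_model(mesh_layers, render_layer):
    """Render the shell layers one by one, outermost first, so each outer
    semi-transparent shell is blended before the layers beneath it.

    mesh_layers: layer meshes ordered from the original (innermost) mesh
    outward; render_layer: callback that draws a single layer.
    """
    for layer in reversed(mesh_layers):  # outside -> inside
        render_layer(layer)
```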
5. The method according to any one of claims 1-4, wherein the obtaining a drawing model of a volume cloud in a scene to be displayed comprises:
drawing at least one layer of mesh model outward from the original mesh model of the volume cloud according to vertex normal vectors; and
screening the pixel points of the mesh model based on the noise threshold corresponding to each layer of the mesh model to obtain the drawing model.
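A minimal sketch of the extrusion step of claim 5, which pushes copies of the original cloud mesh outward along the vertex normals; the layer count and uniform step size are illustrative placeholders, and the screening step is sketched under claim 6.

```python
import numpy as np

def build_shell_layers(vertices, normals, layer_count=8, step=0.1):
    """Extrude the original cloud mesh outward along its vertex normals,
    producing one additional shell per layer.

    vertices, normals: (N, 3) arrays; normals are assumed to be normalized.
    """
    layers = [vertices]  # layer 0 is the original mesh
    for i in range(1, layer_count + 1):
        layers.append(vertices + normals * (i * step))
    return layers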
6. The method of claim 5, wherein the screening the pixel points of the mesh model based on the noise threshold corresponding to each layer of the mesh model to obtain the drawing model comprises:
acquiring the noise threshold corresponding to each layer of the mesh model;
sampling a preset noise map based on each layer of the mesh model to obtain a noise value; and
screening, for each layer of the mesh model, the pixel points for which the noise threshold is smaller than or equal to the noise value, to obtain the drawing model.
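A sketch of the screening test of claim 6: a pixel of a given layer is kept only when that layer's noise threshold does not exceed the noise value sampled from the preset noise map, which is what erodes the outer shells into a cloud-like silhouette. Nearest-neighbour sampling of the noise map is an assumption of this sketch.

```python
import numpy as np

def screen_layer(noise_map, uv, noise_threshold):
    """Sample the preset noise map at each pixel's UV coordinate and keep
    only the pixels whose noise threshold is <= the sampled noise value.

    noise_map: 2-D array; uv: (N, 2) array of coordinates in [0, 1];
    noise_threshold: scalar or (N,) array of per-pixel thresholds.
    Returns a boolean mask of pixels to keep.
    """
    h, w = noise_map.shape
    px = (uv[:, 0] * (w - 1)).astype(int)
    py = (uv[:, 1] * (h - 1)).astype(int)
    noise_values = noise_map[py, px]
    return noise_threshold <= noise_values
```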
7. The method according to claim 6, wherein the acquiring the noise threshold corresponding to each layer of the mesh model comprises:
acquiring a noise function corresponding to each layer of the mesh model, wherein the noise function is a linear function taking the coordinates of the pixel points as variables;
obtaining, according to the noise function, a noise boundary value corresponding to the pixel points of each layer of the mesh model; and
performing a power operation on the noise boundary value to obtain the noise threshold.
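A sketch of claim 7's threshold computation: a linear function of the pixel coordinates yields a noise boundary value, and a power operation turns it into the layer's noise threshold. The coefficients, the clamp and the exponent are illustrative placeholders, not values from this disclosure.

```python
def layer_noise_threshold(x, y, kx=0.3, ky=0.2, b=0.1, exponent=2.0):
    """Evaluate a linear noise function of the pixel coordinates, then
    apply a power operation to obtain the layer's noise threshold."""
    boundary = kx * x + ky * y + b            # linear function of coordinates
    boundary = max(0.0, min(1.0, boundary))   # clamp (assumption of this sketch)
    return boundary ** exponent               # power operation -> threshold
```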
8. A volume cloud processing apparatus, comprising:
an acquisition module, configured to acquire a drawing model of a volume cloud in a scene to be displayed and illumination information corresponding to the drawing model;
an edge detection module, configured to perform edge detection according to the depth value of each pixel point in the scene to be displayed before rendering and the depth value of the volume cloud;
an object determining module, configured to determine, according to an edge detection result, an object to be mixed that coincides with the volume cloud in the scene to be displayed; and
a semi-transparent mixing module, configured to perform semi-transparent mixing on the object to be mixed and the volume cloud, and obtain the volume cloud to be displayed based on a semi-transparent mixing result and the illumination information.
9. An electronic device, comprising: a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program; and
the processor is configured to implement the method steps of any one of claims 1-7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method steps of any one of claims 1-7.
CN202011402256.7A 2020-12-02 2020-12-02 Volume cloud processing method and device, electronic equipment and storage medium Active CN112465941B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011402256.7A CN112465941B (en) 2020-12-02 2020-12-02 Volume cloud processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011402256.7A CN112465941B (en) 2020-12-02 2020-12-02 Volume cloud processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112465941A true CN112465941A (en) 2021-03-09
CN112465941B CN112465941B (en) 2023-04-28

Family

ID=74805385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011402256.7A Active CN112465941B (en) 2020-12-02 2020-12-02 Volume cloud processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112465941B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091363A (en) * 2014-07-09 2014-10-08 无锡梵天信息技术股份有限公司 Real-time size cloud computing method based on screen space
WO2018113173A1 (en) * 2016-12-24 2018-06-28 华为技术有限公司 Virtual reality display method and terminal
CN109035383A (en) * 2018-06-26 2018-12-18 苏州蜗牛数字科技股份有限公司 A kind of method for drafting, device and the computer readable storage medium of volume cloud
CN111508052A (en) * 2020-04-23 2020-08-07 网易(杭州)网络有限公司 Rendering method and device of three-dimensional grid body
CN111968215A (en) * 2020-07-29 2020-11-20 完美世界(北京)软件科技发展有限公司 Volume light rendering method and device, electronic equipment and storage medium
CN111968216A (en) * 2020-07-29 2020-11-20 完美世界(北京)软件科技发展有限公司 Volume cloud shadow rendering method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAYASHI, N. et al.: "Observation of submicron dust particles trapped in a diffused region of a low pressure radio frequency plasma", Physics of Plasmas *
QIU Hang et al.: "A Survey of Realistic Cloud Simulation Techniques" (云的真实感模拟技术综述), Computer Science (计算机科学) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313798A (en) * 2021-06-23 2021-08-27 完美世界(北京)软件科技发展有限公司 Cloud picture manufacturing method and device, storage medium and computer equipment

Also Published As

Publication number Publication date
CN112465941B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
CN112200900B (en) Volume cloud rendering method and device, electronic equipment and storage medium
WO2021129044A1 (en) Object rendering method and apparatus, and storage medium and electronic device
CN111508052B (en) Rendering method and device of three-dimensional grid body
US7034828B1 (en) Recirculating shade tree blender for a graphics system
CN110196746B (en) Interactive interface rendering method and device, electronic equipment and storage medium
US6580430B1 (en) Method and apparatus for providing improved fog effects in a graphics system
US7583264B2 (en) Apparatus and program for image generation
RU2427918C2 (en) Metaphor of 2d editing for 3d graphics
Li et al. Physically-based editing of indoor scene lighting from a single image
CN113674389B (en) Scene rendering method and device, electronic equipment and storage medium
WO2023185262A1 (en) Illumination rendering method and apparatus, computer device, and storage medium
CN113012273B (en) Illumination rendering method, device, medium and equipment based on target model
US7327364B2 (en) Method and apparatus for rendering three-dimensional images of objects with hand-drawn appearance in real time
CN112446943A (en) Image rendering method and device and computer readable storage medium
CN112884874A (en) Method, apparatus, device and medium for applying decals on virtual model
US7064753B2 (en) Image generating method, storage medium, image generating apparatus, data signal and program
KR101507776B1 (en) methof for rendering outline in three dimesion map
CN112819941A (en) Method, device, equipment and computer-readable storage medium for rendering water surface
CN112465941B (en) Volume cloud processing method and device, electronic equipment and storage medium
JP5848071B2 (en) A method for estimating the scattering of light in a homogeneous medium.
US8310483B2 (en) Tinting a surface to simulate a visual effect in a computer generated scene
Papanikolaou et al. Real-time separable subsurface scattering for animated virtual characters
CN117078838B (en) Object rendering method and device, storage medium and electronic equipment
CN117333603A (en) Virtual model rendering method, device, equipment and storage medium
Mahmud et al. Surrounding-aware screen-space-global-illumination using generative adversarial network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant