CN118115647A - Volume cloud rendering method, device, equipment and storage medium - Google Patents

Volume cloud rendering method, device, equipment and storage medium

Info

Publication number: CN118115647A
Authority: CN (China)
Prior art keywords: model, cloud, sequence frame, layer, grid
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202311763036.0A
Other languages: Chinese (zh)
Inventor: 陈凯豪
Current assignee: Netease Hangzhou Network Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202311763036.0A
Publication of CN118115647A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06T 15/04 Texture mapping
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The disclosure relates to the field of computers and provides a volume cloud rendering method, device, equipment and storage medium, wherein the method comprises the following steps: acquiring a preset cloud-like layer model and generating a sequence frame map of the cloud-like layer model; generating a corresponding planar grid model according to the map information of the sequence frame map, and generating a model to be rendered based on the planar grid model and the sequence frame map; inputting the map information of the sequence frame map into a preset material expression node, applying the material expression node to a preset cloud layer material, and adjusting the blend mode of the cloud layer material; and rendering the model to be rendered with the adjusted cloud layer material to realize a volumetric cloud effect. Using the preset cloud-like layer model saves modeling time and resources, and generating a sequence frame map of the model converts it into static texture images instead of computing the dynamic volumetric cloud effect in real time, so fewer computing resources and less memory are consumed.

Description

Volume cloud rendering method, device, equipment and storage medium
Technical Field
The disclosure relates to the field of computers, and in particular relates to a method, a device, equipment and a storage medium for volume cloud rendering.
Background
An important technical foundation of volumetric cloud technology in game engines is graphics rendering. Cloud layers are rendered using techniques such as voxels, volumetric lighting and particles, which can simulate the color, illumination and shadow characteristics of natural clouds to achieve a realistic cloud effect. Volumetric clouds are mostly produced with volumetric cloud and fog tools based on physical cloud simulation algorithms, which compute the shape and characteristics of the cloud layer by simulating the physical processes inside it, including turbulence, airflow, evaporation and condensation. However, because such tools are highly integrated and depend on real-time computation, the computational load is large and high computer performance and memory are required, so the rendering speed is slow.
Disclosure of Invention
The main purpose of this disclosure is to solve the technical problem that current volumetric clouds must be computed in real time to be rendered, which makes rendering slow.
The first aspect of the present disclosure provides a method for volume cloud rendering, the method comprising:
Acquiring a preset cloud-like layer model, and generating a sequence frame map of the cloud-like layer model;
Generating a corresponding plane grid model according to the mapping information of the sequence frame mapping, and generating a model to be rendered based on the plane grid model and the sequence frame mapping;
Inputting the mapping information of the sequence frame mapping into a preset material expression node, applying the material expression node to a preset cloud layer material, and adjusting the mixing mode of the cloud layer material;
and performing model rendering on the model to be rendered according to the adjusted cloud layer material, so as to realize a volume cloud effect.
A second aspect of the present disclosure provides a volume cloud rendering apparatus, the apparatus comprising:
The mapping generation module is used for acquiring a preset cloud-like layer model and generating a sequence frame mapping of the cloud-like layer model;
The rendering model generation module is used for generating a corresponding plane grid model according to the mapping information of the sequence frame mapping, and generating a model to be rendered based on the plane grid model and the sequence frame mapping;
the material determining module is used for inputting the mapping information of the sequence frame mapping into a preset material expression node, applying the material expression node to a preset cloud layer material and adjusting the mixing mode of the cloud layer material;
And the rendering module is used for performing model rendering on the model to be rendered according to the adjusted cloud layer material, so as to realize a volume cloud effect.
A third aspect of the present disclosure provides a volume cloud rendering apparatus, comprising: a memory and at least one processor, the memory having instructions stored therein, the memory and the at least one processor being interconnected by a line; the at least one processor invokes the instructions in the memory to cause the volume cloud rendering device to perform the steps of the volume cloud rendering method described above.
A fourth aspect of the present disclosure provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the steps of the volumetric cloud rendering method described above.
The method acquires a preset cloud-like layer model and generates a sequence frame map of the cloud-like layer model; generates a corresponding planar grid model according to the map information of the sequence frame map, and generates a model to be rendered based on the planar grid model and the sequence frame map; inputs the map information of the sequence frame map into a preset material expression node, applies the material expression node to a preset cloud layer material, and adjusts the blend mode of the cloud layer material; and renders the model to be rendered with the adjusted cloud layer material to realize a volumetric cloud effect. Using the preset cloud-like layer model saves modeling time and resources, and generating a sequence frame map of the model converts it into static texture images instead of computing the dynamic volumetric cloud effect in real time, so fewer computing resources and less memory are consumed.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the disclosure. The objectives and other advantages of the disclosure will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
FIG. 1 is a schematic diagram of one embodiment of a method of volume cloud rendering in an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of another embodiment of a volume cloud rendering method in an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of another embodiment of a volume cloud rendering method in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of one embodiment of a volume cloud rendering apparatus in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of another embodiment of a volume cloud rendering apparatus in an embodiment of the present disclosure;
Fig. 6 is a schematic diagram of one embodiment of a volume cloud rendering device in an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure provide a volume cloud rendering method, device, equipment and storage medium, in which a preset cloud-like layer model is acquired and a sequence frame map of the cloud-like layer model is generated; a corresponding planar grid model is generated according to the map information of the sequence frame map, and a model to be rendered is generated based on the planar grid model and the sequence frame map; the map information of the sequence frame map is input into a preset material expression node, the material expression node is applied to a preset cloud layer material, and the blend mode of the cloud layer material is adjusted; and the model to be rendered is rendered with the adjusted cloud layer material to realize a volumetric cloud effect. Using the preset cloud-like layer model saves modeling time and resources, and generating a sequence frame map of the model converts it into static texture images instead of computing the dynamic volumetric cloud effect in real time, so fewer computing resources and less memory are consumed.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, a specific flow of an embodiment of the present disclosure is described below. Referring to fig. 1, a first embodiment of a volume cloud rendering method in an embodiment of the present disclosure includes:
101. Acquiring a preset cloud-like layer model, and generating a sequence frame map of the cloud-like layer model;
In this embodiment, the cloud-like layer model is a model whose original shape has been adjusted to resemble the bumpy contours of a cloud layer. The preset cloud-like layer model can be obtained in various ways, for example from a computer-generated three-dimensional model or from real-world cloud image data. In this embodiment, cloud resources are produced offline with three-dimensional computer graphics modeling software such as Houdini or Blender; specifically, Houdini is used. After the cloud-like layer models are generated, they are stored as files, for example 3D model files or image sequence files, and loaded when needed.
Further, acquiring the preset cloud-like layer model and generating the sequence frame map of the cloud-like layer model includes: acquiring a preset cloud-like layer model together with the number of images per row and the total number of slice layers for the sequence frame map; slicing the cloud-like layer model transversely according to the total number of slice layers to obtain multiple slice planes; shooting the slice planes with a preset camera to obtain multiple sequence frame images; and placing the sequence frame images in order in a preset initial map according to the number of images per row to generate the sequence frame map of the cloud-like layer model.
Specifically, the preset cloud-like layer model, the number of images per row and the total number of slice layers are obtained, where the number of images per row and the total number of slice layers are set manually as required: the finer the desired cloud effect, the higher the total number of slice layers used to slice the cloud-like layer model, and as the total number of slice layers grows, the number of images per row is also increased so that the layout conforms to the mapping rule. The cloud-like layer model is sliced transversely according to the total number of slice layers to obtain multiple slice planes; that is, the model is segmented in the horizontal direction into a series of slice planes, each representing the shape of the cloud layer at a specific height. A preset camera then shoots the slice planes to obtain multiple sequence frame images, which are placed in order in a preset initial map according to the number of images per row. Each sequence frame image is attached to its corresponding position in the initial map until the whole map is filled, generating the sequence frame map of the cloud-like layer model. The shot images are ordered in the map by layer number, left to right and top to bottom, and the map is 2048 x 2048 or 4096 x 4096 pixels.
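As an illustration of this packing step, the following minimal sketch arranges already-rendered slice images into a single sequence frame map, left to right and top to bottom by layer number. The function name, the Pillow dependency and the example 8-per-row layout are illustrative assumptions, not part of the original disclosure.

```python
# Minimal sketch: pack per-slice renders into a sequence frame map (atlas).
# Assumes the slice planes were already rendered to equally sized images.
from PIL import Image

def pack_sequence_frames(frames, images_per_row, atlas_size=2048):
    """Place frames left-to-right, top-to-bottom in slice-layer order."""
    cell = atlas_size // images_per_row          # side length of one grid cell
    atlas = Image.new("RGB", (atlas_size, atlas_size), "black")
    for layer, frame in enumerate(frames):       # frames sorted by slice layer
        row, col = divmod(layer, images_per_row)
        atlas.paste(frame.resize((cell, cell)), (col * cell, row * cell))
    return atlas

# e.g. 64 slice layers laid out 8 per row in a 2048 x 2048 map:
# pack_sequence_frames(slice_images, images_per_row=8).save("cloud_frames.png")
```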
102. Generating a corresponding plane grid model according to the mapping information of the sequence frame mapping, and generating a model to be rendered based on the plane grid model and the sequence frame mapping;
In this embodiment, the sequence frame map includes a plurality of frames of sequence frame images, and the map information includes a number of rows and columns of graphics corresponding to the plurality of frames of sequence frame images in the sequence frame map; the generating a corresponding planar grid model according to the mapping information of the sequence frame mapping comprises the following steps: and creating a plane model, and carrying out grid division on the plane model according to the number of rows and columns of the graph of the multi-frame sequence frame image in the sequence frame map to obtain a plane grid model.
In particular, in order to restore the volumetric cloud effect produced in Houdini inside the game engine with minimal performance cost, model resources matching the previously rendered volumetric cloud sequence frame map resources need to be produced in three-dimensional computer graphics software such as 3ds Max. Different graphics software and engines use different rendering techniques and algorithms, so the volumetric cloud produced in Houdini cannot be rendered directly in the game engine in the same way; the engine may use different renderers and volumetric cloud technologies and cannot directly read the volumetric cloud sequence frame map resources generated by Houdini. By rebuilding corresponding model resources in 3ds Max, the volumetric cloud rendering techniques and algorithms supported by the game engine can be used to restore the Houdini volumetric cloud effect at the lowest performance cost, ensuring good performance and rendering quality in the engine and meeting its rendering standards and requirements.
Specifically, in three-dimensional computer graphics software such as 3ds Max, a Plane model is created and divided into a grid of length and width segments. The division is based on the number of rows and columns of images in the volumetric cloud sequence frame map, i.e., the layout of the sequence frame images in the row and column directions. Note that, to ensure each frame in the sequence frame map falls exactly into one cell of the planar grid model, this can be verified, for example by attaching the map to the planar grid model and checking the alignment.
Further, the sequence frame map comprises a plurality of frames of sequence frame images; the generating a model to be rendered based on the planar mesh model and the sequence frame map includes: placing the sequence frame map in a plane grid model, and determining grids corresponding to each sequence frame image in the plane grid model according to the positions of each sequence frame image in the sequence frame map in the plane grid model; dividing the plane grid model and a sequence frame map on the plane grid model according to grid lines in the plane grid model to obtain a plurality of grid blocks; stacking the grid blocks according to the sequence numbers of the corresponding sequence frame images to obtain a stacking model, and adjusting the stacking model to obtain a model to be rendered.
Specifically, the sequence frame map is applied to the created planar grid model so that each sequence frame image corresponds to its position in the model: the grid cell for each sequence frame image is determined from the image's position in the sequence frame map, and the grid lines of the planar grid model separate the cells into a number of grid blocks. The sequence frame map is split in the same way to match the grid blocks, and the split grid blocks are then stacked in order of the sequence numbers of their frame images, ensuring each grid block is stacked in its corresponding frame order.
Further, the stacking the plurality of grid blocks according to the sequence numbers of the corresponding sequence frame images to obtain a stacking model, and adjusting the stacking model to obtain a model to be rendered includes: centering and zeroing the positions of the grid blocks, and stacking the grid blocks on a Z axis of the grid blocks according to the sequence numbers of the corresponding sequence frame images to obtain a stacking model, wherein the Z axis is an axis in the vertical direction in a three-dimensional space where the plane grid model is located; determining a grid center point of each layer of grid block in the stacking model, and connecting the grid center points with vertexes of the corresponding grid blocks; and carrying out lifting adjustment on grid center points of each layer of grid blocks in the stacked model in the Z-axis direction to obtain a model to be rendered.
Specifically, to facilitate subsequent operations, the positions of all grid blocks are adjusted so that their center points lie at the coordinate origin, which makes the whole model easier to handle. The grid blocks are then stacked on the Z axis in order of their sequence frame numbers, each block offset by a constant Z increment frame by frame, forming a structure resembling a stack of pancakes. A point is then added at the center of each layer plane as the grid center point, connected to the four vertices of the plane's edge, and pulled up along the Z axis so each layer forms a shallow spike. This step prevents the model from degenerating to a single visible line when viewed edge-on, with the line of sight parallel to the planes.
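The stacking just described can be sketched in a few lines. The sketch below builds the "stacked pancake" geometry under stated assumptions: one centered quad per sequence frame offset by a constant Z step, plus a raised center point per layer joined to the quad's four corners so the model does not collapse to a line when viewed edge-on. All sizes and the function name are illustrative, not values from the disclosure.

```python
# Sketch of the stacked-slice geometry: centered quads stepped along Z,
# each with a raised center point fanned to the four corners.
def build_stacked_cloud_mesh(num_layers, size=100.0, z_step=5.0, spike=2.0):
    half = size / 2.0
    vertices, faces = [], []
    for layer in range(num_layers):
        z = layer * z_step                       # constant per-frame Z offset
        base = len(vertices)
        # four corners of the centered quad for this slice layer
        vertices += [(-half, -half, z), (half, -half, z),
                     (half, half, z), (-half, half, z)]
        # center point pulled up along Z into a shallow spike
        vertices.append((0.0, 0.0, z + spike))
        c = base + 4
        # four triangles fanning from the raised center to the quad edges
        faces += [(base, base + 1, c), (base + 1, base + 2, c),
                  (base + 2, base + 3, c), (base + 3, base, c)]
    return vertices, faces
```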
Further, after the step of pulling up the grid center point of each layer of grid blocks in the stacking model along the Z axis to obtain the model to be rendered, the method further includes: adjusting the normal of the grid center point to point along the Z axis, and adjusting the normals of the grid block vertices to fan outward, tilted forty-five degrees up from the horizontal.
Specifically, to ensure the accuracy of the restored volumetric cloud lighting information, the center point of the spiked, tower-shaped model is first selected as a reference point and its normal is adjusted to point vertically upward, so that the central region of the model is handled correctly in lighting calculations and matches the actual illumination. The normals of the surrounding edge points are then adjusted to extend outward at 45 degrees above the horizontal. This simulates the shading of the spiked model under illumination, adds texture and realism, and lets the normals reflect the lighting information more accurately, ensuring the restored volumetric cloud lighting is correct.
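A short sketch of this normal adjustment, assuming the per-layer quad geometry built above; the helper name is illustrative. The center normal points straight up the Z axis, while each corner normal mixes a unit outward horizontal direction with an equal vertical component, which is exactly a forty-five-degree tilt from the horizontal.

```python
import math

def cloud_slice_normals(corners):
    """corners: the four (x, y, z) quad vertices of one slice layer."""
    center_normal = (0.0, 0.0, 1.0)              # vertical, along +Z
    corner_normals = []
    for x, y, _ in corners:
        d = math.hypot(x, y) or 1.0
        # unit outward horizontal direction plus an equal vertical component
        # = a normal tilted 45 degrees up from the horizontal plane
        nx, ny, nz = x / d, y / d, 1.0
        length = math.sqrt(nx * nx + ny * ny + nz * nz)
        corner_normals.append((nx / length, ny / length, nz / length))
    return center_normal, corner_normals
```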
In this embodiment, after stacking the plurality of grid blocks according to the sequence numbers of the corresponding sequential frame images to obtain a stacking model, and adjusting the stacking model to obtain a model to be rendered, the method further includes: traversing a plurality of grid blocks of the model to be rendered, and projecting a sequence frame image on the traversed grid blocks to a next layer of grid blocks in sequence; and when the traversed grid block is the last layer of grid block of the model to be rendered, projecting the traversed grid block to the model below the model to be rendered.
Specifically, to better restore the shaded lighting of the cloud material, a default lighting model can be set, and the dynamic shadow option is enabled in the scene settings so that the shadow of each cloud layer's slice image is projected onto the next layer; the continuous superposition forms a finer projected shape. Because the cloud material is opaque, the shadow each layer casts on the next is computed by Lightmass, the baking tool built into the game engine for computing static lighting and shadows. During baking, Lightmass takes the lighting information and static mesh information in the scene as input, computes the scene's illumination and shadows, and stores the result in a lightmap, a texture containing the lighting and shading of every static object in the scene. When the scene is viewed at runtime, the engine (Unreal Engine, UE) displays the lighting and shadows of objects from the information stored in the lightmap. This means that even when an upper layer occludes a lower one, the lower layer is still affected by the upper layer's shadow. The quality and speed of baking can be controlled through the Lightmass parameters; for example, the resolution and quality of the lightmap can be raised for higher-quality lighting and shadows, at the cost of longer bake times and larger lightmaps. Dynamic shadows can also be used to simulate the shadows of moving objects: they need no pre-baking, are generated in real time at runtime, and must be enabled in the scene settings. During projection, the lowest layer's shadow is cast onto the ground or onto the model beneath the slice model.
103. Inputting the mapping information of the sequence frame mapping into a preset material expression node, applying the material expression node to a preset cloud layer material, and adjusting the mixing mode of the cloud layer material;
In this embodiment, the map information of each frame of the sequence frame map is input into a preset material expression node. This can be done by taking the map file as input and setting the node parameters so the map data is parsed and processed correctly. The material expression node is then applied to a preset cloud layer material: the node is connected to the cloud material, ensuring its output is correctly passed to the material's texture channel. With the material expression node applied, the blend mode of the cloud material is set to Masked. This mode has low performance cost and helps optimize game performance: fully opaque areas are kept while transparent areas are discarded, reducing the computation load.
104. And performing model rendering on the model to be rendered according to the adjusted cloud layer material, so as to realize a volume cloud effect.
In this embodiment, the adjusted cloud layer material is applied to the model to be rendered by assigning it to the model's rendering node or to a material channel in the renderer. After ensuring the material nodes are connected correctly and the adjusted cloud material parameters are passed to the corresponding renderer, the volumetric cloud parameters are tuned as needed to reach the desired effect. These parameters may include the density, color, shape and motion of the cloud layer; the camera view, lighting setup and other rendering options are defined according to the render settings and scene requirements. When rendering starts, the rendering software or engine presents the volumetric cloud effect in the result according to the configured cloud material and parameters.
In this embodiment, a preset cloud-like layer model is acquired and a sequence frame map of the cloud-like layer model is generated; a corresponding planar grid model is generated according to the map information of the sequence frame map, and a model to be rendered is generated based on the planar grid model and the sequence frame map; the map information of the sequence frame map is input into a preset material expression node, the material expression node is applied to a preset cloud layer material, and the blend mode of the cloud layer material is adjusted; and the model to be rendered is rendered with the adjusted cloud layer material to realize a volumetric cloud effect. Using the preset cloud-like layer model saves modeling time and resources, and generating a sequence frame map of the model converts it into static texture images instead of computing the dynamic volumetric cloud effect in real time, so fewer computing resources and less memory are consumed.
Referring to fig. 2, another embodiment of a volume cloud rendering method in an embodiment of the present disclosure includes:
201. Acquiring a plurality of preset sphere models, and combining the sphere models to obtain a sphere combined model;
In practical applications, the preset cloud-like layer model may be obtained in various ways, for example from a computer-generated three-dimensional model or from real-world cloud image data. In this embodiment, cloud resources are produced offline with three-dimensional computer graphics modeling software such as Houdini or Blender. Taking Houdini as an example, several spheres are first created in Houdini and combined into a whole with a Merge node to obtain a sphere combination model, whose shape is the approximate shape of the cloud to be generated.
202. Connecting the sphere combination model to a preset volume cloud container template, and generating a cloud-like layer model corresponding to the sphere combination model according to the volume cloud container template;
In this embodiment, after the sphere combination model is generated, it is connected to the volume cloud container template's Cloud node, a node dedicated to generating a volumetric cloud model from a container template. According to the geometric information and material properties of the combined sphere model, a corresponding volumetric cloud effect is generated in the render result, yielding the cloud-like layer model.
203. Generating a volume cloud noise image according to a preset volume cloud noise tool, and adding the volume cloud noise image into a volume cloud container template;
In this embodiment, in the volumetric cloud noise tool (CloudNoise), a suitable VEX expression is entered in the VEXpression field to generate noise values; for example, a turbnoise function may be used to produce continuous noise. Parameters such as frequency, amplitude and offset can also be adjusted as needed to obtain the desired noise effect, tuning the noise values until the noise image forms a satisfactory cloud shape. By continually modifying the noise parameters, for example raising or lowering the frequency or adjusting the amplitude or offset, a satisfactory volumetric cloud shape is reached, and the generated noise image is finally added to the density or shape attributes of the Cloud node. In this way the noise image influences the density or shape inside the volumetric cloud container, achieving a realistic volumetric cloud effect.
204. Connecting the volume cloud container template to a preset cloud noise node, and adjusting the cloud-like layer model through the cloud noise node based on the volume cloud noise image in the volume cloud container template;
In this embodiment, the model stored by the volumetric cloud container template's Cloud node in the previous step is connected to the CloudNoise cloud noise node to fine-tune the cloud shape; the disturbance shape of the volumetric cloud can be adjusted by modifying the Amplitude and Type attributes until a satisfactory shape is reached, as the sketch below illustrates.
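The Houdini side of steps 201-204 can be scripted rather than built by hand. The sketch below uses Houdini's Python API (hou); the SOP type names "cloud" and "cloudnoise" are assumed to correspond to the Cloud and Cloud Noise nodes, and all positions, radii and parameter names are illustrative assumptions rather than values from the disclosure.

```python
import hou  # Houdini's Python module, available inside a Houdini session

geo = hou.node("/obj").createNode("geo", "cloud_layer")

# Step 201: several spheres roughing out the overall cloud shape (assumed values)
spheres = []
for i, (pos, radius) in enumerate([((0, 0, 0), 5.0),
                                   ((6, 1, 0), 4.0),
                                   ((-5, 0.5, 1), 3.5)]):
    s = geo.createNode("sphere", "sphere_%d" % i)
    s.parmTuple("t").set(pos)                   # sphere center
    s.parmTuple("rad").set((radius,) * 3)       # assumed radius parm tuple
    spheres.append(s)

merge = geo.createNode("merge")                 # combine spheres into one model
for i, s in enumerate(spheres):
    merge.setInput(i, s)

cloud = geo.createNode("cloud")                 # step 202: volume cloud container
cloud.setInput(0, merge)

noise = geo.createNode("cloudnoise")            # steps 203-204: perturb the shape
noise.setInput(0, cloud)
noise.parm("amp").set(0.8)                      # assumed Amplitude parm name
noise.setDisplayFlag(True)
geo.layoutChildren()
```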
205. Acquiring a preset cloud-like layer model, and generating a sequence frame map of the cloud-like layer model;
206. Generating a corresponding plane grid model according to the mapping information of the sequence frame mapping, and generating a model to be rendered based on the plane grid model and the sequence frame mapping;
207. inputting mapping information of sequence frame mapping into preset material expression nodes, applying the material expression nodes to preset cloud layer materials, and adjusting a mixing mode of the cloud layer materials;
208. And performing model rendering on the model to be rendered according to the adjusted cloud layer material to realize the volume cloud effect.
In this embodiment, steps 205-208 are similar to steps 101-104 in the first embodiment, and will not be described here.
In this embodiment, a preset cloud-like layer model is acquired and a sequence frame map of the cloud-like layer model is generated; a corresponding planar grid model is generated according to the map information of the sequence frame map, and a model to be rendered is generated based on the planar grid model and the sequence frame map; the map information of the sequence frame map is input into a preset material expression node, the material expression node is applied to a preset cloud layer material, and the blend mode of the cloud layer material is adjusted; and the model to be rendered is rendered with the adjusted cloud layer material to realize a volumetric cloud effect. Using the preset cloud-like layer model saves modeling time and resources, and generating a sequence frame map of the model converts it into static texture images instead of computing the dynamic volumetric cloud effect in real time, so fewer computing resources and less memory are consumed.
Referring to fig. 3, another embodiment of a volume cloud rendering method in an embodiment of the present disclosure includes:
301. acquiring a preset cloud-like layer model, and generating a sequence frame map of the cloud-like layer model;
302. generating a corresponding plane grid model according to the mapping information of the sequence frame mapping, and generating a model to be rendered based on the plane grid model and the sequence frame mapping;
In this embodiment, steps 301 to 302 are similar to steps 101 to 102 in the first embodiment, and will not be described here again.
303. Storing the sequence frame map in a main texture node;
In this embodiment, after the texture settings are configured, a TextureSample node is added and renamed MainTex to store the sequence frame map of the volumetric cloud. The TextureSample node is one of the texture nodes commonly used in the game engine to sample pixel colors from a texture, and MainTex is a common term in game engines for the main texture.
304. Taking the R channel value of the sequence frame map in the main texture node as the base and the preset transparency intensity as the exponent, and inputting the exponentiation result into a Power node;
In this embodiment, since the RGB channels of the rendered sequence frame map contain the same image, only the R channel is needed. A variable Alpha_Power with a default value of 0.01 may be set to adjust the transparency intensity of the cloud, and a Power node is used: the R channel of the cloud's sequence frame map node MainTex is used as the Base, the variable Alpha_Power as the exponent Exp, and the output is fed to a DitherTemporalAA material expression node for transparency blending to achieve a pseudo-translucent effect.
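As a stand-in for this part of the material graph, the sketch below reproduces the math in plain Python: the R channel is raised to the Alpha_Power exponent (the Power node), and the result drives a screen-door dither test, which is how a Masked material with DitherTemporalAA fakes translucency. The 4x4 Bayer matrix is a common dithering choice used here for illustration, not necessarily the engine's exact pattern.

```python
# Dither threshold matrix (values 0..15 mapped into the open interval (0, 1))
BAYER_4X4 = [[0, 8, 2, 10],
             [12, 4, 14, 6],
             [3, 11, 1, 9],
             [15, 7, 13, 5]]

def masked_opacity(tex_r, alpha_power, px, py):
    """True if the pixel at (px, py) survives the dithered opacity mask."""
    opacity = tex_r ** alpha_power               # Power node: Base ** Exp
    threshold = (BAYER_4X4[py % 4][px % 4] + 0.5) / 16.0
    return opacity > threshold                   # Masked blend: keep or discard

# With Alpha_Power = 0.01, even a faint texel (R = 0.2) is nearly always kept,
# since 0.2 ** 0.01 is about 0.984, above almost every dither threshold.
```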
305. Inputting the exponentiation result into a preset material expression node, applying the material expression node to a preset cloud layer material, and adjusting the blend mode of the cloud layer material;
In practical application, an ordinary material is created in the game engine as the cloud layer material and its blend mode is set to Masked. To change the material from opaque to pseudo-translucent, the DitherTemporalAA material expression node is used for blending, producing a translucent gradient and a dynamic transparency effect; by applying the DitherTemporalAA node to the cloud material, the blend mode of the cloud material is thus adjusted to Masked.
306. And performing model rendering on the model to be rendered according to the adjusted cloud layer material to realize the volume cloud effect.
In this embodiment, before performing model rendering on the model to be rendered according to the adjusted cloud layer material to implement a volumetric cloud effect, the method further includes: and obtaining a preset distortion map, and performing disturbance processing on the sequence frame map in the model to be rendered through the distortion map.
Specifically, perturbing the sequence frame map with a simple distortion map can simulate the flowing changes of a natural cloud layer. Concretely, each layer image of the volumetric cloud sequence frame map is perturbed while flowing by a noise map with differing RGB channels so as to produce distortion. For the distortion noise map to align with each layer image of the volumetric cloud sequence frame, the distortion map must be tiled according to the row and column counts of the sequence frame map: first a TextureSample node named noise_Tex is added to store the distortion map, and a four-dimensional variable named noise_UV is added whose RG channels are set to the row and column counts of the sequence frame images; an Append node then combines the RG channels of noise_UV into a two-dimensional value that is multiplied with the UVs of the noise_Tex map.
Meanwhile, to let the distortion map simulate the flow of distorted air, the BA channels of the noise_UV variable are used as the horizontal wind direction and speed to scroll the distortion map: a Panner moving-texture UV coordinate node is added and connected to the UVs input of the noise_Tex distortion map node, a new Append node combines the BA channels of noise_UV into a two-dimensional value connected to the Speed input of the Panner node, and the product of the RG channels of noise_UV with the UVs of the noise_Tex map is connected to its Coordinate input. Further, for the distortion map to perturb the volumetric cloud sequence frame map, its RG channels are blended with the map's UVs; to let artists adjust the disturbance intensity in real time, a variable named noise_post with a default of 0.01 is added and multiplied with the RG channels of the noise_Tex map, and the result is added to the UVs of the volumetric cloud sequence frame map MainTex and fed into the UVs input of the MainTex node. Meanwhile, to enrich the cloud disturbance, the B channel of the noise_Tex distortion map can serve as a mask for the sequence frame map: a Lerp interpolation node is added with its B input set to 0, the B channel of the noise_Tex distortion map is connected to the Alpha input of the Lerp node, the R channel of the MainTex sequence frame map is fed to the A input, and the Lerp output finally feeds the Base input of the Power node, as the sketch below illustrates.
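The UV plumbing above condenses to a few lines of arithmetic. The sketch below is a plain-Python stand-in for that part of the material graph, keeping the node roles in comments; sample_rg stands for an assumed bilinear texture lookup and is not an engine call.

```python
def perturbed_main_uv(uv, time, noise_uv, noise_power, sample_rg):
    """Return the distorted UVs used to sample the MainTex sequence frame map."""
    rows_cols = (noise_uv[0], noise_uv[1])       # RG of noise_UV: rows/columns
    wind = (noise_uv[2], noise_uv[3])            # BA of noise_UV: direction/speed
    # Panner node: tile by rows/columns and scroll the coordinates over time
    noise_coord = (uv[0] * rows_cols[0] + wind[0] * time,
                   uv[1] * rows_cols[1] + wind[1] * time)
    r, g = sample_rg(noise_coord)                # RG channels of noise_Tex
    # scale by the disturbance intensity and offset the MainTex UVs
    return (uv[0] + r * noise_power, uv[1] + g * noise_power)
```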
In this embodiment, a preset cloud-like layer model is acquired and a sequence frame map of the cloud-like layer model is generated; a corresponding planar grid model is generated according to the map information of the sequence frame map, and a model to be rendered is generated based on the planar grid map and the sequence frame map; the map information of the sequence frame map is input into a preset material expression node, the material expression node is applied to a preset cloud layer material, and the blend mode of the cloud layer material is adjusted; and the model to be rendered is rendered with the adjusted cloud layer material to realize a volumetric cloud effect. Using the preset cloud-like layer model saves modeling time and resources, and generating a sequence frame map of the model converts it into static texture images instead of computing the dynamic volumetric cloud effect in real time, so fewer computing resources and less memory are consumed.
The method for rendering a volume cloud in an embodiment of the present disclosure is described above, and the apparatus for rendering a volume cloud in an embodiment of the present disclosure is described below, referring to fig. 4, where an embodiment of the apparatus for rendering a volume cloud in an embodiment of the present disclosure includes:
the map generation module 401 is configured to obtain a preset cloud-like layer model, and generate a sequence frame map of the cloud-like layer model;
A rendering model generating module 402, configured to generate a corresponding planar mesh model according to the mapping information of the sequence frame mapping, and generate a model to be rendered based on the planar mesh model and the sequence frame mapping;
The material determining module 403 is configured to input mapping information of the sequence frame mapping into a preset material expression node, apply the material expression node to a preset cloud layer material, and adjust the blend mode of the cloud layer material;
And the rendering module 404 is configured to perform model rendering on the model to be rendered according to the adjusted cloud layer material, so as to implement a volume cloud effect.
In this embodiment, a preset cloud-like layer model is acquired and a sequence frame map of the cloud-like layer model is generated; a corresponding planar grid model is generated according to the map information of the sequence frame map, and a model to be rendered is generated based on the planar grid model and the sequence frame map; the map information of the sequence frame map is input into a preset material expression node, the material expression node is applied to a preset cloud layer material, and the blend mode of the cloud layer material is adjusted; and the model to be rendered is rendered with the adjusted cloud layer material to realize a volumetric cloud effect. Using the preset cloud-like layer model saves modeling time and resources, and generating a sequence frame map of the model converts it into static texture images instead of computing the dynamic volumetric cloud effect in real time, so fewer computing resources and less memory are consumed.
Referring to fig. 5, another embodiment of a volume cloud rendering apparatus according to an embodiment of the present disclosure includes:
the map generation module 401 is configured to obtain a preset cloud-like layer model, and generate a sequence frame map of the cloud-like layer model;
A rendering model generating module 402, configured to generate a corresponding planar mesh model according to the mapping information of the sequence frame mapping, and generate a model to be rendered based on the planar mesh model and the sequence frame mapping;
The material determining module 403 is configured to input mapping information of the sequence frame mapping into a preset material expression node, apply the material expression node to a preset cloud layer material, and adjust the blend mode of the cloud layer material;
And the rendering module 404 is configured to perform model rendering on the model to be rendered according to the adjusted cloud layer material, so as to implement a volume cloud effect.
In a possible implementation manner, the volume cloud rendering apparatus further includes a cloud layer model generating module 405, where the cloud layer model generating module 405 is specifically configured to:
Acquiring a plurality of preset sphere models, and combining the sphere models to obtain a sphere combination model;
And connecting the sphere combination model to a preset volume cloud container template, and generating a cloud-like layer model corresponding to the sphere combination model according to the volume cloud container template.
In a possible implementation manner, the volume cloud rendering device further includes a model adjustment module 406, where the model adjustment module 406 is specifically configured to:
generating a volume cloud noise image according to a preset volume cloud noise tool, and adding the volume cloud noise image into the volume cloud container template;
and connecting the volume cloud container template to a preset cloud noise node, and adjusting the cloud-like layer model through the cloud noise node based on the volume cloud noise image in the volume cloud container template.
In a possible implementation, the map generation module 401 is specifically configured to:
acquiring a preset cloud-like layer model together with the number of images per row and the total number of slice layers for the sequence frame map;
transversely slicing the cloud-like layer model according to the total slice layer number to obtain a multi-layer slice plane;
Shooting the multi-layer slice plane through a preset camera to obtain multi-frame sequence frame images;
And according to the number of each row of images, sequentially placing the multi-frame sequence frame images in a preset initial mapping to generate the sequence frame mapping of the cloud-like layer model.
In a possible implementation manner, the sequence frame map includes a plurality of frames of sequence frame images, and the map information includes the number of graphics rows and columns corresponding to the frames of sequence frame images in the sequence frame map;
The rendering model generation module 402 is specifically configured to:
And creating a plane model, and carrying out grid division on the plane model according to the number of rows and columns of the graph of the multi-frame sequence frame image in the sequence frame map to obtain a plane grid model.
In a possible embodiment, the sequence frame map comprises a plurality of frames of sequence frame images; the rendering model generation module 402 is specifically further configured to:
Placing the sequence frame map on the planar grid model, and determining the grid corresponding to each sequence frame image in the planar grid model according to the position of each sequence frame image in the sequence frame map;
Dividing the plane grid model and a sequence frame map on the plane grid model according to grid lines in the plane grid model to obtain a plurality of grid blocks;
Stacking the grid blocks according to the sequence numbers of the corresponding sequence frame images to obtain a stacking model, and adjusting the stacking model to obtain a model to be rendered.
In a possible implementation, the rendering model generation module 402 is specifically further configured to:
centering and zeroing the positions of the grid blocks, and stacking the grid blocks on a Z axis of the grid blocks according to the sequence numbers of the corresponding sequence frame images to obtain a stacking model, wherein the Z axis is an axis in the vertical direction in a three-dimensional space where the plane grid model is located;
Determining a grid center point of each layer of grid block in the stacking model, and connecting the grid center points with vertexes of the corresponding grid blocks;
And carrying out lifting adjustment on grid center points of each layer of grid blocks in the stacked model in the Z-axis direction to obtain a model to be rendered.
In a possible implementation manner, the volume cloud rendering device further includes a normal adjustment module 407, where the normal adjustment module 407 is specifically configured to:
The normal of the grid center point is adjusted to point along the Z axis, and the normals of the grid block vertices are adjusted to fan outward, tilted forty-five degrees up from the horizontal.
In a possible embodiment, the volume cloud rendering device further comprises a projection module 408, the projection module 408 being specifically configured to:
Traversing a plurality of grid blocks of the model to be rendered, and projecting a sequence frame image on the traversed grid blocks to a next layer of grid blocks in sequence;
And when the traversed grid block is the last layer of grid block of the model to be rendered, projecting the traversed grid block to the model below the model to be rendered.
In a possible implementation, the material determining module 403 is specifically configured to:
storing the sequence frame map in a main texture node;
taking the R channel value of the sequence frame map in the main texture node as the base and the preset transparency intensity as the exponent, and inputting the exponentiation result into a Power node;
inputting the exponentiation result into a preset material expression node.
In a possible implementation manner, the volume cloud rendering device further includes a map perturbation module 409, where the map perturbation module 409 is specifically configured to:
and obtaining a preset distortion map, and performing disturbance processing on the sequence frame map in the model to be rendered through the distortion map.
In this embodiment, a preset cloud-like layer model is acquired and a sequence frame map of the cloud-like layer model is generated; a corresponding planar grid model is generated according to the map information of the sequence frame map, and a model to be rendered is generated based on the planar grid model and the sequence frame map; the map information of the sequence frame map is input into a preset material expression node, the material expression node is applied to a preset cloud layer material, and the blend mode of the cloud layer material is adjusted; and the model to be rendered is rendered with the adjusted cloud layer material to realize a volumetric cloud effect. Using the preset cloud-like layer model saves modeling time and resources, and generating a sequence frame map of the model converts it into static texture images instead of computing the dynamic volumetric cloud effect in real time, so fewer computing resources and less memory are consumed.
The medium volume cloud rendering device in the embodiment of the present disclosure is described in detail from the point of view of the modularized functional entity in fig. 4 and 5 above, and the volume cloud rendering apparatus in the embodiment of the present disclosure is described in detail from the point of view of hardware processing below.
Fig. 6 is a schematic structural diagram of a volumetric cloud rendering device 600 according to an embodiment of the present disclosure. The volumetric cloud rendering device 600 may vary considerably in configuration or performance, and may include one or more processors (central processing units, CPU) 610 (e.g., one or more processors), a memory 620, and one or more storage media 630 (e.g., one or more mass storage devices) storing applications 633 or data 632. The memory 620 and the storage medium 630 may be transitory or persistent storage. The program stored in the storage medium 630 may include one or more modules (not shown), each of which may include a series of instruction operations for the volumetric cloud rendering device 600. Further, the processor 610 may be configured to communicate with the storage medium 630 and execute the series of instruction operations in the storage medium 630 on the volumetric cloud rendering device 600 to implement the following steps:
Acquiring a preset cloud-like layer model and generating a sequence frame map of the cloud-like layer model; generating a corresponding planar grid model according to the map information of the sequence frame map, and generating a model to be rendered based on the planar grid model and the sequence frame map; inputting the map information of the sequence frame map into a preset material expression node, applying the material expression node to a preset cloud layer material, and adjusting the blend mode of the cloud layer material; and rendering the model to be rendered with the adjusted cloud layer material to realize a volumetric cloud effect. Using the preset cloud-like layer model saves modeling time and resources, and generating a sequence frame map of the model converts it into static texture images instead of computing the dynamic volumetric cloud effect in real time, so fewer computing resources and less memory are consumed.
Optionally, before obtaining the preset cloud-like layer model and generating the sequence frame map of the cloud-like layer model, the method further includes: acquiring a plurality of preset sphere models, and combining the sphere models to obtain a sphere combination model; and connecting the sphere combination model to a preset volume cloud container template, and generating a cloud-like layer model corresponding to the sphere combination model according to the volume cloud container template.
The cloud-like layer model is generated in advance and does not need to be generated during real-time rendering, so fewer computing resources and less memory are consumed.
Optionally, after the sphere combination model is connected to the preset volume cloud container template and the cloud-like layer model corresponding to the sphere combination model is generated according to the volume cloud container template, the method further includes: generating a volume cloud noise image according to a preset volume cloud noise tool, and adding the volume cloud noise image into the volume cloud container template; and connecting the volume cloud container template to a preset cloud noise node, and adjusting the cloud-like layer model through the cloud noise node based on the volume cloud noise image in the volume cloud container template.
By generating the noise image, the random distribution and the shape of the cloud layer can be simulated to realize the realistic volume cloud effect.
Optionally, obtaining the preset cloud-like layer model and generating the sequence frame map of the cloud-like layer model includes: acquiring a preset cloud-like layer model together with the number of images per row and the total number of slice layers for the sequence frame map; slicing the cloud-like layer model transversely according to the total number of slice layers to obtain multiple slice planes; shooting the slice planes with a preset camera to obtain multiple sequence frame images; and placing the sequence frame images in order in a preset initial map according to the number of images per row to generate the sequence frame map of the cloud-like layer model.
In this way, the corresponding sequence frame map can be generated from the preset cloud-like layer model. The sequence frame map contains a plurality of images, each representing the appearance of the cloud-like layer model at a particular slice layer. Playing back these images simulates the dynamic effect of the cloud layer, increasing the fidelity and visual appeal of the scene.
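For concreteness, a sketch of the tiling step follows: per-slice renders are packed row by row into one atlas texture. The grayscale layout and the ceiling-division row count are assumptions; an engine's capture tooling would normally perform this packing.

```python
import numpy as np

def build_sequence_frame_map(slice_images, images_per_row):
    # slice_images: list of equally sized (H, W) grayscale renders, one per
    # slice layer, in slicing order.
    h, w = slice_images[0].shape
    rows = -(-len(slice_images) // images_per_row)   # ceiling division
    atlas = np.zeros((rows * h, images_per_row * w), dtype=slice_images[0].dtype)
    for i, img in enumerate(slice_images):
        r, c = divmod(i, images_per_row)
        atlas[r * h:(r + 1) * h, c * w:(c + 1) * w] = img
    return atlas
```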
Optionally, the sequence frame map includes multiple frames of sequence frame images, and the mapping information includes the numbers of rows and columns in which the sequence frame images are arranged in the sequence frame map; generating the corresponding plane grid model according to the mapping information of the sequence frame map includes: creating a plane model, and dividing the plane model into a grid according to the numbers of rows and columns of the sequence frame images in the sequence frame map to obtain the plane grid model.
In this way, the plane grid model and the sequence frame map can be combined to simulate complex animation effects on a plane, with the sequence frame map laid onto the plane grid model cell by cell.
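The grid division itself can be sketched as follows, assuming a unit plane centred at the origin and one grid cell per frame; the vertex layout is a simplification of whatever mesh representation the engine actually uses:

```python
import numpy as np

def plane_grid_vertices(rows, cols, size=1.0):
    # (rows+1) x (cols+1) lattice of vertices on the XY plane; each of the
    # rows x cols cells will hold exactly one frame of the atlas.
    xs = np.linspace(-size / 2.0, size / 2.0, cols + 1)
    ys = np.linspace(-size / 2.0, size / 2.0, rows + 1)
    return np.array([(x, y, 0.0) for y in ys for x in xs])
```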
Optionally, the sequence frame map includes multiple frames of sequence frame images; generating the model to be rendered based on the plane grid model and the sequence frame map includes: placing the sequence frame map on the plane grid model, and determining the grid corresponding to each sequence frame image in the plane grid model according to the position of that image within the sequence frame map; dividing the plane grid model, together with the sequence frame map on it, along the grid lines of the plane grid model to obtain a plurality of grid blocks; and stacking the grid blocks according to the sequence numbers of their corresponding sequence frame images to obtain a stacking model, then adjusting the stacking model to obtain the model to be rendered.
Dividing the sequence frame map into grid blocks and stacking them as needed allows memory and rendering resources to be managed and used effectively.
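A sketch of the division step, cutting the atlas into one block per frame in slice order; the same row and column indices would also partition the plane mesh. The uniform block size is an assumption:

```python
def split_into_grid_blocks(atlas, rows, cols):
    # Returns the per-frame sub-images in slice order (row-major),
    # mirroring how the plane grid model is cut along its grid lines.
    h, w = atlas.shape[0] // rows, atlas.shape[1] // cols
    return [atlas[r * h:(r + 1) * h, c * w:(c + 1) * w]
            for r in range(rows) for c in range(cols)]
```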
Optionally, stacking the grid blocks according to the sequence numbers of their corresponding sequence frame images to obtain the stacking model, and adjusting the stacking model to obtain the model to be rendered, includes: centering each grid block's position at zero, and stacking the grid blocks along the Z axis according to the sequence numbers of their corresponding sequence frame images to obtain the stacking model; determining the grid center point of each layer of grid blocks in the stacking model, and connecting each grid center point to the vertices of its grid block; and lifting the grid center point of each layer of grid blocks in the stacking model along the Z axis to obtain the model to be rendered.
Centering the grid blocks at zero ensures that their positions are accurate and consistent when stacked. This matters in animation and game scenes where object positions must be controlled precisely, and it avoids positional offset or misalignment.
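A sketch of the centre-zero-and-stack adjustment, under assumed values for the layer spacing and centre lift; the disclosure does not give concrete numbers, and the per-block vertex arrays here are illustrative:

```python
import numpy as np

def stack_grid_blocks(block_vertices, layer_gap=0.1, centre_lift=0.05):
    # block_vertices: list of (N, 3) vertex arrays, one per grid block,
    # ordered by the sequence number of the corresponding frame image.
    stacked = []
    for i, verts in enumerate(block_vertices):
        v = verts - verts.mean(axis=0)      # centre-and-zero the block
        v[:, 2] += i * layer_gap            # stack along Z in slice order
        centre = v.mean(axis=0)
        centre[2] += centre_lift            # lift the grid centre point
        stacked.append((v, centre))         # centre later connects to the rim
    return stacked
```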
Optionally, after lifting the grid center point of each layer of grid blocks in the stacking model along the Z axis to obtain the model to be rendered, the method further includes: adjusting the normal direction of each grid center point to point along the Z axis, and adjusting the normal directions of the grid block's vertices to tilt forty-five degrees from the horizontal and fan outward.
Adjusting the normal directions of the grid center points and the grid block vertices in this way yields more accurate and natural lighting: light sources and shadows in the scene produce correct projection and reflection, improving the realism and fidelity of the rendering.
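The forty-five-degree tilt can be sketched directly: a unit horizontal direction pointing away from the centre plus a unit vertical component, renormalized, gives exactly a forty-five-degree normal. The function shape is an assumption; an engine would bake these normals into the mesh asset.

```python
import numpy as np

def block_vertex_normals(verts, centre):
    # Centre normal points straight up (+Z); each rim vertex normal tilts
    # 45 degrees from the horizontal, fanning outward from the centre.
    up = np.array([0.0, 0.0, 1.0])
    normals = []
    for v in verts:
        out = np.array([v[0] - centre[0], v[1] - centre[1], 0.0])
        norm = np.linalg.norm(out)
        n = up if norm < 1e-8 else out / norm + up  # unit out + unit up
        normals.append(n / np.linalg.norm(n))       # 45 degrees off horizontal
    return np.array(normals), up                    # (vertex normals, centre normal)
```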
Optionally, after stacking the grid blocks according to the sequence numbers of their corresponding sequence frame images to obtain the stacking model and adjusting the stacking model to obtain the model to be rendered, the method further includes: traversing the grid blocks of the model to be rendered, projecting the sequence frame image on each traversed grid block onto the next layer of grid blocks in turn; and, when the traversed grid block is the last layer of the model to be rendered, projecting it onto the model below the model to be rendered.
This continuous superposition builds up a finer projected silhouette, ensuring that the cloud layer effect does not appear see-through from any viewing angle.
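One plausible reading of this projection pass, sketched below, is that each layer's alpha silhouette is unioned into the layer beneath it as the traversal walks down the stack, so lower layers accumulate the silhouettes above them. The compositing operator is an assumption, since the disclosure does not spell it out:

```python
import numpy as np

def project_silhouettes_down(layer_alphas):
    # layer_alphas: per-layer alpha images ordered from the first traversed
    # layer downward. Each layer receives the union of every silhouette
    # above it, so the stack cannot be seen through at grazing view angles.
    projected = [layer_alphas[0]]
    for alpha in layer_alphas[1:]:
        projected.append(np.maximum(projected[-1], alpha))
    return projected  # projected[-1] also falls onto the model below the stack
```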
Optionally, inputting the mapping information of the sequence frame map into the preset material expression node includes: storing the sequence frame map in a main texture node; taking the R channel value of the sequence frame map in the main texture node as the base and the preset transparency intensity as the exponent, and computing the result in a power node; and inputting the power result into the preset material expression node.
Inputting the power result into the preset material expression node allows the appearance and characteristics of the material to be customized. The material expression node may involve lighting models, shadow models, reflection models and the like, and adjusting the power result makes it possible to create diverse material effects as required.
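The power-node step reduces to a single expression; a sketch under the assumption that the atlas R channel is normalized to [0, 1], where an exponent greater than one thins faint texels faster than dense ones:

```python
def opacity_from_atlas(r_channel, transparency_intensity=2.0):
    # pow(base, exponent): base = R channel of the sequence frame map,
    # exponent = preset transparency intensity. Values in [0, 1] shrink
    # under exponents > 1, so wispy cloud edges fade out first.
    return r_channel ** transparency_intensity
```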
Optionally, before performing model rendering on the model to be rendered according to the adjusted cloud layer material, the method further includes: obtaining a preset distortion map, and perturbing the sequence frame map in the model to be rendered through the distortion map.
Adding the distortion map simulates the influence of wind speed on the cloud's moving shape, bringing the dynamic effect closer to how clouds move in real life.
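A sketch of one common way such a perturbation is applied, assuming the distortion map is sampled as an RG offset in [0, 1] and scrolled over time; the wind direction, strength, and wrap-around sampling are illustrative parameters, not values taken from the disclosure:

```python
import numpy as np

def distort_uv(uv, sample_distortion, strength=0.02, time=0.0, wind=(1.0, 0.0)):
    # sample_distortion(uv) -> (2,) RG value in [0, 1] from the distortion map.
    # The map is scrolled along the wind direction, recentred to [-0.5, 0.5],
    # scaled, and added to the atlas UVs to wobble the cloud silhouette.
    uv = np.asarray(uv, dtype=float)
    scroll = (uv + np.asarray(wind) * time) % 1.0
    offset = (np.asarray(sample_distortion(scroll)) - 0.5) * strength
    return (uv + offset) % 1.0
```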
The volume cloud rendering device 600 may also include one or more power supplies 640, one or more wired or wireless network interfaces 650, one or more input/output interfaces 660, and/or one or more operating systems 631, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. Those skilled in the art will appreciate that the volume cloud rendering device structure shown in Fig. 6 does not limit the volume cloud rendering device provided by the present disclosure, which may include more or fewer components than illustrated, combine certain components, or arrange components differently.
The present disclosure also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium, which may also be a volatile computer readable storage medium, having instructions stored therein which, when executed on a computer, cause the computer to perform the steps of:
Acquiring a preset cloud-like layer model, and generating a sequence frame map of the cloud-like layer model; generating a corresponding plane grid model according to the mapping information of the sequence frame map, and generating a model to be rendered based on the plane grid model and the sequence frame map; inputting the mapping information of the sequence frame map into a preset material expression node, applying the material expression node to a preset cloud layer material, and adjusting the mixing mode of the cloud layer material; and performing model rendering on the model to be rendered according to the adjusted cloud layer material to realize the volume cloud effect. Using the preset cloud-like layer model saves modeling time and resource consumption, and generating the sequence frame map of the cloud-like layer model converts the model into static texture images rather than computing the dynamic volume cloud effect in real time, so fewer computing resources and less memory are consumed.
Optionally, before obtaining the preset cloud-like layer model and generating the sequence frame map of the cloud-like layer model, the steps further include: acquiring a plurality of preset sphere models, and combining the sphere models to obtain a spherical combination model; and connecting the spherical combination model to a preset volume cloud container template, and generating a cloud-like layer model corresponding to the spherical combination model according to the volume cloud container template.
The cloud-like layer model is generated in advance rather than at render time, so fewer computing resources and less memory are consumed.
Optionally, after the spherical combination model is connected to the preset volume cloud container template and the cloud-like layer model corresponding to the spherical combination model is generated according to the volume cloud container template, the steps further include: generating a volume cloud noise image according to a preset volume cloud noise tool, and adding the volume cloud noise image into the volume cloud container template; and connecting the volume cloud container template to a preset cloud noise node, and adjusting the cloud-like layer model through the cloud noise node based on the volume cloud noise image in the volume cloud container template.
Generating the noise image makes it possible to simulate the random distribution and shape of the cloud layer, achieving a realistic volume cloud effect.
Optionally, obtaining the preset cloud-like layer model and generating the sequence frame map of the cloud-like layer model includes: acquiring the preset cloud-like layer model, where the number of images per row and the total number of slice layers of the sequence frame map are preset; transversely slicing the cloud-like layer model according to the total number of slice layers to obtain multiple layers of slice planes; shooting the slice planes with a preset camera to obtain multiple frames of sequence frame images; and placing the sequence frame images in order in a preset initial map according to the number of images per row, generating the sequence frame map of the cloud-like layer model.
In this way, the corresponding sequence frame map can be generated from the preset cloud-like layer model. The sequence frame map contains a plurality of images, each representing the appearance of the cloud-like layer model at a particular slice layer. Playing back these images simulates the dynamic effect of the cloud layer, increasing the fidelity and visual appeal of the scene.
Optionally, the sequence frame map includes multiple frames of sequence frame images, and the mapping information includes the numbers of rows and columns in which the sequence frame images are arranged in the sequence frame map; generating the corresponding plane grid model according to the mapping information of the sequence frame map includes: creating a plane model, and dividing the plane model into a grid according to the numbers of rows and columns of the sequence frame images in the sequence frame map to obtain the plane grid model.
In this way, the plane grid model and the sequence frame map can be combined to simulate complex animation effects on a plane, with the sequence frame map laid onto the plane grid model cell by cell.
Optionally, the sequence frame map includes multiple frames of sequence frame images; generating the model to be rendered based on the plane grid model and the sequence frame map includes: placing the sequence frame map on the plane grid model, and determining the grid corresponding to each sequence frame image in the plane grid model according to the position of that image within the sequence frame map; dividing the plane grid model, together with the sequence frame map on it, along the grid lines of the plane grid model to obtain a plurality of grid blocks; and stacking the grid blocks according to the sequence numbers of their corresponding sequence frame images to obtain a stacking model, then adjusting the stacking model to obtain the model to be rendered.
Dividing the sequence frame map into grid blocks and stacking them as needed allows memory and rendering resources to be managed and used effectively.
Optionally, stacking the grid blocks according to the sequence numbers of their corresponding sequence frame images to obtain the stacking model, and adjusting the stacking model to obtain the model to be rendered, includes: centering each grid block's position at zero, and stacking the grid blocks along the Z axis according to the sequence numbers of their corresponding sequence frame images to obtain the stacking model; determining the grid center point of each layer of grid blocks in the stacking model, and connecting each grid center point to the vertices of its grid block; and lifting the grid center point of each layer of grid blocks in the stacking model along the Z axis to obtain the model to be rendered.
Centering the grid blocks at zero ensures that their positions are accurate and consistent when stacked. This matters in animation and game scenes where object positions must be controlled precisely, and it avoids positional offset or misalignment.
Optionally, after lifting the grid center point of each layer of grid blocks in the stacking model along the Z axis to obtain the model to be rendered, the steps further include: adjusting the normal direction of each grid center point to point along the Z axis, and adjusting the normal directions of the grid block's vertices to tilt forty-five degrees from the horizontal and fan outward.
Adjusting the normal directions of the grid center points and the grid block vertices in this way yields more accurate and natural lighting: light sources and shadows in the scene produce correct projection and reflection, improving the realism and fidelity of the rendering.
Optionally, after stacking the grid blocks according to the sequence numbers of their corresponding sequence frame images to obtain the stacking model and adjusting the stacking model to obtain the model to be rendered, the steps further include: traversing the grid blocks of the model to be rendered, projecting the sequence frame image on each traversed grid block onto the next layer of grid blocks in turn; and, when the traversed grid block is the last layer of the model to be rendered, projecting it onto the model below the model to be rendered.
This continuous superposition builds up a finer projected silhouette, ensuring that the cloud layer effect does not appear see-through from any viewing angle.
Optionally, inputting the mapping information of the sequence frame map into the preset material expression node includes: storing the sequence frame map in a main texture node; taking the R channel value of the sequence frame map in the main texture node as the base and the preset transparency intensity as the exponent, and computing the result in a power node; and inputting the power result into the preset material expression node.
Inputting the power result into the preset material expression node allows the appearance and characteristics of the material to be customized. The material expression node may involve lighting models, shadow models, reflection models and the like, and adjusting the power result makes it possible to create diverse material effects as required.
Optionally, before performing model rendering on the model to be rendered according to the adjusted cloud layer material, the steps further include: obtaining a preset distortion map, and perturbing the sequence frame map in the model to be rendered through the distortion map.
Adding the distortion map simulates the influence of wind speed on the cloud's moving shape, bringing the dynamic effect closer to how clouds move in real life.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are merely intended to illustrate the technical solutions of the present disclosure, not to limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.

Claims (14)

1. A method of volume cloud rendering, the method comprising:
Acquiring a preset cloud-like layer model, and generating a sequence frame map of the cloud-like layer model;
Generating a corresponding plane grid model according to the mapping information of the sequence frame mapping, and generating a model to be rendered based on the plane grid model and the sequence frame mapping;
Inputting the mapping information of the sequence frame mapping into a preset material expression node, applying the material expression node to a preset cloud layer material, and adjusting the mixing mode of the cloud layer material;
and performing model rendering on the model to be rendered according to the adjusted cloud layer material, so as to realize a volume cloud effect.
2. The method of claim 1, further comprising, before the obtaining the preset cloud-like layer model and generating the sequence frame map of the cloud-like layer model:
Acquiring a plurality of preset sphere models, and combining the sphere models to obtain a spherical combination model;
And connecting the spherical combination model to a preset volume cloud container template, and generating a cloud-like layer model corresponding to the spherical combination model according to the volume cloud container template.
3. The method of volume cloud rendering according to claim 2, further comprising, after the connecting the spherical combination model to a preset volume cloud container template and generating a cloud-like layer model corresponding to the spherical combination model according to the volume cloud container template:
generating a volume cloud noise image according to a preset volume cloud noise tool, and adding the volume cloud noise image into the volume cloud container template;
and connecting the volume cloud container template to a preset cloud noise node, and adjusting the cloud-like layer model through the cloud noise node based on the volume cloud noise image in the volume cloud container template.
4. The method of claim 1, wherein the obtaining a preset cloud-like layer model and generating a sequence frame map of the cloud-like layer model comprises:
acquiring a preset cloud-like layer model, wherein the number of images per row and the total number of slice layers of the sequence frame map are preset;
transversely slicing the cloud-like layer model according to the total number of slice layers to obtain multiple layers of slice planes;
Shooting the multiple layers of slice planes through a preset camera to obtain multiple frames of sequence frame images;
And placing the multiple frames of sequence frame images in order in a preset initial map according to the number of images per row, to generate the sequence frame map of the cloud-like layer model.
5. The method according to claim 1, wherein the sequence frame map comprises multiple frames of sequence frame images, and the mapping information comprises the numbers of rows and columns in which the multiple frames of sequence frame images are arranged in the sequence frame map;
the generating a corresponding plane grid model according to the mapping information of the sequence frame map comprises the following steps:
creating a plane model, and dividing the plane model into a grid according to the numbers of rows and columns of the multiple frames of sequence frame images in the sequence frame map, to obtain a plane grid model.
6. The method of claim 1, wherein the sequence frame map comprises a plurality of frames of sequence frame images; the generating a model to be rendered based on the planar mesh model and the sequence frame map includes:
Placing the sequence frame map on a plane grid model, and determining the grid corresponding to each sequence frame image in the plane grid model according to the position of each sequence frame image within the sequence frame map placed on the plane grid model;
Dividing the plane grid model, together with the sequence frame map on it, along the grid lines of the plane grid model to obtain a plurality of grid blocks;
Stacking the grid blocks according to the sequence numbers of the corresponding sequence frame images to obtain a stacking model, and adjusting the stacking model to obtain a model to be rendered.
7. The method of volume cloud rendering according to claim 6, wherein stacking the plurality of grid blocks according to the sequence numbers of the corresponding sequential frame images to obtain a stack model, and adjusting the stack model to obtain a model to be rendered includes:
centering each grid block's position at zero, and stacking the grid blocks along a Z axis according to the sequence numbers of the corresponding sequence frame images to obtain a stacking model, wherein the Z axis is the axis in the vertical direction of the three-dimensional space in which the plane grid model is located;
Determining a grid center point of each layer of grid block in the stacking model, and connecting the grid center points with vertexes of the corresponding grid blocks;
And lifting the grid center point of each layer of grid blocks in the stacking model along the Z-axis direction to obtain a model to be rendered.
8. The method of volume cloud rendering according to claim 7, further comprising, after the lifting the grid center point of each layer of grid blocks in the stacking model along the Z-axis direction to obtain a model to be rendered:
adjusting the normal direction of the grid center point to point along the Z-axis direction, and adjusting the normal directions of the vertices of the grid block to fan outward along a direction tilted forty-five degrees from the horizontal.
9. The method of volume cloud rendering according to claim 6, further comprising, after the stacking the plurality of grid blocks according to the sequence numbers of the corresponding sequence frame images to obtain a stacking model and adjusting the stacking model to obtain a model to be rendered:
Traversing a plurality of grid blocks of the model to be rendered, and projecting the sequence frame image on each traversed grid block onto the next layer of grid blocks in turn;
And when the traversed grid block is the last layer of grid blocks of the model to be rendered, projecting the traversed grid block onto the model below the model to be rendered.
10. The method of claim 1, wherein the inputting the mapping information of the sequence frame map into a preset material expression node comprises:
storing the sequence frame map in a main texture node;
taking the R channel value of the sequence frame map in the main texture node as a base and the preset transparency intensity as an exponent, and computing the result in a power node;
Inputting the power result into the preset material expression node.
11. The method according to claim 1, further comprising, before the performing model rendering on the model to be rendered according to the adjusted cloud layer material to realize the volume cloud effect:
and obtaining a preset distortion map, and performing disturbance processing on the sequence frame map in the model to be rendered through the distortion map.
12. A volume cloud rendering device, the volume cloud rendering device comprising:
The mapping generation module is used for acquiring a preset cloud-like layer model and generating a sequence frame mapping of the cloud-like layer model;
The rendering model generation module is used for generating a corresponding plane grid model according to the mapping information of the sequence frame mapping, and generating a model to be rendered based on the plane grid model and the sequence frame mapping;
the material determining module is used for inputting the mapping information of the sequence frame mapping into a preset material expression node, applying the material expression node to a preset cloud layer material and adjusting the mixing mode of the cloud layer material;
And the rendering module is used for performing model rendering on the model to be rendered according to the adjusted cloud layer material, so as to realize a volume cloud effect.
13. A volume cloud rendering device, the volume cloud rendering device comprising: a memory and at least one processor, the memory having instructions stored therein;
The at least one processor invokes the instructions in the memory to cause the volume cloud rendering device to perform the steps of the volume cloud rendering method of any of claims 1-11.
14. A computer readable storage medium having instructions stored thereon, which when executed by a processor, implement the steps of the volume cloud rendering method of any of claims 1-11.