CN114254501A - Large-scale grassland rendering and simulating method - Google Patents
- Publication number
- CN114254501A CN114254501A CN202111526989.6A CN202111526989A CN114254501A CN 114254501 A CN114254501 A CN 114254501A CN 202111526989 A CN202111526989 A CN 202111526989A CN 114254501 A CN114254501 A CN 114254501A
- Authority
- CN
- China
- Prior art keywords
- grass
- grassland
- representing
- vertex
- coordinates
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F30/20—Design optimisation, verification or simulation
- G06F30/23—Design optimisation, verification or simulation using finite element methods [FEM] or finite difference methods [FDM]
- G06T15/005—General purpose rendering architectures
- G06T15/205—Image-based rendering
- G06T15/506—Illumination models
- G06T17/205—Re-meshing
Abstract
The invention relates to the technical field of computer graphics and discloses a large-scale grassland rendering and simulation method comprising the following steps: step 1, generating a grassland model, wherein the grassland model comprises at least one grass block, a plurality of random seeds are arranged in each grass block, and the grassland model is randomized through the random seeds to generate grass-blade data and bones; step 2, simulating the randomized grassland model according to the grass-blade data and bones to obtain a dynamic grassland model; step 3, culling the grassland model using a grass-block-based management method; and step 4, efficiently rendering the culled grassland model using tessellation (surface subdivision). The method has low computational requirements and a highly parallel algorithm, and performs well for large-scale grassland rendering.
Description
Technical Field
The invention relates to the technical field of computer graphics, in particular to a large-scale grassland rendering and simulating method.
Background
Modeling and rendering of large-scale natural landscapes is an important research subject, widely applied in virtual reality, flight simulation, battlefield simulation, cartography, video games, ecosystem simulation, and other fields. Grassland rendering is an important component of large-scale natural-scene rendering, and its modeling, representation, and real-time rendering have long been hot topics in computer graphics.
With the advance of research across disciplines, and especially the rapid development of computer graphics, virtual grass is widely used in scenes such as pastures, forests, sports fields, and virtual battlefields. For many of these scenes, the realism of the whole scene depends to a great extent on the rendering quality of the grassland. High-quality large-scale grassland rendering and simulation technology is therefore particularly important; even a small flaw can leave a strong negative impression on viewers.
The mainstream existing grassland simulation schemes either use the billboard technique to represent grass blades, perturbing them with a noise map to achieve lighting and wind interaction and perturbing vertices in the vertex shader with a displacement map for the trampling effect; or, more precisely, use physically based computation to calculate the motion of interacting blades in real time. For example, one approach combines hardware instancing with physical simulation, using hardware instancing to render blades that temporarily need no simulation and physically modeling only the blades that require interaction; another represents blades with control points and completes the vertex data in real time with tessellation, or generates and controls vertex positions in real time with a geometry shader. However, the first method is procedural animation and lacks realism; the second has a convincing collision interaction effect, but its wind simulation is still procedural, and the efficiency of both the second and third methods is too low.
Therefore, for the key problems of scene management, culling, and physically based animation of large-scale grass blades, current grassland rendering and simulation methods suffer from either insufficient simulation accuracy or low efficiency.
Disclosure of Invention
The invention aims to provide a large-scale grassland rendering and simulation method that improves how grass blades respond to collisions and wind fields as well as the speed of rendering and culling, thereby improving both the realism and the efficiency of grassland simulation.
The invention is realized by the following technical scheme:
a large-scale grassland rendering and simulating method comprises the following steps:
step 1, generating a grassland model based on a triangular mesh, wherein the grassland model comprises at least one grass block, a plurality of random seeds are arranged in each grass block, and the grassland model is randomized through the random seeds to generate grass-blade data and bones;
step 2, simulating the randomized grassland model according to the grass-blade data and bones to obtain a dynamic grassland model;
step 3, culling the grassland model using a grass-block-based management method;
and step 4, efficiently rendering the culled grassland model using tessellation (surface subdivision).
As an optimization, the specific implementation method of step 1 is as follows:
step 1.1, defining the scale of the grassland based on the triangular mesh: the grass blocks are divided at a granularity of 1×1 in the world coordinate system; each grass block contains a plurality of randomly generated random coordinates; the random coordinates of each grass block are stored together with a directory for the block, where the directory is the index, in the vertex buffer, of the first vertex of that grass block, and a vertex is a point on a grass blade of the block that needs to be rendered;
step 1.2, calculating a random number for the randomized grassland model: the random number of the grassland model is computed from the random coordinates, also called grass-root coordinates, which serve as the random seeds; here the seed is the random coordinate, i.e., the grass-root coordinate of a blade, and rand denotes the random number generated from that seed;
step 1.3, generating vertices, indices, normals, and UVs: taking the grass-root coordinate as the base point, the blade's skeleton line is expanded to both sides of the grass-root coordinate to generate vertices, indices, normals, and UV coordinates, and the vertices are randomized with the random number. In the vertex calculation, v denotes a vertex; segment_i denotes the i-th grass segment, a grass segment being one section of a blade; the rotation axis provides a rotational reference for the vertex; M denotes a rotation of θ degrees about that axis; and x, y, z are the coordinates of the rotation axis;
step 1.4, generating the skeleton: taking the grass-root position as the base point, several PBD bone nodes are generated along the blade's central axis to serve as the skeleton, and the grass-blade data corresponding to each PBD bone node is defined.
As an optimization, the grass-blade data includes the current coordinates of the bones, the predicted coordinates of the bones, the moving speed of the bones, the initial positions of the bones, the outward expansion vectors of the vertices, a vertex directory, a bone directory, an absolute position constraint, distance constraints, and the mass of the bones.
As an optimization, there are 4 PBD bone nodes.
As an optimization, the specific implementation method of step 2 is as follows:
step 2.1, packaging the grass-blade data and the grass blocks and uploading them to the graphics processing unit (GPU), storing each item of the grass-blade and grass-block data in its own compute buffer (ComputeBuffer), and running a compute shader that divides the grass blocks into two groups according to their distance from the camera;
the distance dist between a grass block and the camera is calculated as
dist = ‖grids_i − p_cam‖,
where grids_i denotes the center coordinate of the i-th grass block and p_cam denotes the camera coordinate;
step 2.2, according to the two groups of grass blocks obtained in step 2.1, calculating the influence of wind on the velocity of each blade's skeleton with different iteration counts, wherein the wind field is stored in a noise map and the action of the wind is represented by scrolling the noise map's UV over time;
in the calculation of the wind's influence on each blade's bone velocities, g is gravity, friction is the friction coefficient, p_0 is the initial position of the bone, p_pred is the predicted position of the bone, windNoise.x and windNoise.z are the color values sampled for the bone from the noise map, and Δt is the running time step, a fixed value;
step 2.3, updating the predicted coordinate of each bone according to the bone velocities calculated in step 2.2, the predicted coordinate being
p_pred = p_cur + v·Δt,
where p_pred and p_cur denote the predicted and current positions of the bone, v is the bone's velocity, and Δt is the time step;
step 2.4, packing the collision bodies and uploading them to the GPU, each collider being treated as a sphere whose information comprises its center and radius; a sphere collision is evaluated for each bone, and if a bone lies inside a collider, the bone is moved to the intersection of the collider's sphere surface with the ray from the collider's center through the bone;
step 2.5, evaluating the distance constraints and the absolute position constraint, wherein the distance constraints act between the bones of the same grass blade and the absolute position constraint fixes the coordinate of the blade's root bone; in the distance-constraint calculation, E denotes the elastic coefficient, R the elastic limit of the spring, v_{i−1}^t and v_i^t the velocities at time t of the (i−1)-th and i-th bones under the distance constraint, and v_{i−1}^{t−1} and v_i^{t−1} their velocities at time t−1;
step 2.6, updating the current velocity of each bone from its predicted and current coordinates as
v = (p_pred − p_cur) / Δt;
step 2.7, setting the current coordinate of each bone to its predicted coordinate;
and step 2.8, in a compute shader, updating the coordinates and normals of the vertices of the blade triangle mesh according to the computed predicted bone coordinates and the outward expansion vectors of the vertices.
As an optimization, the specific implementation method of step 3 is as follows:
step 3.1, uploading the grass blocks to be rendered to a compute shader;
step 3.2, distance culling of the grass blocks: the distance between each grass block's center coordinate and the camera coordinate is evaluated, and grass blocks too far from the camera are culled;
step 3.3, frustum culling of the grass blocks that pass step 3.2: the centers of the grass blocks are transformed to clip space, the positional relation between each block's center coordinate and the screen is evaluated, and grass blocks outside the camera frustum are culled;
the positional relation between a grass block's center coordinate and the screen is computed as
p_clip = VP · p_center,
where VP is the product of the view matrix, which transforms vertices from model space to view space, and the projection matrix, which transforms vertices from view space to clip space; p_center denotes the center coordinate of the grass block; and p_clip denotes that center coordinate in clip space;
step 3.4, occlusion culling of the grass blocks that pass step 3.3: the depth image of the previous frame is stored via post-processing and then downsampled; the projected size and screen coordinates of each grass block are obtained from the frustum's FOV; the mip level of the depth image to sample is derived from the block's projected size; whether the grass is occluded is then judged, and occluded grass is culled;
in the mip-level calculation, p_ndc denotes the grass block's center coordinate in NDC space, p_clip its center coordinate in clip space, FOV the half field of view of the frustum, and mip the mip level of the depth map that needs to be sampled;
and step 3.5, adding the grass blocks that survive the culling of steps 3.2 to 3.4 to the visible grass-block set for rendering.
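The distance and frustum tests of steps 3.2 and 3.3 can be sketched on the CPU as follows. This is a minimal illustration, not the patent's compute-shader implementation: the helper names (`cull_blocks`, `view_proj`), the point treatment of the 1×1 blocks, and the D3D-style clip-space convention (z in [0, w]) are assumptions.

```python
import math

def cull_blocks(centers, view_proj, cam_pos, max_dist):
    """Return indices of grass blocks surviving distance + frustum tests.
    view_proj is a 4x4 row-major matrix (list of 4 lists); blocks are
    treated as points here for brevity."""
    def mul(m, v):
        return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]
    visible = []
    for i, c in enumerate(centers):
        # step 3.2: distance culling against the camera position
        if math.dist(c, cam_pos) > max_dist:
            continue
        # step 3.3: transform the block center to clip space, p_clip = VP * p
        x, y, z, w = mul(view_proj, [c[0], c[1], c[2], 1.0])
        # inside the frustum iff -w <= x, y <= w and 0 <= z <= w (D3D convention)
        if -w <= x <= w and -w <= y <= w and 0.0 <= z <= w:
            visible.append(i)
    return visible
```

In the patent the same math runs per grass block in a compute shader, so the CPU never touches individual blocks.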
As an optimization, judging in step 3.4 whether the grass is occluded specifically comprises: sampling at the computed UV and mip level; if the depth values of the pixels above, below, left, and right of the pixel corresponding to the grass block's position in the depth map are all larger than the grass block's z value in normalized device coordinate space, the grass is considered occluded.
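That four-neighbour depth comparison can be sketched as below. The function name, the 2-D list layout of the mip level, and the "larger depth = farther" (non-reversed-Z) convention are assumptions; the patent performs this test in a shader against the previous frame's downsampled depth.

```python
def is_occluded(depth_mip, uv, block_ndc_z):
    """depth_mip: 2-D list (rows) of one mip level of last frame's depth map.
    Per the claim, the block counts as occluded when the depth values above,
    below, left and right of its pixel all exceed the block's NDC z value."""
    h, w = len(depth_mip), len(depth_mip[0])
    x = min(int(uv[0] * w), w - 1)
    y = min(int(uv[1] * h), h - 1)
    for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
        nx = max(0, min(nx, w - 1))
        ny = max(0, min(ny, h - 1))
        if depth_mip[ny][nx] <= block_ndc_z:
            return False        # a neighbour is not deeper: still visible
    return True
```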
As an optimization, the specific implementation method of step 4 is as follows:
step 4.1, determining the hardware-instanced rendering parameters from the visible grass-block set, packing the vertices (the vertices of the whole grassland model), indices, blade normals, and blade UV information into arrays, and uploading the arrays to the GPU in a single transfer;
step 4.2, in the vertex shader, determining the vertex information of the grass block to be rendered from the corresponding SV_VertexID and SV_InstanceID together with the grass block's directory, transforming it from model space to clip space, and transforming the blade's vertex coordinates, normal, and UV information at the same time;
step 4.3, judging from the grass segment's UV whether a vertex needs tessellation, and bending the subdivided vertices with a sine function. In the calculation, b_0 and b_2 denote the first and third tessellation control points, n denotes the vertex normal, and uv denotes the vertex UV; p_bent denotes the vertex coordinate obtained by offsetting a subdivided vertex to produce the bending effect, and p denotes the tessellated but unbent vertex coordinate. The most basic job of tessellation is to increase the number of vertices so that the model is more refined; if the vertices were not offset, all of them would be generated inside the original triangle;
and step 4.4, rendering the tessellated grass blades with a pixel shader, the lighting model used for the blades being the Lambert lighting model.
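The Lambert model named in step 4.4 is the standard diffuse N·L term; a minimal reference sketch (the albedo and light values in the usage are purely illustrative):

```python
def lambert(albedo, normal, light_dir, light_color):
    """Classic Lambert diffuse: albedo * lightColor * max(0, N.L)."""
    def norm(v):
        m = sum(x * x for x in v) ** 0.5
        return [x / m for x in v]
    n, l = norm(normal), norm(light_dir)
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    return [a * c * ndotl for a, c in zip(albedo, light_color)]
```

A blade facing away from the light (N·L ≤ 0) simply shades to black, which is why Lambert suits cheap, massively instanced grass.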
As an optimization, in step 3.1, each grass block to be rendered is treated as a sphere with a radius of 0.5.
As an optimization, in step 4.3, the criterion for judging from the grass segment's UV whether a vertex needs tessellation is: a vertex is tessellated when the y-component of its grass-segment UV lies between 0.3333 and 0.6666.
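The UV-band test above, together with a sine-based bend of the new vertices, can be sketched as follows. The band check follows the claim directly; the bend function is only a plausible reading (offset along the normal, growing with the sine of the UV height), not the patent's exact control-point formula.

```python
import math

def needs_tessellation(uv_y):
    # per the claim: only the middle UV band of the three grass segments is subdivided
    return 0.3333 <= uv_y <= 0.6666

def bend_vertex(p, normal, uv_y, amplitude=0.1):
    # hypothetical sine offset along the normal; without an offset all
    # tessellated vertices would stay inside the original triangle
    return [pi + ni * amplitude * math.sin(math.pi * uv_y)
            for pi, ni in zip(p, normal)]
```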
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The invention adopts a PBD-based physical simulation algorithm with low computational requirements and high parallelism, performing well for large-scale grassland rendering.
2. The invention uses a grass-block-based management method: the computation granularity is coarsened from individual grass blades to grass blocks, greatly reducing the number of operations while providing a considerable culling effect; the whole grass-block management process runs on the GPU, relieving pressure on the CPU and achieving higher efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the exemplary embodiments of the present invention, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and that for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort. In the drawings:
FIG. 1 is a flow chart of a large scale grass rendering and simulation method according to the present invention;
FIG. 2 is a diagram illustrating the definition of grass leaf shape in a large scale grass rendering and simulation method according to the present invention;
FIG. 3 is a schematic diagram illustrating grass leaf tessellation in a large-scale grass rendering and simulation method according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
Examples
As shown in fig. 1, a large scale grassland rendering and simulation method includes the following steps:
step 1, generating a grassland model based on a triangular mesh, wherein the grassland model comprises at least one grass block, a plurality of random seeds are arranged in each grass block, and the grassland model is randomized through the random seeds to generate grass-blade data and bones;
step 2, simulating the randomized grassland model with a PBD-based physical simulation algorithm according to the grass-blade data and bones to obtain a dynamic grassland model;
step 3, culling the grassland model using a grass-block-based management method;
and step 4, efficiently rendering the culled grassland model using tessellation.
In this embodiment, the specific implementation method of step 1 is as follows:
step 1.1, defining the scale of the grassland based on the triangular mesh: the grass blocks are divided at a granularity of 1×1 in the world coordinate system; each grass block contains a plurality of randomly generated random coordinates; the random coordinates of each grass block are stored together with a directory for the block, where the directory is the index, in the vertex buffer, of the first vertex of that grass block, and a vertex is a point on a grass blade of the block that needs to be rendered. The vertex buffer is a hardware buffer holding the data the CPU needs to submit to the graphics card; the grass block's directory maps into this vertex buffer, the block's indices are stored in the index buffer, and the index buffer specifies how the vertices in the vertex buffer are connected.
Step 1.2, calculating a random number for the randomized grassland model: the random number is computed from the random coordinates, i.e., the grass-root coordinates used as random seeds, and can be used to randomize the blade shape, making the grassland more lifelike; here the seed is the random coordinate, i.e., the grass-root coordinate of a blade, and rand denotes the random number generated from that seed;
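The patent's actual random-number formula is not reproduced in this text; as a stand-in, the well-known GPU hash pattern fract(sin(dot(p, k)) · 43758.5453) over the grass-root coordinate gives the idea of a deterministic per-blade random number. Both the constants and the function name are assumptions.

```python
import math

def rand_from_seed(seed_xz):
    """Pseudo-random number in [0, 1) from a 2-D grass-root coordinate.

    Uses the classic fract(sin(dot(p, k)) * 43758.5453) shader hash as a
    stand-in; the patent's exact formula is not shown in the source text."""
    d = seed_xz[0] * 12.9898 + seed_xz[1] * 78.233
    v = math.sin(d) * 43758.5453
    return v - math.floor(v)
```

Because the hash is a pure function of the root coordinate, every frame (and every GPU thread) regenerates identical per-blade randomness with no stored state.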
Step 1.3, generating vertices, indices, normals, and UVs: taking the grass-root coordinate as the base point, the blade's skeleton line is expanded to both sides of the y-axis through the grass-root coordinate to generate vertices, indices, and normals. The UV expresses the mapping of the texture onto a point inside a triangle; the normal is perpendicular to the triangle and is used for lighting; the indices are necessary for generating the grassland mesh. "Expanding" means extending outward to both sides of the blade's skeleton line, which, as shown in FIG. 2, is the axis from the blade tip to the grass-root coordinate and can be understood as line segments joined between adjacent bones. The extent of the expansion gives the blade's width, which shrinks layer by layer for each grass segment from bottom to top, and the topmost point is a single vertex. A grass segment is one section of a blade; as shown in FIG. 2, one blade has 7 vertices, which are randomized with the random number. In the vertex calculation, v denotes a vertex and segment_i denotes the i-th grass segment; in this embodiment a blade is divided into 3 segments, one per layer, the lower two being trapezoids formed by joining two triangles and the topmost being a single triangle. There are two rotation axes: a vertex is randomly rotated about the y-axis and about the line through the two grass-root vertices; M denotes a rotation of θ degrees about the axis, and x, y, z are the coordinates of the rotation axis.
Step 1.4, generating the skeleton: taking the grass-root position as the base point, several PBD bone nodes are generated along the blade's central axis to serve as its skeleton, and the grass-blade data corresponding to each PBD bone node is defined. The skeleton can be generated by storing an array in order. In this embodiment there are 4 PBD bone nodes, i.e., one blade contains 4 PBD bone nodes and 3 grass segments. The data of all grass blades is stored in arrays, classified by rules in a manner conventional to those skilled in the art and not repeated here.
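Steps 1.3 and 1.4 can be sketched together as below: 4 bone nodes along the blade axis from the root, then a pair of vertices per level whose half-width shrinks toward the tip, ending in the single tip vertex of FIG. 2 (7 vertices in total). The even bone spacing and linear width falloff are assumptions; the rotation randomization is omitted for brevity.

```python
def make_blade(root, height=1.0, base_width=0.1, n_bones=4):
    """Generate PBD bone nodes along the blade's central axis plus the
    7 mesh vertices of Fig. 2 (a pair per level, a single tip vertex)."""
    bones = [(root[0], root[1] + height * i / (n_bones - 1), root[2])
             for i in range(n_bones)]
    verts = []
    for i, (bx, by, bz) in enumerate(bones[:-1]):
        half = 0.5 * base_width * (1.0 - i / (n_bones - 1))  # narrower upward
        verts.append((bx - half, by, bz))
        verts.append((bx + half, by, bz))
    verts.append(bones[-1])                                   # single tip vertex
    return bones, verts
```

With 4 bones this yields 3 grass segments: two trapezoids and a top triangle, matching the embodiment.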
In this embodiment, the grass-blade data includes the current coordinates of the bones, the predicted coordinates of the bones, the moving speed of the bones, the initial positions of the bones, the outward expansion vectors of the vertices, a vertex directory, a bone directory, an absolute position constraint, distance constraints, and the mass of the bones.
Here, the initial predicted coordinates of the bones equal their initial positions, which are known from step 1.4. The outward expansion vectors of the vertices are vectors expressing the positions to which vertices are expanded on both sides of the blade's skeleton line. The vertex directory is the index of the blade's first vertex in the vertex buffer, and the bone directory is the index of the blade's first bone in the bone array.
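Collected into one structure, the per-blade data listed above might look like the sketch below. The field names are assumptions; on the GPU each field actually lives in its own compute buffer rather than in an interleaved struct.

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class BladeData:
    bone_pos: List[Vec3]        # current bone coordinates (4 PBD nodes)
    bone_pred: List[Vec3]       # predicted bone coordinates
    bone_vel: List[Vec3]        # bone velocities
    bone_rest: List[Vec3]       # initial bone positions
    expand_vec: List[Vec3]      # per-vertex outward expansion vectors
    vertex_dir: int             # index of the blade's first vertex in the vertex buffer
    bone_dir: int               # index of the blade's first bone in the bone array
    root_pos: Vec3              # absolute position constraint (root bone coordinate)
    rest_lengths: List[float]   # distance constraints between adjacent bones
    mass: float                 # bone mass
```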
In this embodiment, the specific implementation method of step 2 is as follows:
Step 2.1, packaging the grass-blade data and grass blocks and uploading them to the graphics processing unit (GPU), storing each item of the data in its own compute buffer (ComputeBuffer), and running a compute shader that divides the grass blocks into two groups according to their distance from the camera.
The compute buffers comprise the index buffer, the vertex buffer, and a buffer storing the bone arrays. The camera is the player's viewpoint, and the distance from the camera is the straight-line distance between a grass block's center coordinate and the camera coordinate. The center coordinate of a grass block is known when the block is created and can be calculated by those skilled in the art.
The distance dist between a grass block and the camera is calculated as
dist = ‖grids_i − p_cam‖,
where grids_i denotes the center coordinate of the i-th grass block and p_cam denotes the camera coordinate;
Step 2.2, according to the two groups of grass blocks obtained in step 2.1, the influence of wind on the velocity of each blade's skeleton is computed with different iteration counts. Running the physical computation with different iteration counts means that for distant blades, where poor simulation precision is hard to perceive, much less compute is spent, greatly improving throughput. The wind field is stored in a noise map, and its action is represented by scrolling the noise map's UV over time (the UV maps the texture onto a point inside a triangle). The noise map is a picture whose per-pixel RGB components encode the wind direction; only the wind in the x and z directions is considered, the y-axis being ignored. The iteration count is set by each grass block's distance from the camera: many iterations near the camera, few far away. Through this wind design, the image of the grass blades in a real-time interactive three-dimensional application becomes dynamic.
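The near/far grouping and the time-scrolled noise lookup can be sketched as below. The distance threshold, scroll speeds, and the callable-as-texture stand-in are all illustrative assumptions; in the patent the grouping and sampling run in a compute shader.

```python
def group_blocks(dists, near_threshold=20.0):
    """Split grass blocks into a near group (more PBD iterations) and a far
    group (fewer), per step 2.1; the threshold is illustrative."""
    near = [i for i, d in enumerate(dists) if d <= near_threshold]
    far = [i for i, d in enumerate(dists) if d > near_threshold]
    return near, far

def sample_wind(noise, uv, t, scroll=(0.05, 0.03)):
    """Sample the xz wind direction from a noise map whose UV scrolls with
    time t; `noise` maps a wrapped (u, v) to an (r, g, b) texel."""
    u = (uv[0] + scroll[0] * t) % 1.0
    v = (uv[1] + scroll[1] * t) % 1.0
    r, g, b = noise(u, v)
    return (r, b)   # only the x and z wind components are used; y is ignored
```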
First, the effect of gravity and friction on the velocity of each blade's bones is computed (the initial velocity is 0); here g is gravity, friction is the friction coefficient, p_0 is the initial position of the bone, p_pred is the predicted position of the bone, and Δt is the running time step, a fixed value that may be 0.02 s;
the influence of the wind force on the speed of the top bone of the grass blades is then calculated as follows:
where WindNoiseX and WindNoiseZ are the color values corresponding to the bone in the noise map, and Δt is the running time, a fixed value;
in summary, the influence of wind on the speed of the skeleton of each of the blades is calculated as follows:
step 2.3, updating the predicted coordinate of each bone according to the bone velocities calculated in step 2.2, the predicted coordinate being computed as p' = p + v·Δt,
where p' and p respectively represent the predicted and current positions of the bone, v is the velocity of the bone, and Δt is the running time;
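Steps 2.2–2.3 together form one position-based-dynamics integration step. A minimal sketch, assuming the common form of this update (apply gravity damped by the friction coefficient, add the noise-map wind on the xz components only, then predict p' = p + v·Δt) — the exact formulas are images in the source, so this combination is an assumption:

```python
def integrate_bone(pos, vel, dt=0.02, gravity=(0.0, -9.8, 0.0),
                   friction=0.98, wind=(0.0, 0.0)):
    """One hedged PBD-style step for a grass bone: damp gravity-updated
    velocity by `friction`, add wind (WindNoiseX/WindNoiseZ act on xz),
    then predict the new position p' = p + v*dt."""
    vx = (vel[0] + gravity[0] * dt) * friction + wind[0] * dt
    vy = (vel[1] + gravity[1] * dt) * friction
    vz = (vel[2] + gravity[2] * dt) * friction + wind[1] * dt
    pred = (pos[0] + vx * dt, pos[1] + vy * dt, pos[2] + vz * dt)
    return (vx, vy, vz), pred
```

The patent runs this per bone in a compute shader, with the iteration count chosen per distance group.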
step 2.4, packing collider data and transmitting it into the GPU. Each collider is treated as a sphere whose information comprises a sphere center and a radius. A sphere collision is computed for each bone: if a bone lies inside a collider, the bone is moved to the intersection of the collider's sphere with the extension line from the sphere center through the bone. The collider's role is to simulate the interaction of an external object with the grass, and moving the bone to the intersection simulates the effect of the collision.
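The sphere-collision resolution described above can be sketched directly; the handling of a bone exactly at the sphere center is degenerate and the choice below is arbitrary:

```python
import math

def resolve_sphere_collision(bone, center, radius):
    """If the bone lies inside the collider sphere, push it out to the
    surface along the ray from the sphere center through the bone."""
    d = [bone[i] - center[i] for i in range(3)]
    dist = math.sqrt(sum(c * c for c in d))
    if dist >= radius:
        return bone                      # outside the collider: untouched
    if dist == 0.0:
        return (center[0], center[1] + radius, center[2])  # degenerate case
    s = radius / dist                    # scale the offset to the surface
    return tuple(center[i] + d[i] * s for i in range(3))
```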
Step 2.5, calculating the distance constraints and the absolute position constraint. The distance constraints act between the bones of the same grass blade; the absolute position constraint fixes the coordinate of the blade's root bone, i.e. keeps that bone stationary. The distance constraint is calculated as follows:
where E is the elastic coefficient, R is the elastic limit of the spring, v_{i-1}^t and v_i^t represent the velocities at time t of the (i-1)-th and i-th bones under the distance constraint, and v_i^{t-1} and v_{i-1}^{t-1} represent the corresponding velocities at time t-1; the order of i-1 and i here is the order of the bones in the array.
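The source states the constraint in a spring/velocity form (with E and R) whose equation image is not reproduced here, so the sketch below uses the classic position-based distance-constraint projection as an illustrative stand-in, not the patent's exact formula. Setting an inverse mass to zero pins a bone, which also models the absolute position constraint on the root bone:

```python
import math

def project_distance_constraint(p_a, p_b, rest_len, w_a=1.0, w_b=1.0):
    """Classic PBD distance-constraint projection between two bones.
    w_a/w_b are inverse masses; w = 0 pins a bone in place."""
    d = [p_b[i] - p_a[i] for i in range(3)]
    dist = math.sqrt(sum(c * c for c in d)) or 1e-9   # avoid divide-by-zero
    corr = (dist - rest_len) / (dist * (w_a + w_b))
    new_a = tuple(p_a[i] + w_a * corr * d[i] for i in range(3))
    new_b = tuple(p_b[i] - w_b * corr * d[i] for i in range(3))
    return new_a, new_b
```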
And 2.6, updating the velocity of the current bone from its predicted and current coordinates, in the following way: v = (p' - p)/Δt;
step 2.7, changing the current coordinate of the skeleton into a prediction coordinate;
and 2.8, in a compute shader, updating the coordinates and normal information of the grass blade triangle-mesh vertices (the normal is the vector through a point of the triangle perpendicular to the triangle's plane) from the computed predicted bone coordinates and the expansion vectors (the vectors by which vertices expand from the blade's skeleton line toward its two sides along the bones), modifying the coordinates of the vertices on both sides of each bone position according to the bone's predicted coordinate.
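Step 2.8 can be sketched as rebuilding the two mesh vertices flanking each bone from the predicted bone position and its expansion vector; the function name is illustrative and normal recomputation is omitted for brevity:

```python
def update_blade_vertices(bone_positions, expand_vectors):
    """For each predicted bone position, emit the pair of mesh vertices
    offset to either side of the skeleton line by the expansion vector."""
    verts = []
    for bone, exp in zip(bone_positions, expand_vectors):
        left = tuple(bone[i] - exp[i] for i in range(3))
        right = tuple(bone[i] + exp[i] for i in range(3))
        verts.extend([left, right])
    return verts
```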
In this embodiment, the specific implementation method of step 3 is as follows:
3.1, transmitting the grass blocks to be rendered into a compute shader;
step 3.2, distance culling of the grass blocks: judge the distance between each grass block's center coordinate and the camera coordinate, and cull grass blocks that are too far from the camera. A threshold can be set here, and any grass block whose center-to-camera distance exceeds the threshold is culled directly;
3.3, carrying out view-frustum culling on the grass blocks that pass step 3.2: transform the grass block centers to clip space, judge the positional relation between the grass block center coordinates and the screen, and cull grass blocks outside the camera frustum;
the positional relation between a grass block's center coordinate and the screen is calculated as p_clip = VP · p_center,
where VP represents the product of the view matrix (a transformation matrix transforming vertices from model space to view space) and the projection matrix (transforming vertices from view space to clip space), p_center represents the center coordinate of the grass block, and p_clip represents that center coordinate in clip space;
step 3.4, occlusion culling of the grass blocks that pass step 3.3: store the depth map of the previous frame via post-processing and downsample it; use the frustum FOV to obtain each grass block's projected size and coordinates on screen; from the projected size derive the depth-map mip level to fetch; judge whether the grass blades are occluded; and then cull the occluded grass blades;
the hierarchy mip is calculated as follows:
where p_ndc represents the grass block's center coordinate in NDC space, p_clip represents its coordinate in clip space, fov is half the field of view of the frustum, and mip represents the mip level of the depth map that needs sampling;
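The exact mip formula is an image in the source, so the following is an assumed reconstruction of the usual Hi-Z mip selection: estimate the block's projected height in pixels from its radius, distance, and half-FOV, then pick the mip whose texel roughly covers that footprint:

```python
import math

def hiz_mip_level(block_radius, dist, half_fov, screen_height, max_mip=10):
    """Pick the depth-pyramid mip level for a grass block of the given
    radius at the given camera distance (projected-size estimate is an
    assumption, not the patent's published formula)."""
    if dist <= 0.0:
        return 0
    # projected height in pixels of a sphere of radius block_radius
    pixels = block_radius / (dist * math.tan(half_fov)) * screen_height
    mip = math.ceil(math.log2(max(pixels, 1.0)))
    return max(0, min(max_mip, mip))
```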
and 3.5, adding the grass blocks remaining after the culling of steps 3.2-3.4 to the visible grass block set for rendering.
In this embodiment, the specific steps in step 3.4 for judging whether a grass blade is occluded are as follows: sample the depth map at the computed UV and mip level; if the depth values of the pixels above, below, to the left and to the right of the pixel at the grass block's position on the depth map are all greater than the grass block's z value in normalized device coordinate space, the grass blade is considered occluded.
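That four-neighbour test can be sketched on a CPU-side depth grid; the depth comparison direction follows the source's statement (occluded when the stored depths are all greater than the block's NDC z):

```python
def is_occluded(depth_map, x, y, block_ndc_z):
    """Occlusion test: the block at pixel (x, y) is occluded only if the
    up/down/left/right neighbours in the selected depth mip all store a
    depth greater than the block's NDC z value."""
    h, w = len(depth_map), len(depth_map[0])
    neighbours = [(x, y - 1), (x, y + 1), (x - 1, y), (x + 1, y)]
    for nx, ny in neighbours:
        nx, ny = min(max(nx, 0), w - 1), min(max(ny, 0), h - 1)  # clamp
        if depth_map[ny][nx] <= block_ndc_z:
            return False                 # at least one neighbour not behind
    return True
```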
In this embodiment, the specific implementation method of step 4 is as follows:
step 4.1, determining hardware instancing rendering parameters according to the visible grass block set, packaging the vertices (the vertices of the whole grassland model), indices, grass blade normals and grass blade UV information into arrays, and transmitting the arrays to the GPU (graphics processing unit) in a single pass;
the hardware instantiation rendering parameters are the vertex number of each instance, the number of the instances, the initial vertex index and the initial instance index respectively. The example here is what is rendered at each time, in this application the grass grid contained for each grass block.
Step 4.2, in the vertex shader, determining the vertex information of the grass block to be rendered from SV_VertexID and SV_InstanceID (SV_VertexID is the index in the vertex buffer of the vertex the current vertex-shader invocation is processing; SV_InstanceID identifies the current instance, i.e. which grass block is being rendered) together with the grass block directory, transforming that vertex information from model space to clip space, and simultaneously transforming the blade's vertex coordinates, normal and UV information;
step 4.3, tessellating the vertices. As shown in fig. 3, the region to be tessellated is mainly the middle region of the grass blade (the middle region holds the blade mesh vertices, i.e. the mesh vertices visible to the eye, which are driven by the invisible bone information). During tessellation, the UV of the grass segment is therefore used to judge whether a vertex needs tessellation (the y component of the UV of segments to be tessellated lies between 0.3333 and 0.6666), and a sine function is used to bend the subdivided vertices (only the middle grass segment is bent, as shown in fig. 3). The concrete way of bending is as follows:
where b_0 represents the first control point of the tessellation, b_2 represents the third control point, n represents the vertex normal, and uv represents the vertex UV; p_bent represents the vertex coordinates with the bending effect obtained by offsetting the tessellated vertices, and p represents the tessellated but unbent vertex coordinates. The most basic job of tessellation is to increase the number of vertices so that the model can be made finer; if the vertices were not offset, they would all be generated inside the original triangle. The control points are the vertices of the triangle to be tessellated;
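The sine bend of step 4.3 can be sketched as below; the 0.3333-0.6666 UV band comes from the source, while the bend amplitude and the choice to offset along x are assumptions for illustration (the patent offsets along the vertex normal):

```python
import math

def bend_vertex(p, uv_y, amplitude=0.1):
    """Sine-bend a tessellated vertex: only vertices whose UV y falls in
    the middle band (0.3333-0.6666) are offset; the sine is zero at both
    band edges and peaks mid-band, bowing the middle segment outward."""
    x, y, z = p
    if not (0.3333 <= uv_y <= 0.6666):
        return p                         # tips and roots stay unbent
    t = (uv_y - 0.3333) / (0.6666 - 0.3333)
    return (x + amplitude * math.sin(math.pi * t), y, z)
```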
and 4.4, rendering the grass blades after the curved surface subdivision by using a pixel shader, wherein the illumination model for rendering the grass blades is a Lambert illumination model.
The Lambert lighting model used here includes a light-transmission (translucency) effect. Grass blades are defined using two colors: one represents the grass tip (the mesh vertices at the blade tip) and one represents the grass root, and the color between tip and root is interpolated using UV.
where v represents the vector from the pixel to the camera, l represents the direction of the illumination, and n is the normal; these vectors are computed earlier in the shader and are not described again here.
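The shading of steps 4.4 and the two-color interpolation can be sketched as follows; the translucency weight and the exact back-face blend are assumptions, since the source only names the Lambert model with a light-transmission effect:

```python
def shade_blade(normal, light_dir, uv_y, tip_color, root_color,
                translucency=0.3):
    """Lambert diffuse with a cheap translucency term: lit faces get full
    Lambert, back faces leak a fraction of the light; the base color is a
    root->tip lerp driven by the blade UV's y component."""
    ndotl = sum(n * l for n, l in zip(normal, light_dir))
    diffuse = max(ndotl, 0.0) + translucency * max(-ndotl, 0.0)
    base = tuple(root_color[i] + (tip_color[i] - root_color[i]) * uv_y
                 for i in range(3))      # interpolate root -> tip
    return tuple(min(1.0, c * diffuse) for c in base)
```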
In this embodiment, in step 3.1, each grass block to be rendered is regarded as a sphere with a radius of 0.5.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (10)
1. A large-scale grassland rendering and simulating method is characterized by comprising the following steps:
step 1, generating a grassland model based on a triangular mesh, wherein the grassland model comprises at least one grass block, a plurality of random seeds are arranged in the grass block, and the grassland model is randomized through the random seeds to generate grass leaf data and bones;
step 2, simulating the random grassland model according to the grass leaf data and the skeleton to obtain a dynamic grassland model;
step 3, removing the grassland model by using a management method based on grass blocks;
and 4, efficiently rendering the eliminated grassland model by using the surface subdivision.
2. The large-scale grassland rendering and simulation method as claimed in claim 1, wherein the specific implementation method of step 1 is as follows:
step 1.1, defining the scale of the grassland based on the triangular mesh: the grass blocks are divided according to the granularity of a world coordinate system 1 x 1, each grass block comprises a plurality of randomly generated random coordinates, the random coordinates of the grass blocks and a grass block directory corresponding to the grass blocks are stored, the grass block directory is the serial number of a first vertex arranged at the head of each grass block in a vertex cache region, and the vertex is a point needing to be rendered on grass leaves forming the grass block;
step 1.2, calculating a random number of the random grassland model: calculating the random number of the grassland model by using the random coordinates called as grass root coordinates or as random seeds:
where p_root represents the random seed or random coordinate, namely the grass root coordinate of the grass blade, and rand represents the random number generated from the random seed;
step 1.3, generating a vertex, an index and a normal, UV: with the grass root coordinate as a base point, expanding the grass extending leaf skeleton line to two sides of the grass root coordinate to generate a vertex, generating a normal, an index and UV, and randomizing the vertex by using a random number; the generation calculation process of the vertex is as follows:
where v represents a vertex, segment_i represents the i-th grass segment (a grass segment is one section of a grass blade), r represents the rotation axis providing a rotation reference for the vertex, M represents a rotation of θ degrees about r, and x, y, z are the coordinates of the rotation axis r;
step 1.4, generating skeleton: and generating a plurality of PBD skeleton nodes serving as skeletons along the central axis of the grass blade by taking the grass root position as a base point, and defining grass blade data corresponding to each PBD skeleton node.
3. The method of claim 2, wherein the grass data comprises current coordinates of the bone, predicted coordinates of the bone, moving speed of the bone, initial position of the bone, dilation vector of the vertex, vertex directory, bone directory, absolute position constraint, distance constraint, and mass size information of the bone.
4. The large-scale grassland rendering and simulation method of claim 2, wherein there are 4 PBD skeletal nodes.
5. The large-scale grassland rendering and simulation method as claimed in claim 1, wherein the specific implementation method of the step 2 is as follows:
step 2.1, packaging the grass leaf data and the grass blocks and transmitting the grass leaf data and the grass blocks into a graphic processor, independently storing each data of the grass leaf data and the grass blocks into a calculation buffer area, and simultaneously operating a calculation shader to divide the grass blocks into two groups according to the distance between the grass blocks and a camera;
the distance dist between a grass block and the camera is calculated as dist = ||grids_i - cam||,
where grids_i represents the center coordinate of the i-th grass block and cam represents the camera coordinate;
2.2, according to the two grouped grass blocks obtained in the step 2.1, calculating the influence of wind force on the speed of the skeleton of each grass blade by using different iteration times, storing the wind force by using a noise map, and representing the action of the wind force by UV of the time rolling noise map;
the influence of the wind force on the speed of the skeleton of each of the grass blades is calculated as follows:
where g is the force of gravity, friction is the friction coefficient, p_0 is the initial position of the bone, p' is the predicted position of the bone, WindNoiseX and WindNoiseZ are the color values corresponding to the bone in the noise map, and Δt is the running time, a fixed value;
step 2.3, updating the prediction coordinate of each bone according to the speed of the bones of the grass blades calculated in the step 2.2, wherein the prediction coordinate of the bones is calculated in the following way:
where p' and p respectively represent the predicted and current positions of the bone, v is the velocity of the bone, and Δt is the running time;
step 2.4, packing collider data and transmitting it into the GPU, wherein each collider is treated as a sphere whose information comprises a sphere center and a radius; a sphere collision is computed for each bone, and if a bone lies inside a collider, the bone is moved to the intersection of the collider's sphere with the extension line from the sphere center through the bone;
step 2.5, calculating distance constraint and absolute position constraint, wherein the distance constraint is the constraint among all bones on the same grass blade, and the absolute position constraint is the coordinate of the root bones of the grass blade; the distance constraint is calculated as follows:
where E is the elastic coefficient, R is the elastic limit of the spring, v_{i-1}^t and v_i^t represent the velocities at time t of the (i-1)-th and i-th bones under the distance constraint, and v_i^{t-1} and v_{i-1}^{t-1} represent the corresponding velocities at time t-1;
and 2.6, updating the speed of the current skeleton according to the predicted coordinate and the current coordinate of the skeleton, wherein the updating mode is as follows:
step 2.7, changing the current coordinate of the skeleton into a prediction coordinate;
and 2.8, updating the coordinates and normal line information of the vertexes of the grass-leaf triangular mesh in a calculation shader according to the calculated predicted coordinates of the bones and the calculated outward expansion vectors of the vertexes.
6. The large-scale grassland rendering and simulation method as claimed in claim 5, wherein the specific implementation method of step 3 is as follows:
3.1, transmitting the grass blocks to be rendered into a compute shader;
step 3.2, distance elimination is carried out on the grass blocks: judging the distance between the center coordinate of the grass block and the coordinate of the camera, and removing the grass block which is too far away from the camera;
3.3, carrying out visual cone elimination on the grass blocks passing through the step 3.2, transforming the centers of the grass blocks to a cutting space, judging the position relation between the center coordinates of the grass blocks and a screen, and eliminating the grass blocks outside the visual cones of the camera;
the calculation mode of the position relation between the central coordinates of the grass blocks and the screen is as follows:
wherein VP represents the product of the view matrix (a transformation matrix transforming vertices from model space to view space) and the projection matrix (transforming vertices from view space to clip space), p_center represents the center coordinate of the grass block, and p_clip represents that center coordinate in clip space;
step 3.4, carrying out shielding and removing on the grass blocks passing through the step 3.3: storing the depth image of the previous frame image through post-processing, then sampling downwards, obtaining the projection size and the coordinates of the grass block on a screen by using the FOV of a view cone, obtaining the level mip of the depth image to be obtained according to the projection size of the grass block, judging whether the grass leaves are shielded, and then removing the shielded grass leaves;
the hierarchy mip is calculated as follows:
where p_ndc represents the grass block's center coordinate in NDC space, p_clip represents its coordinate in clip space, fov is half the field of view of the frustum, and mip represents the mip level of the depth map that needs sampling;
and 3.5, adding the grass blocks after being removed in the step 3.2-3.4 into a grass block visible set for rendering.
7. The large-scale grassland rendering and simulation method of claim 6, wherein the step 3.4 of determining whether the grass blades are occluded specifically comprises: sampling the depth map at the computed UV and mip level, and if the depth values of the pixels above, below, to the left and to the right of the pixel at the grass block's position on the depth map are all greater than the grass block's z value in normalized device coordinate space, the grass blade is considered occluded.
8. The large-scale grassland rendering and simulation method as claimed in claim 6, wherein the specific implementation method of step 4 is as follows:
step 4.1, determining hardware instancing rendering parameters according to the visible grass block set, packaging the vertices, indices, grass blade normals and grass blade UV information into arrays, and transmitting the arrays to a GPU (graphics processing unit) in a single pass;
step 4.2, determining, in a vertex shader, the vertex information of the grass block to be rendered according to the corresponding SV_VertexID and SV_InstanceID and the grass block directory, transforming it from model space to clip space, and simultaneously transforming the vertex coordinates, normal and UV information of the grass blade;
4.3, judging whether the vertex needs to be subjected to surface subdivision by using UV of the grass section, and bending the subdivided vertex by using a sine function in a specific mode as follows:
where b_0 represents the first control point of the tessellation, b_2 represents the third control point, n represents the vertex normal, and uv represents the vertex UV; p_bent represents the vertex coordinates with the bending effect obtained by offsetting the tessellated vertices, and p represents the tessellated but unbent vertex coordinates; the most basic job of tessellation is to increase the number of vertices so that the model can be made finer, and if the vertices were not offset, they would all be generated inside the original triangle;
and 4.4, rendering the grass blades after the curved surface subdivision by using a pixel shader, wherein the illumination model for rendering the grass blades is a Lambert illumination model.
9. The large-scale grassland rendering and simulation method of claim 6, wherein in step 3.1, the grass blocks to be rendered are regarded as spheres with a radius of 0.5.
10. The large-scale grassland rendering and simulation method of claim 8, wherein in step 4.3, the determination criteria for determining whether the vertices need to be tessellated using the UV of the grass segments are: the y-component of the grass segments UV to be tessellated is 0.3333 to 0.6666.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111526989.6A CN114254501A (en) | 2021-12-14 | 2021-12-14 | Large-scale grassland rendering and simulating method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114254501A true CN114254501A (en) | 2022-03-29 |
Family
ID=80792145
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111526989.6A Pending CN114254501A (en) | 2021-12-14 | 2021-12-14 | Large-scale grassland rendering and simulating method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114254501A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116977523A (en) * | 2023-07-25 | 2023-10-31 | 深圳市快速直接工业科技有限公司 | STEP format rendering method at WEB terminal |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101751694A (en) * | 2008-12-10 | 2010-06-23 | 中国科学院自动化研究所 | Method for rapidly simplifying and drawing complex leaf |
WO2013066339A1 (en) * | 2011-11-04 | 2013-05-10 | Intel Corporation | Plant simulation for graphics engines |
CN103605501A (en) * | 2013-07-01 | 2014-02-26 | 绵阳市武道数码科技有限公司 | Game vegetation system |
CN112807685A (en) * | 2021-01-22 | 2021-05-18 | 珠海天燕科技有限公司 | Grassland rendering method, grassland rendering device and grassland rendering equipment based on game role track |
-
2021
- 2021-12-14 CN CN202111526989.6A patent/CN114254501A/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101751694A (en) * | 2008-12-10 | 2010-06-23 | 中国科学院自动化研究所 | Method for rapidly simplifying and drawing complex leaf |
WO2013066339A1 (en) * | 2011-11-04 | 2013-05-10 | Intel Corporation | Plant simulation for graphics engines |
CN103605501A (en) * | 2013-07-01 | 2014-02-26 | 绵阳市武道数码科技有限公司 | Game vegetation system |
CN112807685A (en) * | 2021-01-22 | 2021-05-18 | 珠海天燕科技有限公司 | Grassland rendering method, grassland rendering device and grassland rendering equipment based on game role track |
Non-Patent Citations (4)
Title |
---|
YUMEI WANG ET AL.: "Modeling and Rendering of Dynamic Grassland Scene in the Wind", 《2009 WASE INTERNATIONAL CONFERENCE ON INFORMATION ENGINEERING》, vol. 1, 11 July 2009 (2009-07-11), pages 69 - 74, XP031516863 * |
FAN ZENGZHI: "Research and Implementation of Grass Capture and Large-Scale Grassland Rendering Simulation", China Masters' Theses Full-text Database, Information Science and Technology, no. 1, 15 January 2020 (2020-01-15), pages 138 - 1704 *
WANG ZIHAO: "Key Technologies of Large-Scale Grassland Rendering and Simulation", CNKI Masters' Theses Full-text Database, 14 June 2022 (2022-06-14), pages 1 - 77 *
CHEN SHENGYU: "GPU-Based Rendering and Dynamic Simulation of Complex Vegetation Scenes", China Masters' Theses Full-text Database, Information Science and Technology, no. 2, 15 February 2019 (2019-02-15), pages 138 - 2980 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116977523A (en) * | 2023-07-25 | 2023-10-31 | 深圳市快速直接工业科技有限公司 | STEP format rendering method at WEB terminal |
CN116977523B (en) * | 2023-07-25 | 2024-04-26 | 快速直接(深圳)精密制造有限公司 | STEP format rendering method at WEB terminal |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8698810B2 (en) | Reorienting properties in hair dynamics | |
US10163243B2 (en) | Simulation of hair in a distributed computing environment | |
JP6159807B2 (en) | Computer graphics method for rendering a three-dimensional scene | |
US20050210994A1 (en) | Volumetric hair rendering | |
US9519988B2 (en) | Subspace clothing simulation using adaptive bases | |
CN105205861B (en) | Tree three-dimensional Visualization Model implementation method based on Sphere Board | |
US10192342B1 (en) | Using stand-in camera to determine grid for rendering an image from a virtual camera | |
US8698799B2 (en) | Method and apparatus for rendering graphics using soft occlusion | |
US8054311B1 (en) | Rig baking for arbitrary deformers | |
CN103489216A (en) | 3d object scanning using video camera and tv monitor | |
TW201907270A (en) | Reorientation for VR in a virtual reality | |
JP4936522B2 (en) | Image processing method and image processing apparatus | |
CN114254501A (en) | Large-scale grassland rendering and simulating method | |
CN113962979A (en) | Cloth collision simulation enhancement presentation method and device based on depth image | |
JP2009020874A (en) | Hair simulation method, and device therefor | |
Peng et al. | A real-time system for crowd rendering: parallel lod and texture-preserving approach on gpu | |
US9665955B1 (en) | Pose-space shape fitting | |
US9734616B1 (en) | Tetrahedral volumes from segmented bounding boxes of a subdivision | |
Chen et al. | Real-time continuum grass | |
JPH0944698A (en) | Method and device for generating simulated marine wave image | |
US9639981B1 (en) | Tetrahedral Shell Generation | |
De Gyves et al. | Proximity queries for crowd simulation using truncated Voronoi diagrams | |
Persson | Volume-Preserving Deformation of Terrain in Real-Time | |
Im et al. | Efficient Rain Simulation based on Constrained View Frustum | |
Jæger et al. | Can't see the Forest for the Trees: Perceiving Realism of Procedural Generated Trees in First-Person Games. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||