CN113599818A - Vegetation rendering method and device, electronic equipment and readable storage medium


Info

Publication number
CN113599818A
Authority
CN
China
Prior art keywords
vegetation
target
original
patches
polygon
Prior art date
Legal status
Granted
Application number
CN202110938397.9A
Other languages
Chinese (zh)
Other versions
CN113599818B (en)
Inventor
闻亚洲
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority: CN202110938397.9A
Publication of CN113599818A
Application granted; publication of CN113599818B
Legal status: Active (granted)

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 - Controlling the output signals based on the game progress
    • A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/525 - Changing parameters of virtual cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 - Methods for processing data by generating or executing the game program
    • A63F 2300/66 - Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F 2300/6661 - Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera


Abstract

The present disclosure provides a vegetation rendering method and apparatus, an electronic device, and a storage medium. The vegetation rendering method includes: obtaining an original vegetation model corresponding to a vegetation object, where the original vegetation model includes a plurality of original polygon patches that form a crown outline of the vegetation; enlarging and rotating each of the original polygon patches to obtain a plurality of target polygon patches; obtaining a target vegetation model based on the target polygon patches, where each target polygon patch of the target vegetation model faces a virtual camera and at least some adjacent target polygon patches intersect; and rendering the target vegetation model to generate the vegetation object. According to the embodiments of the present application, the rendering effect of the vegetation can be guaranteed while the rendering efficiency is improved.

Description

Vegetation rendering method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of graphics rendering technologies, and in particular to a vegetation rendering method and apparatus, an electronic device, and a storage medium.
Background
Vegetation is an important element of a game scene. Rendering vegetation in the scene simulates a more realistic natural environment, increases the player's sense of immersion, and improves the gaming experience.
During the development of a game scene, limited device performance (for example, on mobile phones) makes modeling every leaf impractical. Leaf clusters are therefore commonly approximated with flat patches: a small cluster, or the foliage of one branch, is modeled as a patch, a number of such patches are distributed over the trunk, and a leaf-cluster texture map is applied to each patch to stand in for the leaf clusters of a tree in the game.
However, although this approach reduces system load, the rendering effect is poor: the vegetation looks thin and lacks volume. How to balance performance cost against the realism of vegetation rendering is therefore a problem that needs to be solved urgently.
Disclosure of Invention
The embodiments of the present disclosure provide at least a vegetation rendering method and apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a vegetation rendering method, including:
acquiring an original vegetation model corresponding to a vegetation object; the original vegetation model comprises a plurality of original polygon patches that form a crown outline of the vegetation;
enlarging and rotating each of the plurality of original polygon patches to obtain a plurality of target polygon patches;
obtaining a target vegetation model based on the plurality of target polygon patches, wherein each target polygon patch of the target vegetation model faces a virtual camera, and at least some adjacent target polygon patches intersect;
rendering the target vegetation model to generate the vegetation object.
In the embodiments of the present disclosure, the original polygon patches are individually enlarged and rotated so that every resulting target polygon patch faces the virtual camera and at least some adjacent target polygon patches intersect. This raises the utilization of the model's vertices: for the same vertex budget the final vegetation appears denser, and for the same density fewer vertices are needed. In other words, the rendering efficiency and the rendering effect of the vegetation are improved at the same time.
According to the first aspect, in a possible implementation, the original polygon patches are quadrilateral patches.
In the embodiments of the present disclosure, using quadrilateral patches as the original polygon patches gives the rendered vegetation object a better appearance, improving the rendering effect.
In a possible implementation, before the enlarging and rotating of the original polygon patches, the method further includes:
performing coordinate conversion on the original UV coordinates of the plurality of original polygon patches;
and the enlarging and rotating of the plurality of original polygon patches then includes:
enlarging and rotating each of the coordinate-converted original polygon patches.
In the embodiments of the present disclosure, converting the original UV coordinates of the original polygon patches allows each patch to be scaled about its own center, so that the intersecting target polygon patches are distributed more evenly, further improving the rendering effect.
According to the first aspect, in a possible implementation, before the rendering of the target vegetation model to generate the vegetation object, the method further comprises:
determining a target view-angle orientation of the virtual camera lens upon detecting a triggering event for adjusting the virtual camera lens;
and rotating each of the plurality of target polygon patches according to the target view-angle orientation, so that each target polygon patch faces the virtual camera.
In the embodiments of the present disclosure, after the view-angle orientation of the virtual camera lens changes, the target polygon patches are rotated to follow it, so the rendered appearance of the vegetation object does not change and the player's visual experience is preserved.
According to the first aspect, in a possible implementation, the enlarging and rotating of the plurality of original polygon patches to obtain the plurality of target polygon patches comprises:
enlarging each original polygon patch;
and rotating the enlarged original polygon patch based on an original orientation angle of the original polygon patch and an adjustment value to obtain the target polygon patch; the plurality of target polygon patches intersect one another.
In the embodiments of the present disclosure, because each enlarged original polygon patch is rotated based on its own original orientation angle plus an adjustment value, the rotated target polygon patches all face the virtual camera while their orientation angles are not exactly identical, which produces the disordered, interleaved look of real foliage and improves rendering realism.
According to the first aspect, in a possible implementation, before the rendering of the target vegetation model, the method further comprises:
acquiring wind attribute information in a target scene;
and dynamically processing the original UV coordinates of the plurality of original polygon patches based on the wind attribute information.
In the embodiments of the present disclosure, when wind blows in the game scene, the vegetation object exhibits a corresponding swaying effect, so the rendered vegetation matches the in-scene weather and rendering realism improves.
According to the first aspect, in a possible implementation, the rendering of the target vegetation model to generate the vegetation object comprises:
removing pixels during the rendering of the target vegetation model to generate the vegetation object.
In the embodiments of the present disclosure, removing pixels during rendering gives the generated vegetation object a hollowed, cut-out effect, making the rendered vegetation object more realistic.
In a second aspect, an embodiment of the present disclosure provides a vegetation rendering apparatus, including:
an acquisition module, configured to acquire an original vegetation model corresponding to a vegetation object; the original vegetation model comprises a plurality of original polygon patches that form a crown outline of the vegetation;
a processing module, configured to enlarge and rotate each of the plurality of original polygon patches to obtain a plurality of target polygon patches;
a determining module, configured to obtain a target vegetation model based on the plurality of target polygon patches, where each target polygon patch of the target vegetation model faces a virtual camera, and at least some adjacent target polygon patches intersect;
and a rendering module, configured to render the target vegetation model to generate the vegetation object.
According to the second aspect, in a possible implementation, the original polygon patches are quadrilateral patches.
According to the second aspect, in a possible implementation, the processing module is further configured to:
perform coordinate conversion on the original UV coordinates of the plurality of original polygon patches; and
enlarge and rotate each of the coordinate-converted original polygon patches.
According to the second aspect, in a possible implementation, the determining module is further configured to:
determine a target view-angle orientation of the virtual camera lens upon detecting a triggering event for adjusting the virtual camera lens;
and the processing module is further configured to:
rotate each of the plurality of target polygon patches according to the target view-angle orientation, so that each target polygon patch faces the virtual camera.
According to the second aspect, in a possible implementation, the processing module is specifically configured to:
enlarge each original polygon patch; and
rotate the enlarged original polygon patch based on its original orientation angle and an adjustment value to obtain the target polygon patch; the plurality of target polygon patches intersect one another.
According to the second aspect, in a possible implementation, the obtaining module is further configured to:
acquire wind attribute information in a target scene;
and the processing module is further configured to:
dynamically process the original UV coordinates of the plurality of original polygon patches based on the wind attribute information.
According to the second aspect, in a possible implementation, the rendering module is specifically configured to:
remove pixels during the rendering of the target vegetation model to generate the vegetation object.
In a third aspect, an embodiment of the present disclosure provides an electronic device, comprising a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate over the bus, and the machine-readable instructions, when executed by the processor, perform the vegetation rendering method of the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the vegetation rendering method according to the first aspect.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. The following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive further related drawings from them without inventive effort.
Fig. 1 is a schematic diagram illustrating an implementation body of a vegetation rendering method provided by an embodiment of the present disclosure;
fig. 2 shows a flowchart of a vegetation rendering method provided by an embodiment of the present disclosure;
fig. 3 illustrates a schematic diagram of an original vegetation model provided by an embodiment of the present disclosure;
fig. 4 illustrates a schematic diagram of a target vegetation model provided by an embodiment of the present disclosure;
fig. 5 illustrates a schematic diagram of a generated vegetation object provided by an embodiment of the present disclosure;
fig. 6 is a flowchart illustrating a method for performing respective enlarging and rotating processes on a plurality of original polygon patches according to an embodiment of the present disclosure;
fig. 7 shows a flowchart of another vegetation rendering method provided by an embodiment of the present disclosure;
fig. 8 is a flowchart illustrating a method for dynamically processing an original polygon patch according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a vegetation rendering apparatus provided in an embodiment of the present disclosure;
fig. 10 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.
Vegetation is an important element of a game scene. Rendering vegetation in the scene simulates a more realistic natural environment, increases the player's sense of immersion, and improves the gaming experience.
With the development of the gaming industry, more games are being developed for mobile terminals because of their flexibility. At present, when a wide vegetation effect (such as a sea of grass, a sea of flowers, or a forest) is expressed in a game, large numbers of patch geometries with texture maps are used, and the textures assigned to these patches simulate the vegetation effect.
In the development of a game scene, the device performance of the game client limits the polygon-vertex budget, so modeling and rendering every leaf of the vegetation with polygons is hard to achieve. Generally, a small cluster, or the branches and leaves of one branch, is modeled as a patch; a number of such patches are distributed over the trunk, and a leaf-cluster texture map is applied to each, standing in for the leaf clusters of a tree in the game.
However, research shows that although this reduces system load, the rendering effect is poor: the vegetation looks thin and lacks volume. How to balance performance cost against the realism of vegetation rendering is therefore a problem that needs to be solved urgently.
The present disclosure provides a vegetation rendering method, comprising: acquiring an original vegetation model corresponding to a vegetation object, the original vegetation model comprising a plurality of original polygon patches that form a crown outline of the vegetation; enlarging and rotating each of the original polygon patches to obtain a plurality of target polygon patches; obtaining a target vegetation model based on the plurality of target polygon patches, wherein each target polygon patch of the target vegetation model faces a virtual camera and at least some adjacent target polygon patches intersect; and rendering the target vegetation model to generate the vegetation object.
In the embodiments of the present disclosure, the original polygon patches are individually enlarged and rotated so that every resulting target polygon patch faces the virtual camera and at least some of them intersect. The rendered vegetation object can then look real and full with fewer polygon patches; that is, the rendering efficiency and the rendering effect of the vegetation improve together.
Referring to fig. 1, which shows a schematic diagram of an execution subject of the vegetation rendering method according to an embodiment of the present disclosure, the execution subject of the method is an electronic device 100, which may be a terminal or a server. For example, the method may be applied to a terminal, which may be, without limitation, the smart phone 10, desktop computer 20, or notebook computer 30 shown in fig. 1, or a smart watch, tablet computer, or other device not shown in fig. 1. The method may also be applied to the server 40, or to an implementation environment consisting of a terminal and the server 40. The server 40 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, big data, and artificial intelligence platforms.
In other embodiments, the electronic device 100 may also include an AR (Augmented Reality) device, a VR (Virtual Reality) device, an MR (Mixed Reality) device, and the like. For example, the AR device may be a mobile phone or a tablet computer with an AR function, or may be AR glasses, which is not limited herein.
In some embodiments, the server 40 may communicate with the smart phone 10, the desktop computer 20, and the notebook computer 30 via the network 50. The network 50 may include various types of connections, such as wired links, wireless communication links, or fiber-optic cables.
In addition, the vegetation rendering method may be software running in a terminal or a server, such as an application having a vegetation rendering function. In some possible implementations, the vegetation rendering method may be implemented by a processor invoking computer readable instructions stored in a memory.
Referring to fig. 2, a flowchart of a vegetation rendering method provided by an embodiment of the present disclosure is shown; the vegetation rendering method includes the following steps S101 to S104:
S101, obtaining an original vegetation model corresponding to a vegetation object; the original vegetation model includes a plurality of original polygon patches that form a crown outline of the vegetation.
Referring to fig. 3, fig. 3 is a schematic view of an original vegetation model according to an embodiment of the disclosure. In the embodiments of the present disclosure, the vegetation objects required by a game scene can be created, designed, and edited in three-dimensional modeling software; the created original vegetation model is then imported into the game for rendering, the corresponding game scene is generated, and the scene is displayed to the user. Specifically, a designer creates the original vegetation model of the vegetation object in the three-dimensional modeling software, and the game engine acquires the original vegetation model from that software.
Illustratively, the original vegetation model may be created in 3ds Max, Maya, or other three-dimensional modeling software, and the created original vegetation model may include at least one virtual vegetation A. The virtual vegetation A is illustrated in the embodiments of the present disclosure by taking a crowned tree as an example; of course, the application does not limit the vegetation type, the engine type, or the software type, which can be selected flexibly according to the actual application scenario.
Illustratively, when modeling the original vegetation model, the polygon patches A1 need to be kept separate from each other. Specifically, the UV coordinates of each polygon patch must be split from those of its neighbors (a UV seam between patches), and the UV coordinates of each polygon patch should fill the whole UV space (0-1). In addition, the directions of the UV coordinates of the patches should vary irregularly; that is, the UV orientations are scrambled as much as possible, as in the sketch below.
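For illustration only (not part of the patent text), this per-patch UV layout might be set up as in the following minimal Python/NumPy sketch; the function and variable names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# One quad's UV square; every patch fills the full 0..1 UV space, with a
# UV split (seam) between patches.
BASE_UV = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

def scrambled_patch_uvs(num_patches: int) -> np.ndarray:
    """Return (num_patches, 4, 2) UVs: one full 0..1 square per quad patch,
    its corner order cycled by a random amount so that the UV directions
    of the patches vary irregularly."""
    uvs = np.empty((num_patches, 4, 2))
    for i in range(num_patches):
        uvs[i] = np.roll(BASE_UV, int(rng.integers(0, 4)), axis=0)
    return uvs
```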
In the embodiments of the present disclosure, to achieve a better rendering effect, the original polygon patch A1 is a quadrilateral patch. It is understood that in other embodiments the original polygon patch may also be a triangular patch or another polygon patch, which is not limited here.
S102, enlarging and rotating each of the plurality of original polygon patches to obtain a plurality of target polygon patches.
Illustratively, each original polygon patch may be individually enlarged and rotated by a shader. A shader is an editable program used for image rendering that replaces the fixed-function rendering pipeline. Shaders mainly include the vertex shader (Vertex Shader) and the pixel shader (Pixel Shader): the vertex shader is responsible for computing the geometric transforms of vertices, and the pixel shader is responsible for computing the color of each fragment.
In some embodiments, when the original polygon patches are individually enlarged, each patch is enlarged with reference to its own center. The enlargement ratio may be preset, i.e., the original polygon patches are enlarged by a preset ratio, or it may be generated from player-generated attribute information; this is not limited here. When each of the original polygon patches is rotated, the rotation is performed based on the position of the virtual camera. A sketch of center-relative enlargement follows.
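A minimal sketch of center-relative enlargement, assuming each patch is stored as its four corner positions (the names are illustrative, not from the patent):

```python
import numpy as np

def enlarge_about_center(corners: np.ndarray, scale: float) -> np.ndarray:
    """Enlarge one patch, given as corner positions of shape (4, 3), about
    its centroid; the patch's position within the crown stays unchanged."""
    center = corners.mean(axis=0)
    return center + (corners - center) * scale
```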
S103, obtaining a target vegetation model based on the plurality of target polygon patches, where each target polygon patch of the target vegetation model faces the virtual camera, and at least some adjacent target polygon patches intersect.
Referring to fig. 4, fig. 4 is a schematic view of a target vegetation model according to an embodiment of the disclosure. After the original polygon patches are individually enlarged and rotated, the target polygon patches B1 are obtained, and the target vegetation model B is obtained from them. That each target polygon patch faces the virtual camera means the included angle between the lens direction of the virtual camera and the normal of each target polygon patch is smaller than a preset angle; when that included angle is 0 degrees, the lens direction of the virtual camera is perpendicular to the target polygon patch.
In the embodiments of the present disclosure, the lens direction of the virtual camera is the direction of the center line of the camera's field of view, and the preset angle may be 30 degrees, 40 degrees, or another angle, determined by the specific requirements and not limited here. The facing condition can be checked as in the sketch below.
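A small sketch of the facing check; faces_camera and its default threshold are assumptions for illustration:

```python
import numpy as np

def faces_camera(patch_normal: np.ndarray,
                 lens_direction: np.ndarray,
                 preset_angle_deg: float = 30.0) -> bool:
    """Return True if the included angle between the camera's lens
    direction and the patch normal (taken pointing back at the camera)
    is below the preset angle. At 0 degrees the lens direction is
    perpendicular to the patch plane."""
    n = patch_normal / np.linalg.norm(patch_normal)
    d = lens_direction / np.linalg.norm(lens_direction)
    cos_a = np.clip(np.dot(-d, n), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_a))) <= preset_angle_deg
```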
In some specific implementations, the original UV coordinates of the polygon patches may be acquired first; matrix transformations are then applied to the original UV coordinates to obtain vertex offset information; finally, in the model's local space, the vertex offset information is added to the original vertex coordinates of the original vegetation model to obtain the target vegetation model.
Specifically, obtaining the vertex offset information may include the following steps (1) to (4), sketched in code after the list:
(1) converting the original UV coordinates (two-dimensional data) into three-dimensional data to obtain first target data;
(2) treating the first target data as coordinates in world space and transforming them into view (observation) space to obtain second target data;
(3) treating the second target data as coordinates in local space and transforming them into world space to obtain third target data;
(4) normalizing the third target data to obtain the vertex offset information.
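The following is a hedged numeric sketch of steps (1)-(4), not the patent's exact matrix chain: the world-to-view and local-to-world matrices are engine-specific, so the camera's right and up axes (cam_right and cam_up, both assumptions of this sketch) stand in for them. The net effect the steps describe, a camera-facing offset per vertex derived from its remapped UV, is what the code computes:

```python
import numpy as np

def vertex_offsets(remapped_uv: np.ndarray,
                   cam_right: np.ndarray,
                   cam_up: np.ndarray,
                   patch_size: float) -> np.ndarray:
    """remapped_uv: (N, 2) per-vertex UVs in (-1, 1).
    Lift each 2D UV to a 3D offset in a camera-aligned basis
    (stand-in for steps (1)-(3)), then normalize its length (step (4))."""
    offsets = remapped_uv[:, :1] * cam_right + remapped_uv[:, 1:2] * cam_up
    lengths = np.linalg.norm(offsets, axis=1, keepdims=True)
    return patch_size * offsets / np.maximum(lengths, 1e-8)

# In the model's local space, the target vertices are then
#   target_verts = original_verts + vertex_offsets(uv, right, up, size)
```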
S104, rendering the target vegetation model to generate the vegetation object.
Referring to fig. 5, fig. 5 is a schematic view of a generated vegetation object according to an embodiment of the present disclosure. After the target vegetation model is obtained, it can be rendered to generate the vegetation object C shown in fig. 5.
In some embodiments, pixels can be removed while the target vegetation model is rendered; for example, pixels can be discarded according to the alpha information of the texture, so that the rendered vegetation object C has a hollowed, cut-out effect, further improving the realism of the vegetation rendering, as in the sketch below.
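As a software stand-in for shader-side alpha clipping (the names and the cutoff value are assumptions, not from the patent):

```python
import numpy as np

def alpha_clip(rgba: np.ndarray, cutoff: float = 0.5) -> np.ndarray:
    """Remove (zero out) every pixel whose alpha is below the cutoff;
    this is what gives the rendered leaf cluster its hollowed,
    cut-out silhouette."""
    out = rgba.copy()
    out[rgba[..., 3] < cutoff] = 0.0
    return out
```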
In the embodiments of the present disclosure, the original polygon patches are individually enlarged and rotated so that every resulting target polygon patch faces the virtual camera and at least some adjacent target polygon patches intersect. This raises the utilization of the model's vertices: for the same vertex budget the final vegetation appears denser, and for the same density fewer vertices are needed. In other words, the rendering efficiency and the rendering effect of the vegetation are improved at the same time.
The above step S102 is described in detail below with reference to specific embodiments.
In some embodiments, referring to fig. 6, step S102, in which the original polygon patches are individually enlarged and rotated to obtain the target polygon patches, may include the following steps S1021 to S1022:
and S1021, amplifying the original polygon patch.
S1022, based on the original orientation angle and the adjustment value of the original polygon facet, performing rotation processing on the amplified original polygon facet to obtain the target polygon facet; the multiple target polygon patches are intersected with each other.
For example, an original polygon facet may be first amplified, and then, based on an original orientation angle and an adjustment value of the original polygon facet, the amplified original polygon facet may be rotated to obtain a target polygon facet, so that the plurality of target polygon facets are intersected with each other.
The original orientation angle of the original polygon patch refers to an included angle between a normal of the original polygon patch and the orientation of the virtual camera lens. The adjustment value may also be referred to as a noise value, which may be a randomly generated random number or a random number generated based on the original orientation angle of the original polygon patch, so that the rotated target polygon patches all face the virtual camera, but the orientation angles are not exactly the same, thereby achieving the effect of disordering and interleaving the leaves and improving the rendering authenticity.
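A minimal sketch of such a jittered rotation, assuming the enlarged patch already lies in a camera-facing plane spanned by the first two coordinate axes; the jitter range is an illustrative choice:

```python
import numpy as np

def jittered_rotation(corners: np.ndarray,
                      orientation_deg: float,
                      rng: np.random.Generator,
                      jitter_deg: float = 15.0) -> np.ndarray:
    """Rotate a camera-facing patch (corners of shape (4, 3)) in its own
    plane by its original orientation angle plus a random adjustment
    (noise) value, so no two patches share exactly the same angle."""
    a = np.radians(orientation_deg + rng.uniform(-jitter_deg, jitter_deg))
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    center = corners.mean(axis=0)
    out = corners.copy()
    out[:, :2] = (corners[:, :2] - center[:2]) @ rot.T + center[:2]
    return out
```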
Referring to fig. 7, fig. 7 is a flowchart of another vegetation rendering method according to an embodiment of the present disclosure. The vegetation rendering method includes the following steps S201 to S207:
S201, obtaining an original vegetation model corresponding to a vegetation object; the original vegetation model includes a plurality of original polygon patches that form a crown outline of the vegetation.
This step is similar to step S101 and is not described here again.
S202, performing coordinate conversion on the original UV coordinates of the plurality of original polygon patches.
Illustratively, the original UV coordinates lie in the range (0, 1). To scale each original polygon patch about its own center, so that the intersecting target polygon patches are distributed more evenly, the original UV coordinates are remapped from (0, 1) to (-1, 1); the UV coordinate at the center of each polygon patch then becomes (0, 0), as in the sketch below.
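The remap itself is one line; a sketch with an illustrative name:

```python
import numpy as np

def remap_uv(uv: np.ndarray) -> np.ndarray:
    """Convert UVs from the (0, 1) range to (-1, 1): the coordinate at the
    center of each patch becomes (0, 0), so scaling acts about the center."""
    return uv * 2.0 - 1.0
```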
S203, enlarging and rotating each of the coordinate-converted original polygon patches.
This step is similar to step S102 and is not described here again.
S204, obtaining a target vegetation model based on the plurality of target polygon patches, where each target polygon patch of the target vegetation model faces the virtual camera, and at least some adjacent target polygon patches intersect.
This step is similar to step S103 and is not described here again.
S205, determining the target view-angle orientation of the virtual camera lens when a triggering event for adjusting the virtual camera lens is detected.
It is understood that in a game scene the view-angle orientation of the virtual camera changes under the player's control, and different orientations present different content. Thus, when a triggering event for adjusting the virtual camera lens is detected, the target view-angle orientation of the virtual camera lens needs to be determined.
S206, rotating each of the plurality of target polygon patches according to the target view-angle orientation, so that each target polygon patch faces the virtual camera.
When the view-angle orientation of the virtual camera changes, leaving the target polygon patches unadjusted would distort the rendered vegetation object. The target polygon patches are therefore rotated according to the target view-angle orientation so that each again faces the virtual camera; the rendered vegetation object then looks unchanged after the camera lens moves, preserving the player's visual experience.
S207, rendering the target vegetation model to generate the vegetation object.
This step is similar to step S104 and is not described here again.
It is understood that a game scene may include a wind effect; if the wind blows but the vegetation object shows no response, the player's experience suffers. Therefore, in some embodiments, to improve the player's sense of realism, and referring to fig. 8, before the target vegetation model is rendered, the vegetation rendering method further includes the following steps S208 to S209:
and S208, acquiring the blowing attribute information in the target scene.
And S209, dynamically processing the UV coordinates of the original polygon patches based on the blowing attribute information.
In this embodiment of the present disclosure, the UV coordinates of the plurality of original polygon patches are dynamically processed first, and the enlargement and rotation are performed afterwards. It is understood that in other embodiments the original polygon patches may be enlarged and rotated first and dynamically processed afterwards; as long as the wind effect is finally achieved, the specific order is not limited here.
The wind attribute information includes, but is not limited to, wind strength information, wind direction information, and the like.
In some embodiments, dynamically processing the original UV coordinates of the plurality of original polygon patches may specifically include the following (a) to (d), sketched in code after this list:
(a) rotating the original UV coordinates to obtain rotated UV coordinates;
(b) converting the original vegetation model from local space to world space to obtain the target vertex coordinates of each vertex of the original vegetation model in world space;
(c) generating a noise texture based on a time-varying engine built-in function and the target vertex coordinates;
(d) fusing (for example, lerp-blending) the original UV coordinates and the rotated UV coordinates using the noise texture to obtain the target UV coordinates.
In this embodiment, rotating the original UV coordinates means randomly rotating the UV coordinates of each polygon patch by a different angle, such as 90, 180, or 270 degrees. After the target UV coordinates are obtained, the subsequent steps, such as coordinate conversion, enlargement, and rotation, are applied to them.
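A hedged sketch of steps (a)-(d): a plain sine of time and world position stands in for the engine's built-in time-varying noise, and all names are illustrative, not from the patent:

```python
import numpy as np

def rotate_uv_about_center(uv: np.ndarray, k: int) -> np.ndarray:
    """Rotate UVs in (0, 1) about the patch center (0.5, 0.5) by k * 90
    degrees (step (a))."""
    a = np.radians(90.0 * k)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return (uv - 0.5) @ rot.T + 0.5

def wind_uv(orig_uv: np.ndarray, world_pos: np.ndarray,
            t: float, strength: float, k: int) -> np.ndarray:
    """Lerp-blend the original UVs toward a rotated copy (step (d)) using
    a time-varying weight built from the vertex's world position (steps
    (b)-(c)); strength comes from the wind attribute information."""
    noise = 0.5 + 0.5 * np.sin(t + float(world_pos.sum()))
    w = np.clip(strength * noise, 0.0, 1.0)
    return (1.0 - w) * orig_uv + w * rotate_uv_about_center(orig_uv, k)
```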
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written implies neither a strict execution order nor any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same technical concept, the embodiments of the present disclosure further provide a vegetation rendering apparatus corresponding to the vegetation rendering method. Since the apparatus solves the problem on a principle similar to that of the vegetation rendering method described above, its implementation may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 9, there is shown a schematic diagram of a vegetation rendering apparatus 500 according to an embodiment of the present disclosure, the apparatus including:
an obtaining module 501, configured to obtain an original vegetation model corresponding to a vegetation object; the original vegetation model comprises a plurality of original polygon patches that form a crown outline of the vegetation;
a processing module 502, configured to enlarge and rotate each of the plurality of original polygon patches to obtain a plurality of target polygon patches;
a determining module 503, configured to obtain a target vegetation model based on the plurality of target polygon patches, where each target polygon patch of the target vegetation model faces a virtual camera, and at least some adjacent target polygon patches intersect;
a rendering module 504, configured to render the target vegetation model to generate the vegetation object.
In one possible implementation, the original polygon patches are quadrilateral patches.
In a possible implementation, the processing module 502 is further configured to:
perform coordinate conversion on the original UV coordinates of the plurality of original polygon patches; and
enlarge and rotate each of the coordinate-converted original polygon patches.
In a possible implementation, the determining module 503 is further configured to:
determine a target view-angle orientation of the virtual camera lens upon detecting a triggering event for adjusting the virtual camera lens;
and the processing module 502 is further configured to:
rotate each of the plurality of target polygon patches according to the target view-angle orientation, so that each target polygon patch faces the virtual camera.
In a possible implementation, the processing module 502 is specifically configured to:
enlarge each original polygon patch; and
rotate the enlarged original polygon patch based on its original orientation angle and an adjustment value to obtain the target polygon patch; the plurality of target polygon patches intersect one another.
In a possible implementation, the obtaining module 501 is further configured to:
acquire wind attribute information in a target scene;
and the processing module 502 is further configured to:
dynamically process the original UV coordinates of the plurality of original polygon patches based on the wind attribute information.
In a possible implementation, the rendering module 504 is specifically configured to:
remove pixels during the rendering of the target vegetation model to generate the vegetation object.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, the embodiments of the present disclosure also provide an electronic device. Referring to fig. 10, a schematic structural diagram of an electronic device 700 provided by the embodiments of the present disclosure includes a processor 701, a memory 702, and a bus 703. The memory 702 stores execution instructions and includes an internal memory 7021 and an external memory 7022; the internal memory 7021 temporarily stores operation data for the processor 701 and data exchanged with the external memory 7022 (such as a hard disk), and the processor 701 exchanges data with the external memory 7022 through the internal memory 7021.
In this embodiment, the memory 702 is specifically configured to store application program codes for executing the scheme of the present application, and is controlled by the processor 701 to execute. That is, when the electronic device 700 is operated, the processor 701 and the memory 702 communicate with each other through the bus 703, so that the processor 701 executes the application program code stored in the memory 702, thereby executing the method described in any of the foregoing embodiments.
The memory 702 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 701 may be an integrated circuit chip with signal processing capability. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present disclosure may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 700. In other embodiments of the present application, the electronic device 700 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The embodiment of the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps of the vegetation rendering method in the above method embodiment are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the vegetation rendering method in the foregoing method embodiments, which may be referred to specifically for the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, it is embodied in a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A vegetation rendering method, comprising:
acquiring an original vegetation model corresponding to a vegetation object; the original vegetation model comprising a plurality of original polygon patches that form a crown outline of the vegetation;
enlarging and rotating each of the plurality of original polygon patches to obtain a plurality of target polygon patches;
obtaining a target vegetation model based on the plurality of target polygon patches, wherein each target polygon patch of the target vegetation model faces a virtual camera, and at least some adjacent target polygon patches intersect;
and rendering the target vegetation model to generate the vegetation object.
2. The method of claim 1, wherein the original polygon patches are quadrilateral patches.
3. The method of claim 1, wherein before the enlarging and rotating of the plurality of original polygon patches, the method further comprises:
performing coordinate conversion on the original UV coordinates of the plurality of original polygon patches;
and wherein the enlarging and rotating of the plurality of original polygon patches comprises:
enlarging and rotating each of the coordinate-converted original polygon patches.
4. The method of any one of claims 1-3, wherein before the rendering of the target vegetation model to generate the vegetation object, the method further comprises:
determining a target view-angle orientation of the virtual camera lens upon detecting a triggering event for adjusting the virtual camera lens;
and rotating each of the plurality of target polygon patches according to the target view-angle orientation, so that each target polygon patch faces the virtual camera.
5. The method of claim 1, wherein the enlarging and rotating of the original polygon patches to obtain the target polygon patches comprises:
enlarging each original polygon patch;
and rotating the enlarged original polygon patch based on an original orientation angle of the original polygon patch and an adjustment value to obtain the target polygon patch; the plurality of target polygon patches intersecting one another.
6. The method of claim 1, wherein before the rendering of the target vegetation model, the method further comprises:
acquiring wind attribute information in a target scene;
and dynamically processing the original UV coordinates of the plurality of original polygon patches based on the wind attribute information.
7. The method of claim 1, wherein the rendering of the target vegetation model to generate the vegetation object comprises:
removing pixels during the rendering of the target vegetation model to generate the vegetation object.
8. A vegetation rendering apparatus, comprising:
an acquisition module, configured to acquire an original vegetation model corresponding to a vegetation object; the original vegetation model comprising a plurality of original polygon patches that form a crown outline of the vegetation;
a processing module, configured to enlarge and rotate each of the plurality of original polygon patches to obtain a plurality of target polygon patches;
a determining module, configured to obtain a target vegetation model based on the plurality of target polygon patches, wherein each target polygon patch of the target vegetation model faces a virtual camera, and at least some adjacent target polygon patches intersect;
and a rendering module, configured to render the target vegetation model to generate the vegetation object.
9. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the vegetation rendering method of any of claims 1-7.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the vegetation rendering method of any one of claims 1-7.
CN202110938397.9A 2021-08-16 2021-08-16 Vegetation rendering method and device, electronic equipment and readable storage medium Active CN113599818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110938397.9A CN113599818B (en) 2021-08-16 2021-08-16 Vegetation rendering method and device, electronic equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN113599818A 2021-11-05
CN113599818B 2023-07-21

Family

ID=78308696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110938397.9A Active CN113599818B (en) 2021-08-16 2021-08-16 Vegetation rendering method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113599818B (en)


Also Published As

Publication number Publication date
CN113599818B (en) 2023-07-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant