CN115375822A - Cloud model rendering method and device, storage medium and electronic device - Google Patents


Info

Publication number: CN115375822A
Authority: CN (China)
Prior art keywords: cloud model, model, information, sampling, vertex
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number: CN202210976751.1A
Other languages: Chinese (zh)
Inventors: 梁普彦, 谢耿
Current Assignee: Netease Hangzhou Network Co Ltd
Original Assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202210976751.1A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/04 - Texture mapping
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 - Finite element generation, e.g. wire-frame surface description, tessellation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The disclosure provides a cloud model rendering method and device, a storage medium, and an electronic device. The method comprises the following steps: acquiring an initial cloud model to be rendered; inserting polygon patches at the model vertices of the initial cloud model to obtain a patch-inserted initial cloud model; sampling a preset transparency map based on the texture coordinates of the polygon patches in the patch-inserted initial cloud model to obtain first sampling information; sampling a preset noise map based on the vertex information of the model vertices to obtain second sampling information; and rendering the patch-inserted initial cloud model based on the first sampling information and the second sampling information to obtain a target cloud model. The method and device address the technical problem in the prior art that the rendering effect of cloud models is poor.

Description

Cloud model rendering method and device, storage medium and electronic device
Technical Field
The present disclosure relates to the field of image rendering, and in particular, to a cloud model rendering method and apparatus, a storage medium, and an electronic apparatus.
Background
At present, a major bottleneck for mobile games is performance, and rendering volumetric clouds under such performance constraints is a pain point for many mobile titles. In the related art, sea-of-clouds scenes are generally avoided in a game's art design; in games with realistic rendering in particular, distant clouds are often represented with pre-made images viewed from afar. Non-realistic games often use various tricks to simulate clouds: for example, a volumetric cloud is drawn as a multi-layer cloud model in which the vertices of each layer are extruded a certain distance along the normal direction and then clipped against a 3D noise texture; the more layers the cloud model has, the more clipping is required.
When such a cloud model is produced, multiple layers must be made and each layer requires complex computation, which increases the production cost of the cloud model. Further, the resulting clouds look relatively hard in rendering and lack the fluffy, feathered edges required by art. Moreover, making a volumetric cloud model requires sampling a 3D noise map with the model's texture coordinates, and the relevant design parameters of a 3D noise map are difficult for artists to control, so the rendering result of the cloud model is random and ill-defined.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
At least some embodiments of the present disclosure provide a cloud model rendering method, apparatus, storage medium, and electronic apparatus, so as to at least solve the technical problem in the prior art that the rendering effect of rendering a cloud model is poor.
According to an embodiment of the present disclosure, a method for rendering a cloud model is provided, including: acquiring an initial cloud model to be rendered; inserting polygon patches at the model vertices of the initial cloud model to obtain a patch-inserted initial cloud model; sampling a preset transparency map based on the texture coordinates of the polygon patches in the patch-inserted initial cloud model to obtain first sampling information; sampling a preset noise map based on the vertex information of the model vertices to obtain second sampling information; and rendering the patch-inserted initial cloud model based on the first sampling information and the second sampling information to obtain a target cloud model.
According to an embodiment of the present disclosure, there is also provided a cloud model rendering apparatus, including: an acquisition module, used to acquire an initial cloud model to be rendered; a patch-insertion module, used to insert polygon patches at the model vertices of the initial cloud model to obtain a patch-inserted initial cloud model; a first sampling module, used to sample a preset transparency map based on the texture coordinates of the polygon patches in the patch-inserted initial cloud model to obtain first sampling information; a second sampling module, used to sample a preset noise map based on the vertex information of the model vertices to obtain second sampling information; and a rendering module, used to render the patch-inserted initial cloud model based on the first sampling information and the second sampling information to obtain a target cloud model.
According to an embodiment of the present disclosure, there is also provided a computer-readable storage medium having a computer program stored therein, where the computer program is configured to execute the above-mentioned rendering method of the cloud model when running.
According to an embodiment of the present disclosure, there is also provided an electronic apparatus, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform the above-mentioned cloud model rendering method.
In at least some embodiments of the present disclosure, a patch-inserted cloud model is rendered as follows: after the initial cloud model to be rendered is obtained, polygon patches are inserted at its model vertices to obtain a patch-inserted initial cloud model, and a preset transparency map is sampled based on the texture coordinates of the polygon patches in the patch-inserted initial cloud model to obtain first sampling information; meanwhile, a preset noise map is sampled based on the vertex information of the model vertices to obtain second sampling information. Finally, the patch-inserted initial cloud model is rendered based on the first sampling information and the second sampling information to obtain a target cloud model.
In this process, rendering the cloud model does not require producing a multi-layer model; the cloud model only needs patch insertion, and the patch-inserted model is rendered. In addition, sampling the transparency map simulates the fluffiness of the cloud layer and improves the rendering effect. Furthermore, the noise map is sampled with the model's vertex information instead of its texture coordinates, which avoids the texture-confusion problem caused by sampling the noise map with texture coordinates in the related art, further improving the rendering effect of the cloud model.
According to the above, the scheme provided by the disclosure achieves the purpose of rendering the cloud model, thereby achieving the technical effect of improving the rendering effect and solving the technical problem in the prior art that the rendering effect of cloud models is poor.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the disclosure and together with the description serve to explain the disclosure and not to limit the disclosure. In the drawings:
fig. 1 is a block diagram of a hardware structure of a mobile terminal of a rendering method of a cloud model according to an embodiment of the present disclosure;
FIG. 2 is a flow diagram of a method of rendering a cloud model according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a patch-inserted initial cloud model according to one embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a predetermined transparency map according to one embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a cloud model before patch insertion according to one embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a cloud model after patch insertion according to one embodiment of the present disclosure;
FIG. 7 is a noise map according to one embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a rendered model according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a rendered model according to one embodiment of the present disclosure;
FIG. 10 is a schematic diagram of a cloud model according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of a cloud model after normal smoothing according to an embodiment of the present disclosure;
FIG. 12 is a schematic diagram of a cloud model according to an embodiment of the present disclosure;
FIG. 13 is a schematic diagram of a cloud model according to an embodiment of the present disclosure;
FIG. 14 is a block diagram of a rendering apparatus of a cloud model according to an embodiment of the present disclosure;
FIG. 15 is a schematic view of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those skilled in the art, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only some embodiments of the present disclosure, not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present disclosure without making creative efforts shall fall within the protection scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with one embodiment of the present disclosure, there is provided an embodiment of a rendering method for a cloud model, it is noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that while a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
The method embodiments may be performed in a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, the mobile terminal may be a terminal device such as a smartphone (e.g., an Android phone, an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, or a game console. Fig. 1 is a block diagram of a hardware structure of a mobile terminal for a rendering method of a cloud model according to an embodiment of the present disclosure. As shown in fig. 1, the mobile terminal may include one or more (only one shown in fig. 1) processors 102 (the processors 102 may include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processing (DSP) chip, a Microcontroller Unit (MCU), a programmable logic device (FPGA), a Neural Processing Unit (NPU), a Tensor Processing Unit (TPU), an Artificial Intelligence (AI) processor, etc.) and a memory 104 for storing data. Optionally, the mobile terminal may further include a transmission device 106, an input/output device 108, and a display device 110 for communication functions. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of an application software, such as a computer program corresponding to the cloud model rendering method in the embodiment of the present disclosure, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, that is, implements the cloud model rendering method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
Inputs to the input/output device 108 may come from a plurality of Human Interface Devices (HIDs), for example: keyboard and mouse, gamepad, or other specialized game controllers (such as a steering wheel, fishing-rod controller, dance mat, or remote control). In addition to input, some human interface devices also provide output, such as force feedback and vibration of a gamepad or audio output of a controller.
The display device 110 may be, for example, a head-up display (HUD), a touch screen type Liquid Crystal Display (LCD), and a touch display (also referred to as a "touch screen" or "touch display screen"). The liquid crystal display may enable a user to interact with a user interface of the mobile terminal. In some embodiments, the mobile terminal has a Graphical User Interface (GUI) with which a user can interact by touching finger contacts and/or gestures on a touch-sensitive surface, where the human interaction functionality optionally includes the following interactions: executable instructions for creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, emailing, call interfacing, playing digital video, playing digital music, and/or web browsing, etc., for performing the above-described human-computer interaction functions, are configured/stored in one or more processor-executable computer program products or readable storage media.
In one embodiment of the present disclosure, the rendering method of the cloud model may be executed in a local terminal device, and a graphical user interface is provided by the terminal device, where the terminal device may be the aforementioned local terminal device, and may also be a client device in a cloud interaction system. Fig. 2 is a flowchart of a rendering method of a cloud model according to an embodiment of the present disclosure, and as shown in fig. 2, the method includes the following steps:
step S202, an initial cloud model to be rendered is obtained.
In step S202, a GPU (Graphics Processing Unit) of the terminal device may obtain the relevant information of the initial cloud model to be rendered and render the initial cloud model. The initial cloud model may be a volumetric cloud model, but one without fluffiness; that is, the initial cloud model is a volumetric cloud model with hard, sharp edges.
It should be noted that, in this embodiment, the rendering of the cloud model is implemented in the GPU, and a CPU (Central Processing Unit) of the terminal device performs only simple data Processing, so that the performance of the rendering system can be improved.
Step S204: insert polygon patches at the model vertices of the initial cloud model to obtain a patch-inserted initial cloud model.
In step S204, the GPU inserts a polygon patch at each model-vertex position of the initial cloud model; in the subsequent rendering process, the polygon patches may be processed with the transparency map to simulate flocculent clouds. In the schematic diagram of the patch-inserted initial cloud model shown in fig. 3, the white part represents the base cloud model and the black blocks represent the polygon patches at the corresponding vertices of the initial cloud model. In step S204, the polygon patch is the same for every model vertex; for example, the patches at all vertices are all triangles, all quadrilaterals, or all pentagons.
It should be noted that the inserted patches are mainly used for sampling the transparency map so as to simulate the cloud's fluffy texture. A quadrilateral patch is sufficient to guarantee this fluffy look, while a patch with more than four sides (for example, a pentagonal patch) increases the vertex count of the whole model, and that increase is multiplied by the total number of cloud-model vertices, bringing unnecessary performance cost. A triangular patch, on the other hand, cannot guarantee the smoothness of the cloud model, so the rendered result is poor. Therefore, in this embodiment the polygon patch is a quadrilateral patch.
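The patch-insertion step above can be sketched as follows. This is a hedged illustration, not the patent's implementation: the function name `insert_quads` and the fixed `right`/`up` axes are assumptions (in practice the axes would typically be camera-space vectors so each quad billboards toward the viewer).

```python
# Hypothetical sketch of patch insertion: each model vertex is expanded
# into one quadrilateral patch (four corners) with patch-local UVs.
# `insert_quads` and `half_size` are illustrative names, not from the patent.

def insert_quads(vertices, right=(1.0, 0.0, 0.0), up=(0.0, 1.0, 0.0), half_size=0.5):
    """Expand every vertex into a quad centered on it.

    Returns (positions, uvs): 4 corner positions and 4 texture
    coordinates per input vertex.  `right`/`up` would normally be
    camera axes so the quad faces the viewer (billboarding).
    """
    positions, uvs = [], []
    corners = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
    for vx, vy, vz in vertices:
        for cx, cy in corners:
            positions.append((
                vx + half_size * (cx * right[0] + cy * up[0]),
                vy + half_size * (cx * right[1] + cy * up[1]),
                vz + half_size * (cx * right[2] + cy * up[2]),
            ))
            # Patch-local UVs in [0, 1], used later to sample the transparency map.
            uvs.append(((cx + 1) / 2, (cy + 1) / 2))
    return positions, uvs
```

With the default axes, two input vertices yield eight corner positions; the UV (0.5, 0.5) point of each quad coincides with the original vertex, matching the "patch origin at the center point" choice described below.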
In addition, it should be noted that a large number of polygon patches need to be generated in this embodiment. If the CPU generated the patches one by one and then transmitted them to the GPU for rendering, the time cost of rendering the cloud model would increase and system performance would drop. To avoid this, the patch insertion at model vertices is implemented with a GPU-instancing solution by calling the native API (Application Programming Interface) of the Unity game engine. In other words, the CPU transmits information related to the model vertices of the initial cloud model (e.g., normal information) to the GPU, and the GPU batch-processes this information to generate and render the cloud model, reducing performance cost and shortening model-generation time.
The GPU-instancing solution described above places the drawing of multiple objects that share the same mesh and the same material into the same Draw Call. Although GPU instancing can only batch objects with identical mesh and material into one Draw Call, each instance may still have different material parameters (e.g., color, range), so the rendering performance and rendering efficiency of the rendering engine are significantly improved.
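The batching idea can be sketched in plain Python; this is a conceptual illustration of grouping by (mesh, material), not the Unity API, and `batch_draw_calls` with its record layout is invented for this sketch.

```python
# Illustrative sketch of instanced batching: draw calls are grouped by
# (mesh, material); per-instance parameters (color, range, ...) stay separate.
from collections import defaultdict

def batch_draw_calls(instances):
    """instances: list of dicts with 'mesh', 'material', 'params'.

    Returns one draw call per (mesh, material) pair, carrying the list
    of per-instance parameters that the shader would read per instance.
    """
    batches = defaultdict(list)
    for inst in instances:
        batches[(inst["mesh"], inst["material"])].append(inst["params"])
    return [{"mesh": m, "material": mat, "instance_params": p}
            for (m, mat), p in batches.items()]
```

Three cloud-quad instances sharing one mesh and material thus collapse into a single draw call, while an object with a different material stays in its own call.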
Step S206: sample a preset transparency map based on the texture coordinates of the polygon patches in the patch-inserted initial cloud model to obtain first sampling information.
Optionally, the preset transparency map may be a map containing at least an opaque part and a transparent part; for example, it may be opaque in the middle and transparent around the edges, as shown in fig. 4. It should be noted that in practical applications the preset transparency map is usually sampled relative to the origin of the polygon patch. When the map is opaque in the middle and transparent around the edges, the user may set the patch origin to the patch's center point; for example, the center point of a quadrilateral patch may be the intersection of its two diagonals. For other transparency maps, the user can adjust the patch origin according to the distribution of opaque and transparent regions in the map.
In addition, in the related art, volumetric clouds tend to look stiff overall and lack a fluffy, feathered quality. To simulate the flocculent look of a cloud layer, this embodiment samples the transparency map and adjusts the transparency of the polygon patches of the patch-inserted initial cloud model according to the sampling result (i.e., the first sampling information). This simulates the cloud's fluffiness, improves the rendering effect, and gives the resulting cloud model a downy appearance.
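A minimal sketch of sampling the transparency map at a patch texture coordinate is shown below, assuming a simple 2D grid of alpha values and standard bilinear filtering; `sample_alpha` is an illustrative name, not the patent's shader code.

```python
# Assumed sketch: bilinearly sample an alpha (transparency) map at (u, v).
# alpha_map is a row-major 2D list of values in [0, 1]; (u, v) are in [0, 1].

def sample_alpha(alpha_map, u, v):
    h, w = len(alpha_map), len(alpha_map[0])
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend the four neighbouring texels.
    top = alpha_map[y0][x0] * (1 - fx) + alpha_map[y0][x1] * fx
    bot = alpha_map[y1][x0] * (1 - fx) + alpha_map[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Applying the sampled alpha to each patch fragment is what softens the quad's hard outline into a feathered edge.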
Step S208: sample the preset noise map based on the vertex information of the model vertices to obtain second sampling information.
It should be noted that in the related art the noise map is sampled with the texture coordinates of the cloud model. However, a cloud model actually in use is not a simple plane, and its texture coordinates contain seams; when the noise map is sampled with them, the cloud model cannot transition naturally between different surfaces in three-dimensional space, and the rendering effect suffers. To avoid this problem, this embodiment samples the noise map with the model vertices of the cloud model, which reduces sampling confusion, greatly reduces visible breaks when model vertices are displaced, and further improves the rendering effect of the cloud model.
Step S210: render the patch-inserted initial cloud model based on the first sampling information and the second sampling information to obtain a target cloud model.
It should be noted that the first sampling information represents the transparency of the cloud model and the second sampling information represents its texture. In other words, this application only performs simple sampling of a preset transparency map and a noise map, and the cloud model can be rendered from the sampling results without complex computation. This avoids the high rendering cost of the multi-layer cloud models required in the related art and improves both the rendering efficiency of the cloud model and the performance of the rendering system.
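One plausible way the two sampling results could be combined per fragment is sketched below. This is an assumption for illustration only (the patent does not give a blending formula); `combine` and the brightness factor are invented names.

```python
# Hypothetical composition step: the transparency sample (first sampling
# information) shapes the patch alpha; the noise sample (second sampling
# information) modulates cloud density and brightness.

def combine(alpha_sample, noise_sample, base_color=(1.0, 1.0, 1.0)):
    density = max(0.0, min(1.0, noise_sample))   # clamp noise to [0, 1]
    alpha = alpha_sample * density               # thin cloud -> more transparent
    # Scale brightness with density so denser regions read as brighter cloud.
    color = tuple(c * (0.6 + 0.4 * density) for c in base_color)
    return color, alpha
```

Where the noise sample is zero the patch vanishes entirely, which is what lets a single layer of quads stand in for the clipped multi-layer shells of the related art.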
From the schemes defined in steps S202 to S210, it can be seen that in at least some embodiments of the present disclosure, after the initial cloud model to be rendered is obtained, polygon patches are inserted at its model vertices to obtain a patch-inserted initial cloud model, and a preset transparency map is sampled based on the texture coordinates of the polygon patches in the patch-inserted initial cloud model to obtain first sampling information; meanwhile, a preset noise map is sampled based on the vertex information of the model vertices to obtain second sampling information. Finally, the patch-inserted initial cloud model is rendered based on the first sampling information and the second sampling information to obtain a target cloud model.
It is easy to notice that rendering the cloud model does not require producing a multi-layer model: the cloud model only needs patch insertion, and the patch-inserted model is rendered. Moreover, no complex computation is needed during rendering; only the corresponding maps are sampled, avoiding the high rendering cost of the multi-layer cloud models in the related art. In addition, sampling the transparency map simulates the fluffiness of the cloud layer and improves the rendering effect. Furthermore, the noise map is sampled with the model's vertex information instead of its texture coordinates, avoiding the texture-confusion problem of the related art and further improving the rendering effect of the cloud model.
According to the above, the scheme provided by the disclosure achieves the purpose of rendering the cloud model, thereby achieving the technical effect of improving the rendering effect and solving the technical problem in the prior art that the rendering effect of cloud models is poor.
In an alternative embodiment, as shown in fig. 1, after acquiring the initial cloud model to be rendered, the GPU of the terminal device performs patch insertion on it. After the model vertices have been processed with polygon patches to obtain the patch-inserted initial cloud model, the terminal device can determine the information corresponding to each polygon patch. Specifically, the terminal device obtains multiple pieces of grouping information, each containing at least the vertex information and the vertex identifier of the corresponding model vertex.
It should be noted that, the GPU of the terminal device obtains the above grouping information from the CPU, and the above vertex information may include, but is not limited to, vertex position information of model vertices, normal information, texture coordinates, a transformation matrix, and the like.
Optionally, the CPU of the terminal device separates and packages the related information (including vertex information and vertex identifiers) of all model vertices and transmits it to the GPU; that is, the related information of all model vertices is grouped by vertex identifier and all grouped information is sent to the GPU. Before sending the grouping information, the CPU can transmit it to the shader through a compute buffer, thereby entering the GPU rendering stage, after which lighting calculation, base-color rendering, vertex animation, and other operations on the cloud model can be implemented in the shader.
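The CPU-side packing described above can be sketched as grouping per-vertex records by their identifier into one buffer that the shader would later index. This is an illustrative model only; `pack_vertex_groups` and the record fields are assumptions, standing in for a GPU compute buffer.

```python
# Sketch of CPU-side packing: per-vertex records are grouped by vertex
# identifier, mimicking the compute buffer the shader indexes per instance.

def pack_vertex_groups(vertex_records):
    """vertex_records: list of dicts with 'id', 'position', 'normal', 'uv'.

    Returns a dict keyed by vertex id; on the GPU this would be a flat
    structured buffer read with the instance ID.
    """
    buffer = {}
    for rec in vertex_records:
        buffer[rec["id"]] = {
            "position": rec["position"],
            "normal": rec["normal"],
            "uv": rec["uv"],
        }
    return buffer
```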
It should be noted that when the GPU instantiates a patch, it generates an instance identifier (Instance ID) to distinguish that patch. Meanwhile, in the shader the GPU can use the Instance ID to read the grouping information of the corresponding model vertex from the compute buffer, thereby passing the vertex information of that model vertex into the corresponding polygon patch. Fig. 5 shows the cloud model before patch insertion and fig. 6 the cloud model after patch insertion; fig. 6 visually indicates that the vertex normals of the cloud model have been passed into the corresponding polygon patches.
Further, as shown in fig. 1, after the patch insertion is performed on the initial cloud model, the GPU of the terminal device may sample the preset transparency map and the preset noise map respectively. The two sampling processes may be performed simultaneously or in a preset order; the specific order is not limited here. The sampling of the preset transparency map has already been described in step S206; the sampling of the preset noise map is described below.
In an optional embodiment, during sampling of the preset noise map, the GPU of the terminal device determines the vertex information of the model vertex corresponding to each polygon patch based on the vertex identifier, and then samples the preset two-dimensional noise map based on that vertex information to obtain the second sampling information.
In the related art, the noise map is sampled based on the texture coordinates of the cloud model. However, a cloud model in actual use is not a simple plane, and its texture coordinates are discontinuous. Sampling the noise map by texture coordinates can therefore make the model appear to fracture, as shown in figs. 7 and 8, where fig. 7 is the noise map and fig. 8 is the rendered model. Each square of the texture map in fig. 7 corresponds to one face of a cube; with discontinuous texture coordinates the sampling of the map is likewise discontinuous, and a visible seam appears between the first face and the second face of the model in fig. 8.
To solve this problem, this embodiment adopts triplanar mapping: the vertex coordinates of the cloud model, rather than its texture coordinates, are used to sample the noise map, which avoids sampling discontinuities and reduces fracturing when vertices are offset. As shown in fig. 9, with the scheme provided by the present disclosure the seam between the two faces of the model is eliminated, and the rendering effect of the model is improved.
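A minimal sketch of triplanar sampling, written in Python as pseudoshader code: the 2D noise map is projected along all three axes and the three samples are blended by the absolute components of the surface normal, so no texture-coordinate seam is ever crossed. The `ripple` function is a stand-in for the preset grayscale noise map, not something specified by the source:

```python
import math

def triplanar_sample(noise2d, position, normal, sharpness=4.0):
    """Triplanar sampling: blend three planar projections of a 2D map,
    weighted by the absolute components of the surface normal."""
    wx, wy, wz = (abs(c) ** sharpness for c in normal)
    total = wx + wy + wz
    wx, wy, wz = wx / total, wy / total, wz / total
    x, y, z = position
    # Project along X, Y and Z: sample on the YZ, XZ and XY planes.
    return wx * noise2d(y, z) + wy * noise2d(x, z) + wz * noise2d(x, y)

# Placeholder noise; any 2D grayscale noise function would do here.
def ripple(u, v):
    return 0.5 + 0.5 * math.sin(6.0 * u) * math.cos(6.0 * v)
```

Raising the weights to a power (`sharpness`) narrows the blend regions between projections; for a face whose normal points straight along one axis, the result reduces to a single planar sample.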
In addition, it should be noted that in this embodiment the preset noise map is a grayscale noise map, and optionally a two-dimensional noise map.
In an optional embodiment, after the first sampling information and the second sampling information are obtained, the GPU of the terminal device renders the initial cloud model after patch insertion based on both, obtaining the target cloud model. Specifically, the GPU blurs the edges of the initial cloud model after patch insertion based on the first sampling information, and animates it based on the second sampling information, to obtain the target cloud model.
That is, in the present disclosure, the edges of the cloud model can be blurred based on the first sampling information, so that the rendered cloud model vividly simulates a flocculent cloud; and the cloud model can be animated based on the second sampling information, so that the rendered cloud model vividly simulates the flow of a cloud layer, improving the rendering effect of the cloud model.
When blurring the edges of the initial cloud model after patch insertion based on the first sampling information, the GPU of the terminal device determines the transparency information corresponding to each polygon patch, and blurs the edges based on that transparency information together with the first sampling information.
Optionally, the user may set a transparency for each polygon patch; the patches are then rendered using the first sampling information together with the per-patch transparency, which softens the edges of the cloud model and gives the rendered cloud a flocculent appearance.
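This edge softening can be sketched as multiplying the alpha read from the transparency map (the first sampling information) by the artist-set per-patch transparency; the smoothstep falloff below is an added assumption to show how low-alpha texels can be feathered fully clear, not a detail given in the source:

```python
def smoothstep(edge0, edge1, x):
    """Standard Hermite smoothstep, clamped to [0, 1]."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def patch_alpha(sampled_alpha, patch_transparency, fade_start=0.1, fade_end=0.4):
    """Combine the transparency-map sample with the per-patch
    transparency, fading low-alpha texels smoothly toward zero."""
    feather = smoothstep(fade_start, fade_end, sampled_alpha)
    return sampled_alpha * patch_transparency * feather
```

With alpha blending enabled, texels below `fade_start` vanish entirely, which is what keeps the patch silhouette soft rather than hard-edged.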
When animating the initial cloud model after patch insertion based on the second sampling information, the GPU of the terminal device offsets the texture coordinates of the model to obtain an offset initial cloud model, and renders the offset model based on the second sampling information to obtain a dynamic cloud model.
It should be noted that sampling the preset noise map with the vertex information of the model vertices moves the cloud model as a whole, but lacks the detail of flow inside the cloud layer. In this embodiment, the Flow Map technique is therefore used to add a sense of flow inside the cloud layer.
A Flow Map is a texture that records two-dimensional vector information: the color at each point (usually the RG channels) records the direction of a vector field there, so that the corresponding point on the model exhibits directional flow. Typically, the flow effect is simulated by offsetting texture coordinates in the shader and then sampling a preset texture map.
Optionally, the cloud model may be dynamically implemented by the following steps:
Step S1, sampling the Flow Map to obtain vector field information;
Step S2, determining, based on the vector field information, the change in texture coordinates to be used when sampling the preset texture map, where the change may be the phase difference between two samplings of the texture coordinates;
Step S3, sampling the same map twice based on the change in texture coordinates (for example, with a phase difference of half a period) and linearly interpolating between the two samples, so that the map flows continuously and the flow effect of the cloud model is achieved.
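Steps S1 to S3 amount to the standard two-phase Flow Map trick: sample the map twice, half a period apart, and cross-fade so that each sample is fully faded out at the instant its distortion snaps back to zero. A sketch, assuming `tex` is a 2D sampling function and `flow_dir` is the vector read from the Flow Map at this point:

```python
def flow_sample(tex, uv, flow_dir, time, period=1.0):
    """Two samples offset by half a period; the blend weight ping-pongs
    so each sample has zero weight exactly when its phase resets."""
    phase0 = (time / period) % 1.0
    phase1 = (phase0 + 0.5) % 1.0
    u, v = uv
    fu, fv = flow_dir
    s0 = tex(u + fu * phase0, v + fv * phase0)
    s1 = tex(u + fu * phase1, v + fv * phase1)
    w = abs(2.0 * phase0 - 1.0)  # 1 when phase0 resets, 0 mid-cycle
    return s0 * (1.0 - w) + s1 * w
```

At `phase0 == 0` the first sample has just reset, so its weight `1 - w` is zero and the pop is invisible; the roles swap half a period later, which is what makes the flow appear continuous.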
It should be noted that in the schematic diagram of the cloud model shown in fig. 10, some model vertices have two normals. In this case, if such a vertex is offset along the normal direction it may move in either of two different directions, making the surface of the cloud model discontinuous and possibly breaking the model. Therefore, before vertex offsetting is performed on the cloud model in the shader, the normals of the cloud model must also be smoothed.
Specifically, the GPU of the terminal device obtains, from the vertex information, the normal information of the model vertices of the initial cloud model after patch insertion, and smooths the normals based on this information. Optionally, the GPU averages the multiple normals of a single model vertex so that the vertex faces only one direction, yielding a single target normal, which becomes the normal of that vertex. As shown in fig. 11, after smoothing each model vertex has only one normal.
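The averaging step can be sketched as follows; merging coincident vertices by rounded position is an implementation assumption, since the source only states that multiple normals of the same vertex are averaged into one:

```python
import math
from collections import defaultdict

def smooth_normals(positions, normals):
    """Average every normal that shares a vertex position, so each
    position ends up with a single direction to offset along."""
    acc = defaultdict(lambda: [0.0, 0.0, 0.0])
    for p, n in zip(positions, normals):
        key = tuple(round(c, 5) for c in p)  # merge coincident vertices
        for i in range(3):
            acc[key][i] += n[i]
    smoothed = []
    for p in positions:
        s = acc[tuple(round(c, 5) for c in p)]
        length = math.sqrt(sum(c * c for c in s)) or 1.0
        smoothed.append(tuple(c / length for c in s))
    return smoothed
```

After this pass, offsetting each vertex along its (single) smoothed normal moves coincident vertices together, so the surface stays closed instead of splitting.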
In addition, before rendering the initial cloud model after patch insertion based on the first sampling information and the second sampling information to obtain the target cloud model, the GPU of the terminal device performs a lighting calculation on the cloud model.
Specifically, the GPU of the terminal device determines the model position of the initial cloud model after patch insertion in the target scene, determines the light source position of the virtual light source based on the model position, and finally performs the lighting calculation on the model based on the light source position.
Optionally, in this embodiment, the classical Blinn-Phong illumination model may be used to light the cloud model. In addition, to simulate the translucency of a cloud layer, especially when backlit, a small portion of the light should appear to pass through the cloud's silhouette, with light leaking around the contour of the cloud model as shown in fig. 12. In this embodiment, the light source position of the virtual light source is set behind the initial cloud model after patch insertion, so that when the model is backlit the light source illuminates it from behind, as shown in the cloud model of fig. 13.
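A minimal sketch of the Blinn-Phong model named above; the coefficient values (`ambient`, `diffuse_k`, `specular_k`, `shininess`) are placeholders, not values from the source:

```python
import math

def _normalize(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong(normal, to_light, to_view, shininess=32.0,
                ambient=0.1, diffuse_k=0.7, specular_k=0.2):
    """Classic Blinn-Phong: ambient + Lambert diffuse + a specular term
    driven by the half vector between light and view directions."""
    n, l, v = _normalize(normal), _normalize(to_light), _normalize(to_view)
    diffuse = max(_dot(n, l), 0.0)
    h = _normalize(tuple(a + b for a, b in zip(l, v)))
    specular = max(_dot(n, h), 0.0) ** shininess
    return ambient + diffuse_k * diffuse + specular_k * specular
```

Placing the virtual light behind the model, as the embodiment does, means `to_light` points away from the viewer for silhouette fragments, which is what produces the rim of light around the backlit contour.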
As can be seen from the above, the present disclosure inserts a polygon patch at each vertex position of the cloud model and processes the patches with an artist-designed transparency map to simulate flocculent clouds. The data of the corresponding cloud model vertex is then passed into each patch for the lighting calculation. In addition, to simulate the flow of the cloud layer, the scheme provided by the disclosure also offsets the vertex positions and uses the Flow Map technique to enrich the flow detail, adding flocculent clouds to the original result. Throughout this process the artist retains maximal control over the rendering effect, and a natural flowing effect of the sea of clouds is ensured.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method according to the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, and an optical disk), and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, or a network device) to perform the methods according to the embodiments of the present disclosure.
In this embodiment, a rendering apparatus of a cloud model is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, and details are not repeated for what has been described. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware or a combination of software and hardware is also possible and contemplated.
Fig. 14 is a block diagram of a structure of a rendering apparatus of a cloud model according to an embodiment of the present disclosure, and as shown in fig. 14, the apparatus includes: an obtaining module 1401, an insert module 1403, a first sampling module 1405, a second sampling module 1407, and a rendering module 1409.
The obtaining module 1401 is configured to obtain an initial cloud model to be rendered; the insert module 1403 is configured to perform insert processing on the model vertices of the initial cloud model based on polygonal patches to obtain the initial cloud model after insertion; the first sampling module 1405 is configured to sample a preset transparency map based on the texture coordinates of the polygonal patches in the initial cloud model after insertion to obtain first sampling information; the second sampling module 1407 is configured to sample a preset noise map based on the vertex information of the model vertices to obtain second sampling information; and the rendering module 1409 is configured to render the initial cloud model after insertion based on the first sampling information and the second sampling information to obtain a target cloud model.
Optionally, the rendering apparatus of the cloud model further includes: a first obtaining module, configured to obtain a plurality of pieces of grouping information after the insert processing is performed on the model vertices of the initial cloud model based on the polygonal patches to obtain the initial cloud model after insertion, where each piece of grouping information includes at least the vertex information and the vertex identifier of a corresponding model vertex.
Optionally, the second sampling module includes: the device comprises a first determining module and a third sampling module. The first determining module is used for determining vertex information of a model vertex corresponding to the polygon patch based on the vertex identification; and the third sampling module is used for sampling the preset two-dimensional noise map based on the vertex information of the model vertex to obtain second sampling information.
Optionally, the rendering module includes: a processing module, configured to blur the edges of the initial cloud model after insertion based on the first sampling information, and to dynamically process the initial cloud model after insertion based on the second sampling information, to obtain the target cloud model.
Optionally, the processing module includes: a second determining module and an edge blurring module. The second determining module is used for determining the transparency information corresponding to the polygon patch; and the edge blurring module is used for blurring the edges of the initial cloud model after the insertion based on the transparency information and the first sampling information.
Optionally, the processing module includes: a coordinate offset module and a first rendering module. The coordinate offset module is used for offsetting texture coordinates of the initial cloud model after the sheet insertion to obtain an offset initial cloud model; and the first rendering module is used for rendering the shifted initial cloud model based on the second sampling information to obtain a dynamic cloud model.
Optionally, the rendering apparatus of the cloud model further includes: a second obtaining module and a smoothing module. The second obtaining module is used for obtaining normal line information of a model vertex corresponding to the initial cloud model after the insertion from the vertex information before the texture coordinates of the initial cloud model after the insertion are subjected to offset processing to obtain the initial cloud model after the offset; and the smoothing module is used for smoothing the normal of the initial cloud model after the sheet insertion based on the normal information.
Optionally, the rendering apparatus of the cloud model further includes: the device comprises a third determining module, a fourth determining module and an illumination calculating module. The third determining module is used for determining the model position of the inserted initial cloud model in the target scene before the inserted initial cloud model is rendered based on the first sampling information and the second sampling information to obtain the target cloud model; a fourth determination module for determining a light source position of the virtual light source based on the model position; and the illumination calculation module is used for performing illumination calculation on the initial cloud model after the insertion based on the light source position.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present disclosure also provide a computer-readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps in the above-mentioned method embodiments when executed.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing a computer program, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Optionally, in this embodiment, the nonvolatile storage medium may be located in any one of computer terminals in a computer terminal group in a computer network, or in any one of mobile terminals in a mobile terminal group.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a computer-readable storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present application, a computer-readable storage medium has stored thereon a program product capable of implementing the above-described method of the present embodiment. In some possible implementations, various aspects of the embodiments of the present disclosure may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps according to various exemplary implementations of the present disclosure described in the above section "exemplary method" of this embodiment, when the program product is run on the terminal device.
According to the program product for implementing the above method of the embodiments of the present disclosure, it may employ a portable compact disc read only memory (CD-ROM) and include program codes, and may be run on a terminal device, such as a personal computer. However, the program product of the disclosed embodiments is not limited in this respect, and in the disclosed embodiments, the computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product described above may employ any combination of one or more computer-readable media. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the program code embodied on the computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Embodiments of the present disclosure also provide an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Fig. 15 is a schematic diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 15, the electronic device 1500 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 15, the electronic apparatus 1500 is embodied in the form of a general purpose computing device. The components of the electronic device 1500 may include, but are not limited to: at least one processor 1510, at least one memory 1520, a bus 1530 connecting the various system components (including the memory 1520 and the processor 1510), and a display 1540.
Wherein the above-mentioned memory 1520 stores program code that can be executed by the processor 1510 to cause the processor 1510 to perform the steps according to various exemplary embodiments of the present disclosure described in the above-mentioned method parts of the embodiments of the present application.
The memory 1520 may include readable media in the form of volatile memory, such as a random access memory (RAM) 15201 and/or a cache 15202; it may further include a read-only memory (ROM) 15203, as well as non-volatile memory such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
In some examples, memory 1520 may also include a program/utility 15204 having a set (at least one) of program modules 15205, such program modules 15205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment. The memory 1520 may further include memory remotely located from the processor 1510, which may be connected to the electronic device 1500 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The bus 1530 may be any one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, or a local bus using any of a variety of bus architectures.
Display 1540 may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of electronic device 1500.
Optionally, the electronic apparatus 1500 may also communicate with one or more external devices 1600 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic apparatus 1500, and/or with any devices (e.g., router, modem, etc.) that enable the electronic apparatus 1500 to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interfaces 1550. Also, the electronic device 1500 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 1560. As shown in fig. 15, the network adapter 1560 communicates with the other modules of the electronic device 1500 over a bus 1530. It should be appreciated that although not shown in FIG. 15, other hardware and/or software modules may be used in conjunction with the electronic device 1500, which may include but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The electronic apparatus 1500 may further include: a keyboard, a cursor control device (e.g., a mouse), an input/output interface (I/O interface), a network interface, a power source, and/or a camera.
It will be understood by those skilled in the art that the structure shown in fig. 15 is only an illustration and is not intended to limit the structure of the electronic device. For example, electronic device 1500 may also include more or fewer components than shown in FIG. 15, or have a different configuration than shown in FIG. 15. The memory 1520 may be used for storing computer programs and corresponding data, such as computer programs and corresponding data corresponding to the rendering method of the cloud model in the embodiments of the present disclosure. The processor 1510 executes various functional applications and data processing, i.e., implements the above-described rendering method of the cloud model, by running computer programs stored in the memory 1520.
The above-mentioned serial numbers of the embodiments of the present disclosure are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present disclosure, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described in detail in a certain embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or may not be executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present disclosure, and it should be noted that modifications and embellishments could be made by those skilled in the art without departing from the principle of the present disclosure, and these should also be considered as the protection scope of the present disclosure.

Claims (11)

1. A rendering method of a cloud model is characterized by comprising the following steps:
acquiring an initial cloud model to be rendered;
performing insert processing on the model vertex of the initial cloud model based on the polygonal surface patch to obtain an inserted initial cloud model;
sampling a preset transparent chartlet based on texture coordinates of the polygonal surface patch in the initial cloud model after the sheet insertion to obtain first sampling information;
sampling a preset noise map based on the vertex information of the model vertex to obtain second sampling information;
and rendering the initial cloud model after the insertion based on the first sampling information and the second sampling information to obtain a target cloud model.
2. The method of claim 1, wherein after performing insert processing on the model vertex of the initial cloud model based on the polygonal surface patch to obtain the inserted initial cloud model, the method further comprises:
acquiring a plurality of grouping information, wherein each grouping information at least comprises vertex information of a corresponding model vertex and a vertex identification of the corresponding model vertex.
3. The method of claim 2, wherein sampling the preset noise map based on vertex information of the model vertices to obtain second sampling information comprises:
determining vertex information of model vertices corresponding to the polygon patches based on the vertex identifications;
and sampling a preset two-dimensional noise map based on the vertex information of the model vertex to obtain second sampling information.
4. The method of claim 1, wherein rendering the post-patch initial cloud model based on the first sampling information and the second sampling information to obtain a target cloud model comprises:
and blurring the edge of the initial cloud model after the insertion based on the first sampling information, and dynamically processing the initial cloud model after the insertion based on the second sampling information, to obtain the target cloud model.
5. The method of claim 4, wherein blurring the edges of the post-patched initial cloud model based on the first sampling information comprises:
determining transparency information corresponding to the polygon facet;
and blurring the edge of the initial cloud model after the insertion sheet based on the transparency information and the first sampling information.
6. The method of claim 4, wherein dynamically processing the post-patch initial cloud model based on the second sampling information comprises:
shifting the texture coordinates of the initial cloud model after the sheet insertion to obtain a shifted initial cloud model;
rendering the initial cloud model after the deviation based on the second sampling information to obtain a dynamic cloud model.
7. The method of claim 6, wherein before the shifting the texture coordinates of the post-patched initial cloud model to obtain the shifted initial cloud model, the method further comprises:
obtaining normal line information of a model vertex corresponding to the initial cloud model after the sheet insertion from the vertex information;
and smoothing the normal of the initial cloud model after the insertion based on the normal information.
8. The method of claim 1, wherein before rendering the post-patched initial cloud model based on the first and second sampling information to obtain a target cloud model, the method further comprises:
determining the model position of the initial cloud model after the insertion in a target scene;
determining a light source position of a virtual light source based on the model position;
and carrying out illumination calculation on the initial cloud model after the insertion based on the light source position.
9. An apparatus for rendering a cloud model, comprising:
the acquisition module is used for acquiring an initial cloud model to be rendered;
the insert module is used for carrying out insert processing on the model vertex of the initial cloud model based on the polygonal patch to obtain the inserted initial cloud model;
the first sampling module is used for sampling a preset transparent mapping based on texture coordinates of the polygonal patch in the initial cloud model after the patch insertion to obtain first sampling information;
the second sampling module is used for sampling a preset noise map based on the vertex information of the model vertex to obtain second sampling information;
and the rendering module is used for rendering the initial cloud model after the insertion based on the first sampling information and the second sampling information to obtain a target cloud model.
10. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to, when executed by a processor, perform a method for rendering a cloud model as claimed in any one of claims 1 to 8.
11. An electronic apparatus comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the method of rendering a cloud model according to any one of claims 1 to 8.
CN202210976751.1A 2022-08-15 2022-08-15 Cloud model rendering method and device, storage medium and electronic device Pending CN115375822A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210976751.1A CN115375822A (en) 2022-08-15 2022-08-15 Cloud model rendering method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN115375822A true CN115375822A (en) 2022-11-22

Family

ID=84065873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210976751.1A Pending CN115375822A (en) 2022-08-15 2022-08-15 Cloud model rendering method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN115375822A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115712580A (en) * 2022-11-25 2023-02-24 格兰菲智能科技有限公司 Memory address allocation method and device, computer equipment and storage medium
CN115712580B (en) * 2022-11-25 2024-01-30 格兰菲智能科技有限公司 Memory address allocation method, memory address allocation device, computer equipment and storage medium
CN117036570A (en) * 2023-05-06 2023-11-10 沛岱(宁波)汽车技术有限公司 Automatic generation method and system for 3D point cloud model mapping
CN117036570B (en) * 2023-05-06 2024-04-09 沛岱(宁波)汽车技术有限公司 Automatic generation method and system for 3D point cloud model mapping

Similar Documents

Publication Publication Date Title
CN111145326B (en) Processing method of three-dimensional virtual cloud model, storage medium, processor and electronic device
CN115375822A (en) Cloud model rendering method and device, storage medium and electronic device
CN105283900A (en) Scheme for compressing vertex shader output parameters
US20230405452A1 (en) Method for controlling game display, non-transitory computer-readable storage medium and electronic device
CN109102560A (en) Threedimensional model rendering method and device
US9483873B2 (en) Easy selection threshold
CN115738249A (en) Method and device for displaying three-dimensional model of game role and electronic device
CN115713586A (en) Method and device for generating fragmentation animation and storage medium
CN115131489A (en) Cloud layer rendering method and device, storage medium and electronic device
CN114816457A (en) Method, device, storage medium and electronic device for cloning virtual model
CN114299203A (en) Processing method and device of virtual model
CN114283230A (en) Vegetation model rendering method and device, readable storage medium and electronic device
CN114742970A (en) Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device
CN115375797A (en) Layer processing method and device, storage medium and electronic device
CN113706675A (en) Mirror image processing method, mirror image processing device, storage medium and electronic device
CN116889723A (en) Picture generation method and device of virtual scene, storage medium and electronic device
CN115089964A (en) Method and device for rendering virtual fog model, storage medium and electronic device
CN113599818B (en) Vegetation rendering method and device, electronic equipment and readable storage medium
WO2024093609A1 (en) Superimposed light occlusion rendering method and apparatus, and related product
CN114299211A (en) Information processing method, information processing apparatus, readable storage medium, and electronic apparatus
CN116310039A (en) Model rendering method and device and electronic device
CN116630509A (en) Image processing method, image processing apparatus, computer-readable storage medium, and electronic apparatus
CN116452704A (en) Method and device for generating lens halation special effect, storage medium and electronic device
CN115120972A (en) Target fluid rendering method and device, storage medium and electronic device
CN114299207A (en) Virtual object rendering method and device, readable storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination