CN116402940A - Method, device, equipment and storage medium for generating virtual cloud in game

Info

Publication number: CN116402940A
Application number: CN202310177233.8A
Authority: CN (China)
Prior art keywords: cloud, map, illumination, target, volume
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventor: 李浩瑄
Current Assignee: Netease Hangzhou Network Co Ltd (the listed assignees may be inaccurate)
Original Assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202310177233.8A
Publication of CN116402940A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/04: Texture mapping
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/50: Lighting effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The invention provides a method, a device, equipment and a storage medium for generating a virtual cloud in a game. The method comprises the following steps: generating a volume cloud corresponding to a specified three-dimensional model based on the shape of the model, the specified three-dimensional model being a basic model corresponding to a target shape in the game; trimming the volume cloud and performing map rendering to obtain an initial illumination map and a shape map; performing directional light vector processing and texture coordinate offset processing on the initial illumination map through a shader to obtain a target illumination map; and rendering based on the target illumination map and the shape map, and displaying the rendered volume cloud through a target orientation patch to generate the target virtual cloud. Through the trimming of the volume cloud, the directional light vector processing and the texture coordinate offset processing, the method dynamically adjusts the illumination performance and the dynamic effect of the virtual cloud in real time, and an acceptable visual effect can be achieved at low performance cost.

Description

Method, device, equipment and storage medium for generating virtual cloud in game
Technical Field
The present invention relates to the field of computer graphics, and in particular, to a method, an apparatus, a device, and a storage medium for generating a virtual cloud in a game.
Background
In a game scenario, clouds in the sky not only increase player immersion but also convey various information to the player, such as weather, day-night changes, or the imminent appearance of special characters.
Currently, methods for volume-cloud rendering (generating a virtual cloud in a game) generally use a noise map to perturb a model, interleave a plurality of models on one surface to form the cloud shape, and sample a three-dimensional texture by ray marching to render the volume cloud.
However, such methods are unsuitable for devices with lower performance, such as mobile phones: they struggle to achieve a stylized effect and consume a large amount of model resources. Existing methods for generating virtual clouds in games therefore suffer from high performance consumption and poor visual effect.
Disclosure of Invention
Accordingly, the present invention is directed to a method, apparatus, device, and storage medium for generating a virtual cloud in a game, which dynamically adjust the illumination performance and dynamic effect of the virtual cloud in real time through trimming of the volume cloud, directional light vector processing, and texture coordinate offset processing, so that an acceptable visual effect can be achieved at low performance cost.
In a first aspect, an embodiment of the present invention provides a method for generating a virtual cloud in a game, where the method for generating a virtual cloud in a game includes:
generating a volume cloud corresponding to the three-dimensional model based on the shape of the specified three-dimensional model; the specified three-dimensional model is a basic model corresponding to the target shape in the game;
trimming the volume cloud, and performing map rendering to obtain an initial illumination map and a shape map;
performing directional light vector processing and texture coordinate offset processing on the initial illumination map through a shader to obtain a target illumination map;
rendering based on the target illumination map and the shape map, and displaying the rendered volume cloud through a target orientation patch to generate a target virtual cloud.
In a second aspect, an embodiment of the present invention provides a device for generating a virtual cloud in a game, where the device for generating a virtual cloud in a game includes:
a generation module for generating a volume cloud corresponding to a specified three-dimensional model based on the shape of the model; the specified three-dimensional model is a basic model corresponding to the target shape in the game;
The trimming module is used for trimming the volume cloud and performing map rendering to obtain an initial illumination map and a shape map;
the offset module is used for carrying out directional light vector processing and texture coordinate offset processing on the initial illumination map through a shader to obtain a target illumination map;
and the display module is used for rendering based on the target illumination map and the shape map, and displaying the rendered volume cloud through a target orientation patch to generate a target virtual cloud.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory, where the memory stores machine executable instructions executable by the processor, and the processor executes the machine executable instructions to implement the method for generating a virtual cloud in a game.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement a method for generating virtual clouds in a game as described above.
The embodiment of the invention has the following beneficial effects:
The method, device, equipment and storage medium for generating a virtual cloud in a game provided by the embodiments of the invention generate a volume cloud corresponding to a specified three-dimensional model based on the shape of the model, the specified three-dimensional model being a basic model corresponding to the target shape in the game; trim the volume cloud and perform map rendering to obtain an initial illumination map and a shape map; perform directional light vector processing and texture coordinate offset processing on the initial illumination map through a shader to obtain a target illumination map; and render based on the target illumination map and the shape map, displaying the rendered volume cloud through a target orientation patch to generate a target virtual cloud. Through the trimming of the volume cloud, the directional light vector processing and the texture coordinate offset processing, the method dynamically adjusts the illumination performance and dynamic effect of the virtual cloud in real time, and an acceptable visual effect can be achieved at low performance cost.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are some embodiments of the invention and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an embodiment of a method for generating virtual clouds in a game according to an embodiment of the present invention;
fig. 2 is a schematic diagram of another embodiment of a method for generating virtual clouds in a game according to an embodiment of the present invention;
FIG. 3 is a schematic representation of one embodiment of a generated volumetric cloud provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of an embodiment of an effect diagram of generating small cloud wisps along the edges of a three-dimensional model according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of one embodiment of multi-directional illumination rendering provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of one embodiment of a rendered illumination map provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of an embodiment of an effect diagram for controlling cloud shape through a color mask according to an embodiment of the present invention;
fig. 8 is a schematic diagram of an embodiment of an effect diagram of cloud transparency control according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of an embodiment of an effect diagram of a transparency-adjusted volume cloud according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of one embodiment of an effect graph of colors or self-shadows that may be added through the attribpaint node provided by an embodiment of the present invention;
FIG. 11 is a schematic view of an embodiment of illumination provided by an embodiment of the present invention;
FIG. 12 is a diagram of one embodiment of the dot product of the reverse illumination vector and the camera vector provided by an embodiment of the present invention;
FIG. 13 is a graph showing one embodiment of the results of color modulation by a 24-hour color curve provided by an embodiment of the present invention;
FIG. 14 is a schematic view of an embodiment of an enhanced illumination map provided by an embodiment of the present invention;
FIG. 15 is a schematic diagram of an embodiment of the performance effect of offsetting UV with a noise map according to an embodiment of the present invention;
FIG. 16 is a schematic diagram of an embodiment of an effect diagram of a cloud edge continuously drifting and a cloud interior shadow continuously flowing according to an embodiment of the present invention;
FIG. 17 is a schematic diagram of an embodiment of a target orientation patch display provided by an embodiment of the present invention;
fig. 18 is a schematic diagram of a device for generating virtual clouds in a game according to an embodiment of the present invention;
fig. 19 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In a game scenario, clouds in the sky not only increase player immersion but also convey various information to the player, such as weather, day-night changes, or the imminent appearance of special characters. For example, a cloud shaped like a skeleton head can tell the player that the islands in that direction hold a monster BOSS and rewards, guiding the player to go there and fight. There are many methods for producing a cloud with such a special shape; the method with the best visual result at present is ray stepping (RayMarch), but its performance consumption is too large, so similar methods cannot be used on devices with lower performance, such as the mobile phone end, and realistic volume illumination and dynamic effects are difficult to achieve there. Other methods build the cloud shape from basic models or insert a plurality of patch models to approximate a cloud, but these methods are severely limited, hard to modify later, perform poorly, and cause high overdraw when used improperly. In summary, existing methods for generating virtual clouds in games struggle to achieve a stylized effect and consume a large amount of model resources, so they suffer from high performance consumption and poor visual effect.
Based on the above, the embodiment of the invention provides a method, a device, equipment and a storage medium for generating virtual clouds in a game. The method is mainly applied to games.
The method for generating virtual clouds in the game in one embodiment of the invention can be operated on a terminal device or a server. The terminal device may be a local terminal device. When the method for generating the virtual cloud in the game runs on the server, the method can be realized and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an alternative embodiment, various cloud applications may be run on the cloud interaction system, for example, cloud games. A cloud game is a game mode based on cloud computing. In the running mode of a cloud game, the subject that runs the game program is separated from the subject that presents the game picture: storage and execution of the information interaction method are completed on the cloud game server, while the client device receives and sends data and presents the game picture. For example, the client device may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer or a palm computer, while the terminal device that performs the information processing is the cloud game server. When playing, the player operates the client device to send an operation instruction to the cloud game server; the cloud game server runs the game according to the instruction, encodes and compresses data such as game pictures, and returns them to the client device through the network; finally, the client device decodes the data and outputs the game picture.
In an alternative embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores the game program and presents the game screen. The local terminal device interacts with the player through a graphical user interface; that is, the game program is conventionally downloaded, installed and run on the electronic device. The local terminal device may provide the graphical user interface to the player in various ways; for example, it may be rendered on a display screen of the terminal or provided to the player by holographic projection. For example, the local terminal device may include a display screen for presenting the graphical user interface, which includes the game visuals, and a processor for running the game, generating the graphical user interface, and controlling display of the graphical user interface on the display screen.
For ease of understanding, a specific flow of an embodiment of the present invention is described below. Referring to fig. 1, one embodiment of a method for generating a virtual cloud in a game in the embodiment of the present invention includes the following steps:
Step 101, generating a volume cloud corresponding to the three-dimensional model based on the shape of the specified three-dimensional model; the specified three-dimensional model is a basic model corresponding to the target shape in the game.
The specified three-dimensional model may be a prefabricated, reusable basic model corresponding to a target shape in the game, rather than a model created specifically for generating a cloud of a predetermined shape; for example, it may be a cloud model or a model corresponding to the basic shape of a virtual object. The volume cloud is a VDB volume cloud whose basic shape follows the shape of the three-dimensional model. The three-dimensional model may be converted into the volume cloud by invoking a preset volume cloud conversion tool, that is, a platform or application for converting a three-dimensional model into a volume cloud, for example, dedicated cloud-conversion software such as Blender or Houdini, which is not limited herein.
Step 102, trimming the volume cloud and performing map rendering to obtain an initial illumination map and a shape map.
After the three-dimensional model is converted into a VDB volume cloud (i.e., the volume cloud), preset parameters of the converted cloud are modified to implement a custom modification, i.e., trimming; by way of example and not limitation, the preset parameters may be volume, color, shape, and texture details. The modification may be performed with the volume cloud conversion tool itself, which has custom operation functions; or a color mask may be drawn with a preset attribpaint node to control the transparency and the amount of edge detail of the volume cloud; or the modification may be performed with another platform, game engine, or other tool, which is not limited herein.
It should be noted that the timing logic of the trimming of the preset parameters is not limited herein; that is, the trimming operations may be performed in parallel or according to a preset order. For example, one order is: acquire the tracing line position based on the model shape, and generate points at random positions through the tracing line position; perform cloud-wisp generation on the points at random positions to obtain point cloud wisps, and combine the point cloud wisps and the main cloud to obtain the volume cloud; create a color mask for the volume cloud, and map the volume cloud through the color mask to obtain a shape-adjusted volume cloud; regulate the transparency of the shape-adjusted volume cloud to obtain a transparency-adjusted volume cloud; and add color to the transparency-adjusted volume cloud to obtain the adjusted volume cloud. Another order is: regulate the transparency of the volume cloud to obtain a transparency-adjusted volume cloud; create a color mask for the transparency-adjusted volume cloud, and map it through the color mask to obtain a shape-adjusted volume cloud; add color to the shape-adjusted volume cloud to obtain a color-added volume cloud; create the texture of the cloud according to the three-dimensional model to obtain the main cloud; acquire the tracing line position of the three-dimensional model corresponding to the color-added volume cloud, and generate points at random positions through the tracing line position; perform cloud-wisp generation on the points at random positions to obtain point cloud wisps; and combine the point cloud wisps and the main cloud to obtain the adjusted overall volume cloud.
After the volume cloud is trimmed (i.e., the adjusted volume cloud is obtained), illumination map rendering, transparency map rendering and self-luminous map rendering are performed on it from multiple directions to obtain illumination maps, transparency maps and self-luminous maps in multiple directions, which are then spliced or merged to obtain the initial illumination map and the shape map.
Step 103, performing directional light vector processing and texture coordinate offset processing on the initial illumination map through a shader to obtain the target illumination map.
Since the cloud needs to change its color and shadow angle with the 24-hour time and the angle of the sun (the directional light) in the game scene, the initial illumination map is controlled by the vector of the directional light in the scene, so that the per-direction illumination maps are blended with one another, and the color of the volume cloud is controlled by a 24-hour weather curve. The dynamic part of the cloud controls the texture coordinates (UV) of the map sampling through a noise map, thereby expressing cloud drift. In one implementation, the x, y and z components of the directional light vector store the left-right, up-down and front-back directions respectively, so the corresponding directional illumination maps can be weighted by these components; this realizes the directional light vector processing of the initial illumination map through the shader. Specifically, the shader can regulate the illumination intensity of the initial illumination map based on the directional light vector and apply cloud backlight enhancement.
The effect of cloud motion is produced by controlling the texture coordinates UV in the shader. The dynamic effect of the volume cloud, namely the texture coordinate offset processing, is implemented by the shader in two parts: movement of the internal shadow texture and twisting of the cloud edges of the enhanced illumination map. Specifically, a noise map of the enhanced illumination map can be obtained and sampled by the shader at a preset moving texture coordinate to obtain a sampled noise value; the sampled noise value is added to the preset moving texture coordinate to obtain a target texture coordinate; and the enhanced illumination map is sampled at the target texture coordinate to obtain the target illumination map.
Step 104, rendering based on the target illumination map and the shape map, and displaying the rendered volume cloud through the target orientation patch to generate a target virtual cloud.
Rendering is performed based on the target illumination map and the shape map through a renderer or shader in the preset volume cloud conversion tool, and the rendered volume cloud is displayed through the target orientation patch to generate the target virtual cloud, where the target orientation patch is a patch used in the engine that always faces the camera.
In the method for generating a virtual cloud in a game provided by the embodiment of the invention, maps are generated from a pre-designed model and attached to a single patch in the game to produce the target virtual cloud; since the patch has only four vertices, the performance consumption is low.
In the method for generating a virtual cloud in a game provided by the embodiment of the invention, through the trimming of the volume cloud, the directional light vector processing and the texture coordinate offset processing, the illumination performance and dynamic effect of the virtual cloud are dynamically adjusted in real time, and an acceptable visual effect can be achieved at low performance cost.
Referring to fig. 2, another embodiment of a method for generating a virtual cloud in a game is shown, where the method includes the following steps:
Step 201, generating a volume cloud corresponding to the three-dimensional model based on the shape of the specified three-dimensional model; the specified three-dimensional model is a basic model corresponding to the target shape in the game.
By way of example, and not limitation, the embodiment of the present invention is described with reference to a three-dimensional model in the shape of a rabbit, and the generated volume cloud is shown in fig. 3, where (1) in fig. 3 shows the three-dimensional model, and (2) in fig. 3 shows the volume cloud.
In one implementation, the step 201 specifically includes: creating a corresponding texture based on the shape of the specified three-dimensional model to obtain a main cloud; acquiring a tracing line position based on the model shape, and generating points at random positions through the tracing line position; performing cloud-wisp generation on the points at random positions to obtain point cloud wisps; and combining the point cloud wisps and the main cloud to obtain a volume cloud.
The three-dimensional model is converted into a volume cloud based on its shape through the preset volume cloud conversion tool. For ease of understanding, by way of example and not limitation, the embodiments of the present invention are described with the volume cloud conversion tool Houdini, a three-dimensional computer graphics package developed from PRISMS. Houdini is designed entirely around a node-based workflow and differs greatly in structure and operation from other three-dimensional software; its built-in renderer is Mantra.
Through volume nodes in Houdini, a corresponding basic texture (i.e., texture map) is created based on the shape of the specified three-dimensional model to obtain the main cloud. Edge detection and edge tracing are performed on the main cloud through a preset edge detection algorithm and edge tracing algorithm to obtain a tracing line following the model shape (the specified three-dimensional model) and its position, i.e., the tracing line position. Points are then generated at random positions along the tracing line, and cloud wisps of a preset size are generated on these points to obtain point cloud wisps; the wisps of the preset size together form the basic shape corresponding to the specified three-dimensional model. As shown in fig. 4, which shows the effect of generating small cloud wisps along the edges of the three-dimensional model, (1) of fig. 4 shows the three-dimensional model, (2) of fig. 4 shows the points generated at random positions along the tracing line, and (3) of fig. 4 shows the point cloud wisps obtained by performing cloud-wisp generation on those points. The point cloud wisps and the main cloud are combined to obtain the volume cloud.
Obtaining the volume cloud by generating point cloud wisps along the tracing line and combining them with the main cloud improves the accuracy of the volume cloud and enhances its visual expression.
Step 202, trimming the volume cloud and performing map rendering to obtain an initial illumination map and a shape map.
In one implementation, the step 202 specifically includes: adjusting the volume cloud to obtain an adjusted overall volume cloud; performing multi-directional rendering on the adjusted volume cloud to obtain illumination maps, transparency maps and self-luminous maps in multiple directions; and combining the illumination maps, transparency maps and self-luminous maps in the multiple directions to obtain the initial illumination map and the shape map.
Lights are placed in a plurality of preset directions around the adjusted volume cloud, and map rendering is performed for each lit direction through the Mantra renderer in Houdini to obtain illumination maps, transparency maps and self-luminous maps in the multiple directions. These maps are combined through a preset map-merging tool to obtain the initial illumination map and the shape map, which saves resources; a noise map is also added to serve as the cloud's dynamic map. By way of example and not limitation, the map-merging tool is Substance Designer. The self-luminous map indicates light emitted by the cloud itself, rather than illumination achieved by reflection.
For example, with the preset directions being up, left, right and back, 4 lights are placed above, to the left of, to the right of and behind the adjusted overall cloud, as shown in fig. 5, which is a schematic diagram of multi-directional illumination rendering. The illumination maps, transparency maps and self-luminous maps in the 4 lit directions are rendered by the Mantra renderer in Houdini; the rendered illumination maps are shown in fig. 6. The maps from the multiple directions are combined with Substance Designer to obtain the initial illumination map and the shape map merged from the four directions.
Multi-directional rendering and merging enrich the illumination, transparency and self-luminous details of the initial illumination map and the shape map, and allow the illumination performance of the whole cloud to be dynamically adjusted in real time, so that the volume cloud meets a certain visual standard.
In one implementation manner, the step of adjusting the volume cloud to obtain the adjusted integral volume cloud includes: creating a color shade of the volume cloud, and mapping the volume cloud through the color shade to obtain the volume cloud with the adjusted shape; performing transparency regulation and control on the volume cloud with the shape adjusted to obtain the volume cloud with the transparency adjusted; and adding colors to the volume cloud with the adjusted transparency to obtain an adjusted volume cloud.
The detail of the volume cloud generated only through the volume nodes in Houdini is not rich enough, and the desired effect cannot be achieved, so that the volume cloud is adjusted, and the adjusted overall volume cloud is obtained. By way of example and not limitation, embodiments of the present invention preferably adjust the volume cloud by a preset attribtain node, i.e., draw a color mask by the attribtain node, control the transparency and amount of detail of the edges of the volume cloud, and may draw colors to be added to the volume cloud.
Specifically, a color mask of the volume cloud is created through the attribpaint node, and the volume cloud is mapped through the color mask. Because a volume generated from a three-dimensional model is rounded, while a cloud surface should be uneven and varied, the bump detail of the volume cloud is controlled through the color mask: black areas get weaker bumps and white areas stronger ones, so the intensity of the color value distinguishes the degree of unevenness in different regions. As shown in fig. 7, which shows the effect of controlling the cloud shape through the color mask, (1) of fig. 7 shows the three-dimensional model, (2) of fig. 7 shows the created color mask, whose gray central portion makes the center of the volume cloud appear gray, and (3) of fig. 7 shows the volume cloud after the color mask processing, i.e., the shape-adjusted volume cloud.
The transparency of the volume cloud is drawn through the attribpaint node to obtain corresponding color data; by way of example and not limitation, black parts are transparent, white parts opaque and gray parts semi-transparent. The color data is stored in a point attribute. Through a VOP node in Houdini, the overall transparency of the volume cloud is first regulated by a noise map inside the node, making the transparency variation more random, and then further regulated by the color data in the point attribute; regulating the transparency of the shape-adjusted volume cloud in this way yields the transparency-adjusted volume cloud. As shown in fig. 8, which shows the effect of cloud transparency control, the more transparent parts at the black edges of the volume cloud can be drawn through the attribpaint node. Adjusting the transparency of the shape-adjusted volume cloud makes an originally detail-less cloud volume look more natural; fig. 9 shows the effect of the transparency-adjusted volume cloud, where (1) of fig. 9 is the originally detail-less volume cloud and (2) of fig. 9 is the volume cloud whose edge detail amount has been controlled.
Color is added to the transparency-adjusted volume cloud through the attribpaint node to obtain the adjusted overall volume cloud; ambient occlusion (AO), shadows and/or colors can be added by hand. As shown in FIG. 10, which is an effect diagram of the colors or self-shadows that can be added through the attribpaint node.
The custom trimming operation enriches the details of the adjusted volume cloud, allows maps of virtual clouds of various shapes to be generated quickly, and allows model resources in the game to be reused to generate virtual clouds of the corresponding shapes.
Step 203, regulating the illumination intensity of the initial illumination map based on the directional light vector through a preset shader to obtain the regulated initial illumination map.
In one implementation, the step 203 specifically includes: acquiring a plurality of color channel components of the initial illumination map based on the directional light vector through a preset shader, wherein the plurality of color channel components are used for indicating an R channel component, a G channel component, a B channel component and an A channel component based on the illumination direction in a tangential space; judging whether each color channel component is larger than a preset value or not to obtain a judging result; determining target illumination maps in multiple directions through the judging result and the multiple color channel components; and performing linear interpolation processing on the target illumination maps in multiple directions to obtain the regulated initial illumination map.
Since the x, y and z components of the directional light vector store the left-right, up-down and front-back directions respectively, and xyz corresponds to the RGBA of a color (i.e., the plurality of color channel components: the R, G, B and A channel components based on the illumination direction in tangent space; the illumination schematic is shown in fig. 11), the illumination effect of the object in the left-right direction is handled as follows: when the directional light is on the left side of the object (with the object as the coordinate center), the value of x is negative, so the left-light illumination map (i.e., (2) in fig. 6) is used as the object's illumination shadow and is multiplied by the absolute value of x, so the value of x controls the intensity of the shadow; the other directions are handled likewise. Finally, the illumination maps regulated by the directional light vector in all directions are added together as the object's light and shadow.
For example, the illumination intensity from the right or left can be regulated by (lightDir.r > 0.0 ? right : left) * abs(lightDir.r). That is, the preset shader decides which illumination map to use according to the RGB channel components of the illumination direction lightDir in tangent space: when the R channel component of lightDir is greater than 0 (the preset value), the G channel component of the initial illumination map (the right-lit result) is used; when it is less than 0, the R channel component of the initial illumination map (the left-lit result) is used; and the chosen result is multiplied by abs(lightDir.r) to control the illumination intensity from the right or left, yielding the right or left illumination map;
The illumination intensity from above is regulated by (lightDir.g > 0.0 ? top : 0) * abs(lightDir.g): when the G channel component of lightDir is greater than 0 (the preset value), the A channel component of the initial illumination map (the top-lit result) is used; when it is less than 0, the fixed value 0 is used (bottom lighting uses a fixed 0 because a cloud in the sky receives sunlight only from the upper, left, right, front and back directions, 5 in total); the result is multiplied by abs(lightDir.g) to control the illumination intensity from above, yielding the upper illumination map, and the lower illumination map is obtained in the same way;
The illumination intensity from the back and front is regulated by (lightDir.b > 0.0 ? front : back) * abs(lightDir.b): when the B channel component of lightDir is greater than 0 (the preset value), saturate(0.5 * (initial illumination map R channel component + initial illumination map G channel component)) is used, i.e., the front-lit result is simulated from the left-right illumination maps; when it is less than 0, the B channel component of the initial illumination map (the back-lit result) is used; the result is multiplied by abs(lightDir.b), yielding the back and front illumination maps;
It should be noted that the illumination results of the 6 directions (i.e., the target illumination maps in the multiple directions) are finally added and the range is limited to between 0 and 1; that is, the right, left, back, front, upper and lower illumination maps are added to obtain a summed result, and the summed result is processed by the preset linear interpolation function lerp to obtain the regulated initial illumination map.
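Putting the six directional terms together, the blend described above can be sketched in HLSL roughly as follows (a minimal sketch: the function and variable names are illustrative rather than taken from the patent, and the final 0-1 range limiting is shown with saturate):

    // lightTex packs the four rendered lighting results in RGBA:
    // R = left-lit, G = right-lit, B = back-lit, A = top-lit.
    // lightDir is the directional-light vector in tangent space
    // (x = left-right, y = up-down, z = front-back).
    float BlendDirectionalLighting(float4 lightTex, float3 lightDir)
    {
        // Left-right: pick the right-lit or left-lit result by the sign of x,
        // weighted by abs(x) so the shadow intensity follows the light angle.
        float lr = (lightDir.x > 0.0 ? lightTex.g : lightTex.r) * abs(lightDir.x);

        // Up-down: top-lit result for light from above; bottom lighting is a
        // fixed 0 because sky clouds receive no sunlight from below.
        float ud = (lightDir.y > 0.0 ? lightTex.a : 0.0) * abs(lightDir.y);

        // Front-back: the front-lit result is approximated from the left and
        // right maps; the B channel stores the back-lit result.
        float front = saturate(0.5 * (lightTex.r + lightTex.g));
        float fb = (lightDir.z > 0.0 ? front : lightTex.b) * abs(lightDir.z);

        // Sum the directional contributions and limit the range to 0-1.
        return saturate(lr + ud + fb);
    }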
Determining the target illumination maps in the multiple directions through the plurality of color channel components and obtaining the regulated initial illumination map by linear interpolation allows the illumination performance of the cloud to be dynamically adjusted in real time and facilitates subsequent automated art adjustment of the details.
Step 204, performing cloud backlight enhancement processing on the regulated initial illumination map to obtain the enhanced illumination map.
In one implementation, the step 204 specifically includes: acquiring the reverse illumination vector and the camera vector of the regulated initial illumination map; calculating the dot product of the reverse illumination vector and the camera vector; and performing scattering enhancement on the regulated initial illumination map through the dot product result to obtain the enhanced illumination map.
The reverse illumination vector is the vector opposite in direction to the directional illumination vector.
Since the dot product of two vectors reflects the included angle between them, the similarity of the two vectors can be judged by that angle; when both are unit vectors, the dot product can be understood geometrically as a similarity (larger means more similar, smaller less similar). The dot product of the reverse illumination vector and the camera vector is therefore used to control the basic color and illumination color of the volume cloud in the B channel of the initial illumination map, so as to strengthen the light-transmission effect at the cloud edges under backlight: the larger the dot product, the stronger the cloud backlight effect, and a pow function sharpens the response so that the backlight effect appears earlier as the angle of the directional light changes. As shown in fig. 12, the dot product is largest when the two vectors (i.e., the reverse illumination vector -L of the regulated initial illumination map and the camera vector V) coincide, that is, when the included angle between the reverse illumination vector and the camera vector is smallest. Specifically, this can be achieved by pow(saturate(dot(V, -L)), 5).
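A minimal HLSL sketch of this backlight term, under the assumption that V and L are normalized view and directional-light vectors (the names are illustrative):

    // Rim light-transmission term for backlit cloud edges.
    float BacklightTerm(float3 V, float3 L)
    {
        // dot(V, -L) is the cosine of the angle between the view direction and
        // the reverse illumination vector: it is largest when the camera looks
        // straight toward the light through the cloud.
        float vl = saturate(dot(V, -L));

        // pow sharpens the falloff so the glow concentrates near exact
        // backlight; the exponent 5 follows the expression given above.
        return pow(vl, 5.0);
    }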
In another implementation, after the cloud backlight enhancement is applied to the regulated initial illumination map, the color of the map is adjusted through a preset time-varying color curve to obtain the enhanced illumination map. By way of example and not limitation, the preferred time-varying color curve in the embodiment of the present invention is a 24-hour color curve, and the result of color adjustment through the 24-hour color curve is shown in fig. 13. The enhanced illumination map is shown in fig. 14.
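One plausible way to realize such a time-varying color curve in a shader is to bake it into a 1D gradient texture indexed by the in-game hour; this is an assumed implementation, not specified by the patent, and every name below is illustrative:

    Texture2D _DayColorCurve;              // 1 x N gradient, u = hour / 24
    SamplerState sampler_DayColorCurve;
    float _HourOfDay;                      // current in-game time, 0..24

    float3 TintByTimeOfDay(float3 cloudColor)
    {
        float u = frac(_HourOfDay / 24.0); // wrap around midnight
        float3 tint = _DayColorCurve.Sample(sampler_DayColorCurve,
                                            float2(u, 0.5)).rgb;
        return cloudColor * tint;          // modulate the cloud color
    }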
The cloud backlight enhancement based on the reverse illumination vector and the camera vector effectively strengthens the light-transmission effect at the cloud edges of the enhanced illumination map.
Step 205, performing offset processing on the texture coordinates of the enhanced illumination map through the shader to obtain the target illumination map.
Regulating the illumination intensity of the initial illumination map, applying cloud backlight enhancement, and offsetting the texture coordinates through the shader strengthens the light-transmission effect at the cloud edges and realizes a stylized cloud that changes with the illumination angle.
In one implementation, the step 205 specifically includes: obtaining the noise map of the enhanced illumination map, and sampling the noise map through the shader at a preset moving texture coordinate to obtain a sampled noise value; adding the sampled noise value to the preset moving texture coordinate to obtain a target texture coordinate; and sampling the enhanced illumination map at the target texture coordinate to obtain the target illumination map.
The effect of cloud drift is made by offsetting the texture coordinates UV of the illumination map with the noise map. When the noise map is sampled, the texture coordinates UV are continuously incremented; since UV values range between 0 and 1, the incremented value continuously cycles from 0 to 1, so the UV moves in one direction, and the noise map is finally sampled with the moved UV (i.e., the preset moving texture coordinate).
Since adding a value to UV offsets it, adding the sampled noise value to the preset moving texture coordinate produces a UV (i.e., the target texture coordinate) that moves continuously at a randomly varying speed. Sampling the illumination map (i.e., the enhanced illumination map) and the transparency map with this UV produces the flowing effect of cloud edges and cloud shadows, simulating a convincing cloud dynamic. The performance effect of offsetting UV with the noise map is shown in fig. 15 (the offset is enlarged for ease of observation). The effect of the cloud edges continuously drifting while the shadows inside the cloud continuously flow is shown in fig. 16.
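A minimal HLSL sketch of this noise-driven UV offset (all texture, sampler and parameter names are illustrative assumptions):

    Texture2D _NoiseMap;            // cloud dynamic (noise) map
    Texture2D _LightMap;            // the enhanced illumination map
    SamplerState _LinearRepeat;
    float2 _ScrollSpeed;            // drift direction and speed
    float  _DistortStrength;        // strength of the edge/shadow distortion
    float  _Time;                   // engine-provided time in seconds

    float4 SampleDriftingCloud(float2 uv)
    {
        // Scroll the noise lookup; frac keeps the UV cycling within 0..1.
        float2 movedUV = frac(uv + _ScrollSpeed * _Time);
        float2 noise = _NoiseMap.Sample(_LinearRepeat, movedUV).rg;

        // Adding the sampled noise to the moving UV yields a continuously
        // moving coordinate with a randomly varying speed.
        float2 targetUV = movedUV + noise * _DistortStrength;

        // Sampling the illumination map with the offset UV makes the cloud
        // edges and internal shadows flow.
        return _LightMap.Sample(_LinearRepeat, targetUV);
    }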
Sampling the enhanced illumination map through the offset texture coordinates realizes a stronger cloud dynamic effect and meets a certain visual standard at low performance cost.
Step 206, rendering based on the target illumination map and the shape map, and displaying the rendered volume cloud through the target orientation patch to generate the target virtual cloud.
In one implementation, the step 206 specifically includes: acquiring a target orientation patch, where the target orientation patch indicates a patch that always faces the camera; rendering through a preset shader based on the target illumination map and the shape map; and displaying, by a preset engine, the shader-rendered volume cloud based on the target orientation patch to generate the target virtual cloud.
Rendering is performed through the preset shader based on the target illumination map and the shape map, and the shader-rendered volume cloud is displayed by the preset engine based on the target orientation patch to generate the target virtual cloud; the target orientation patch is understood as a patch in the engine that always faces the camera. A schematic of displaying the shader-rendered volume cloud through the target orientation patch is shown in fig. 17, where (1) of fig. 17 shows the target orientation patch and (2) of fig. 17 shows the shader-rendered volume cloud.
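For reference, a camera-facing quad of this kind can be expanded in a vertex shader roughly as follows (a sketch under assumed conventions; real engines usually provide an equivalent billboard facility, and all names here are illustrative):

    cbuffer PerFrame
    {
        float4x4 _ViewProj;      // combined view-projection matrix
        float3   _CameraRight;   // camera right axis in world space
        float3   _CameraUp;      // camera up axis in world space
    };

    struct VSIn  { float3 center : POSITION; float2 corner : TEXCOORD0; };
    struct VSOut { float4 pos : SV_Position; float2 uv : TEXCOORD0; };

    VSOut BillboardVS(VSIn i)
    {
        // Expand the four-vertex patch in the camera's plane so that it
        // always faces the camera, whatever the view direction.
        float3 world = i.center
                     + _CameraRight * i.corner.x
                     + _CameraUp    * i.corner.y;
        VSOut o;
        o.pos = mul(_ViewProj, float4(world, 1.0));
        o.uv  = i.corner * 0.5 + 0.5;   // map corner offsets (-1..1) to 0..1 UVs
        return o;
    }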
Because the patch always faces the camera, the camera always sees the front of the shader-rendered volume cloud, so the proper shape of the volume cloud is always displayed.
In the method for generating virtual clouds in a game provided by the embodiment of the invention, through the trimming of the volume cloud, the directional light vector processing, the illumination intensity regulation, the cloud backlight enhancement and the texture coordinate offset processing, maps of virtual clouds of various shapes can be generated quickly, low-cost and good-looking virtual clouds can be produced in batches, model resources in the game can be reused to generate virtual clouds of the corresponding shapes, and the illumination performance and dynamic effect of the whole cloud can be dynamically adjusted in real time, achieving an acceptable visual effect and convenient art modification of cloud details at low performance cost.
Corresponding to the above method embodiment, referring to fig. 18, a schematic diagram of an apparatus for generating virtual clouds in a game is shown, where the apparatus includes:
a generating module 1810 configured to generate a volume cloud corresponding to the three-dimensional model based on the shape of the specified three-dimensional model; the specified three-dimensional model is a basic model corresponding to the target shape in the game;
the trimming module 1820 is configured to trim the volume cloud and perform map rendering to obtain an initial illumination map and a shape map;
the offset module 1830 is configured to perform directional light vector processing and texture coordinate offset processing on the initial illumination map through the shader, so as to obtain a target illumination map;
the display module 1840 is configured to render based on the target illumination map and the shape map, and display the rendered volume cloud through the target towards the panel to generate a target virtual cloud.
According to the device for generating a virtual cloud in a game provided by the embodiment of the invention, through the trimming of the volume cloud, the directional light vector processing and the texture coordinate offset processing, the illumination performance and dynamic effect of the virtual cloud are dynamically adjusted in real time, and an acceptable visual effect can be achieved at low performance cost.
The offset module 1830 includes:
The intensity regulating unit 1831 is configured to regulate the illumination intensity of the initial illumination map based on the directional light vector by using a preset shader, so as to obtain a regulated initial illumination map;
the enhancement processing unit 1832 is used for carrying out cloud backlight enhancement processing on the regulated initial illumination map to obtain an enhanced illumination map;
and the offset processing unit 1833 is configured to perform offset processing on the texture coordinates of the enhanced illumination map by using a shader, so as to obtain a target illumination map.
An intensity control unit 1831 for:
acquiring a plurality of color channel components of the initial illumination map based on the directional light vector through a preset shader, wherein the plurality of color channel components are used for indicating an R channel component, a G channel component, a B channel component and an A channel component based on the illumination direction in a tangential space;
judging whether each color channel component is larger than a preset value or not to obtain a judging result;
determining target illumination maps in multiple directions through the judging result and the multiple color channel components;
and performing linear interpolation processing on the target illumination maps in multiple directions to obtain the regulated initial illumination map.
An enhancement processing unit 1832 for:
acquiring the reverse illumination vector and the camera vector of the regulated initial illumination map;
calculating the dot product of the reverse illumination vector and the camera vector;
and performing scattering enhancement on the regulated initial illumination map through the dot product result to obtain the enhanced illumination map.
An offset processing unit 1833 for:
obtaining the noise map of the enhanced illumination map, and sampling the noise map through the shader at a preset moving texture coordinate to obtain a sampled noise value;
adding the sampled noise value to the preset moving texture coordinate to obtain a target texture coordinate;
and sampling the enhanced illumination map at the target texture coordinate to obtain the target illumination map.
The trimming module 1820 includes:
the adjusting unit 1821 is configured to adjust the volume cloud to obtain an adjusted overall volume cloud;
a rendering unit 1822, configured to perform multi-directional rendering on the adjusted volume cloud to obtain a lighting map, a transparency map, and a self-luminous map in multiple directions;
the merging unit 1823 is configured to merge the illumination map, the transparency map, and the self-luminous map in multiple directions to obtain an initial illumination map and a shape map.
An adjusting unit 1821 for:
creating a color mask of the volume cloud, and mapping the volume cloud through the color mask to obtain the shape-adjusted volume cloud;
performing transparency regulation on the shape-adjusted volume cloud to obtain the transparency-adjusted volume cloud;
and adding colors to the transparency-adjusted volume cloud to obtain the adjusted volume cloud.
A generating module 1810 configured to:
creating a corresponding texture based on the shape of the specified three-dimensional model to obtain a main cloud;
acquiring a tracing line position based on the model shape, and generating points at random positions through the tracing line position;
performing cloud-wisp generation on the points at random positions to obtain point cloud wisps;
and combining the point cloud wisps and the main cloud to obtain a volume cloud.
A display module 1840 for:
acquiring a target orientation patch, wherein the target orientation patch is used for indicating a patch which always faces a camera;
rendering by a preset shader based on the target illumination map and the shape map, and displaying the volume cloud rendered by the shader by a preset engine based on the target orientation patch to generate a target virtual cloud.
This embodiment also provides an electronic device comprising a processor and a memory; the memory stores machine-executable instructions that can be executed by the processor, and the processor executes the machine-executable instructions to implement the above method for generating a virtual cloud in a game. The electronic device may be a server or a terminal device.
Referring to fig. 19, the electronic device includes a processor 1900 and a memory 1901, where the memory 1901 stores machine-executable instructions executable by the processor 1900, and the processor 1900 executes the machine-executable instructions to implement the method for generating a virtual cloud in a game described above.
The electronic device shown in fig. 19 further includes a bus 1902 and a communication interface 1903; the processor 1900, the communication interface 1903, and the memory 1901 are connected via the bus 1902.
The memory 1901 may include a high-speed Random Access Memory (RAM) and may also include non-volatile memory, for example at least one magnetic disk memory. The communication connection between the system element and at least one other element is implemented via at least one communication interface 1903 (which may be wired or wireless), which may use the Internet, a wide area network, a local area network, a metropolitan area network, and the like. The bus 1902 may be an ISA bus, a PCI bus, an EISA bus, or the like. Buses may be classified as address buses, data buses, control buses, and so on. For ease of illustration, only one bi-directional arrow is shown in fig. 19, but this does not mean there is only one bus or one type of bus.
The processor 1900 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the methods described above may be completed by integrated logic circuitry in hardware or by software instructions in the processor 1900. The processor 1900 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or executed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, or other storage media well known in the art. The storage medium is located in the memory 1901; the processor 1900 reads the information in the memory 1901 and, in combination with its hardware, performs the following steps:
generating a volume cloud corresponding to the three-dimensional model based on the shape of a specified three-dimensional model, wherein the specified three-dimensional model is a base model corresponding to the shape of an object in the game;
trimming the volume cloud and performing map rendering to obtain an initial illumination map and a shape map;
performing directional light vector processing and texture coordinate offset processing on the initial illumination map through a shader to obtain a target illumination map;
rendering based on the target illumination map and the shape map, and displaying the rendered volume cloud through the target orientation patch to generate the target virtual cloud.
Through trimming of the volume cloud, directional light vector processing, and texture coordinate offset processing, the illumination appearance and dynamic effect of the virtual cloud are adjusted dynamically in real time, and a certain level of visual quality is achieved at low performance cost.
The step of performing directional light vector processing and texture coordinate offset processing on the initial illumination map by using a shader to obtain a target illumination map includes:
adjusting the illumination intensity of the initial illumination map based on the directional light vector through a preset shader to obtain an adjusted initial illumination map;
performing cloud backlight enhancement on the adjusted initial illumination map to obtain an enhanced illumination map;
and performing offset processing on the texture coordinates of the enhanced illumination map through the shader to obtain the target illumination map.
Through the shader, the initial illumination map undergoes illumination intensity adjustment, cloud backlight enhancement, and texture coordinate offset processing, which strengthens the light-transmission effect at the cloud-layer edges and realizes a cloud whose appearance changes with the illumination angle.
The step of adjusting the illumination intensity of the initial illumination map based on the directional light vector through the preset shader to obtain the adjusted initial illumination map includes:
acquiring, through the preset shader and based on the directional light vector, a plurality of color channel components of the initial illumination map, where the color channel components indicate an R channel component, a G channel component, a B channel component, and an A channel component based on the illumination direction in tangent space;
determining whether each color channel component is greater than a preset value to obtain a determination result;
determining target illumination maps in multiple directions from the determination result and the color channel components;
and performing linear interpolation on the target illumination maps in the multiple directions to obtain the adjusted initial illumination map.
Determining target illumination maps in multiple directions from the color channel components and obtaining the adjusted initial illumination map through linear interpolation allows the illumination appearance of the cloud to be adjusted dynamically in real time and facilitates subsequent fine-tuning of details by artists.
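For illustration only, the following Python sketch shows one way such channel-based regulation might work. The channel-to-direction layout (R/G for the two horizontal directions, B/A for the two vertical ones), the zero threshold, and all names are assumptions of the sketch, not details fixed by this embodiment.

```python
import numpy as np

def regulate_illumination(illum_map, light_dir_ts):
    """Blend the four per-direction lighting channels (R, G, B, A) of the
    baked illumination map according to the tangent-space light direction.
    Assumed layout: R/G encode +x/-x lighting, B/A encode +y/-y lighting."""
    r, g, b, a = (illum_map[..., i].astype(float) for i in range(4))
    x, y = float(light_dir_ts[0]), float(light_dir_ts[1])
    horizontal = r if x > 0.0 else g       # channel facing the light on the x axis
    vertical = b if y > 0.0 else a         # channel facing the light on the y axis
    t = abs(x) / (abs(x) + abs(y) + 1e-6)  # weight by the dominant axis
    return horizontal * t + vertical * (1.0 - t)  # linear interpolation

# e.g. regulate_illumination(np.random.rand(64, 64, 4), (0.5, -0.7, 0.4))
```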
The step of performing cloud backlight enhancement on the adjusted initial illumination map to obtain the enhanced illumination map includes:
acquiring the inverse illumination vector and the camera vector of the adjusted initial illumination map;
calculating the dot product of the inverse illumination vector and the camera vector;
and performing scattering enhancement on the adjusted initial illumination map using the dot product to obtain the enhanced illumination map.
The cloud backlight enhancement based on the inverse illumination vector and the camera vector effectively strengthens the light-transmission effect at the cloud-layer edges of the enhanced illumination map.
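A minimal sketch of this step, assuming normalized direction vectors and illustrative `strength` and `power` constants (none of these values come from the embodiment):

```python
import numpy as np

def backlight_enhance(illum, light_dir, view_dir, strength=1.5, power=4.0):
    """Cloud backlight (scattering) enhancement: when the camera looks
    toward the light source, dot(-L, V) approaches 1 and the map is
    brightened to fake light transmission at the cloud edges."""
    inv_light = -np.asarray(light_dir, dtype=float)
    view = np.asarray(view_dir, dtype=float)
    inv_light /= np.linalg.norm(inv_light)
    view /= np.linalg.norm(view)
    scatter = max(float(np.dot(inv_light, view)), 0.0) ** power  # dot product of -L and V
    return illum * (1.0 + strength * scatter)
```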
The step of performing offset processing on the texture coordinates of the enhanced illumination map by using the shader to obtain the target illumination map includes:
obtaining a noise map of the enhanced illumination map, and sampling the noise map through the shader based on a preset moving texture coordinate to obtain a sampled noise map;
adding the sampled noise map to the value of the preset moving texture coordinate to obtain a target texture coordinate;
and sampling the enhanced illumination map with the target texture coordinate to obtain the target illumination map.
Sampling the enhanced illumination map with offset texture coordinates produces a stronger cloud dynamic effect while keeping performance cost low.
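The following sketch illustrates the idea on the CPU with nearest-neighbour sampling; the scroll `speed` and `distortion` scale are illustrative assumptions, the noise map is assumed single-channel, and a real shader would use bilinear texture sampling instead:

```python
import numpy as np

def sample_wrapped(tex, uv):
    """Nearest-neighbour sampling with wrap-around texture addressing."""
    h, w = tex.shape[:2]
    return tex[int(uv[1] * h) % h, int(uv[0] * w) % w]

def offset_sample(enhanced_map, noise_map, uv, time,
                  speed=(0.02, 0.01), distortion=0.05):
    """Scroll the noise texture over time, add the sampled noise to the
    moving texture coordinate, then resample the enhanced illumination
    map at the offset coordinate."""
    moving_uv = ((uv[0] + speed[0] * time) % 1.0,
                 (uv[1] + speed[1] * time) % 1.0)
    n = float(sample_wrapped(noise_map, moving_uv))      # sampled noise value
    target_uv = ((moving_uv[0] + distortion * n) % 1.0,  # target texture coordinate
                 (moving_uv[1] + distortion * n) % 1.0)
    return sample_wrapped(enhanced_map, target_uv)       # target illumination sample
```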
The step of trimming the volume cloud and performing map rendering to obtain an initial illumination map and a shape map comprises the following steps:
adjusting the volume cloud to obtain an adjusted overall volume cloud;
performing multi-directional rendering on the adjusted volume cloud to obtain illumination maps, transparency maps, and self-luminous maps in multiple directions;
and merging the illumination maps, transparency maps, and self-luminous maps in the multiple directions to obtain the initial illumination map and the shape map.
Multi-directional rendering and merging enrich the illumination, transparency, and self-luminous details of the initial illumination map and the shape map, so that the illumination appearance of the whole cloud can be adjusted dynamically in real time and the volume cloud reaches the desired visual quality.
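As a sketch of the merging step only: the embodiment does not specify a channel layout, so the packing below (one directional lighting render per color channel of the initial illumination map; transparency and self-luminance packed into the shape map) is purely an assumption for illustration.

```python
import numpy as np

def merge_maps(directional_lighting, transparency, self_luminous):
    """Pack four directional lighting renders into the R/G/B/A channels of
    the initial illumination map, and pack the transparency and
    self-luminous renders into two channels of the shape map."""
    illum_map = np.stack(directional_lighting[:4], axis=-1)  # one direction per channel
    shape_map = np.stack([transparency, self_luminous], axis=-1)
    return illum_map, shape_map
```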
The step of adjusting the volume cloud to obtain the adjusted overall volume cloud includes:
creating a color mask for the volume cloud, and mapping the volume cloud through the color mask to obtain a shape-adjusted volume cloud;
adjusting the transparency of the shape-adjusted volume cloud to obtain a transparency-adjusted volume cloud;
and adding color to the transparency-adjusted volume cloud to obtain the adjusted volume cloud.
These custom trimming operations enrich the details of the adjusted overall cloud, allow maps of virtual clouds of various shapes to be generated quickly, and let model resources in the game be reused to generate virtual clouds of corresponding shapes.
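A minimal sketch of these three adjustments applied to a density field; the `alpha_scale` and `tint` values are illustrative, since the embodiment leaves the concrete parameters to the artist:

```python
import numpy as np

def adjust_volume_cloud(density, color_mask, alpha_scale=0.8,
                        tint=(1.0, 0.97, 0.95)):
    """Shape the cloud with a color mask, then regulate transparency,
    then add color, yielding an RGBA image of the adjusted cloud."""
    shaped = density * color_mask                     # mask drives the shape
    alpha = np.clip(shaped * alpha_scale, 0.0, 1.0)   # transparency regulation
    rgb = alpha[..., None] * np.asarray(tint, float)  # add color
    return np.concatenate([rgb, alpha[..., None]], axis=-1)
```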
The step of generating a volume cloud corresponding to the three-dimensional model based on the shape of the specified three-dimensional model includes:
creating a corresponding texture based on the shape of the specified three-dimensional model to obtain a subject cloud;
acquiring trace line positions based on the model shape, and generating points at random positions from the trace line positions;
performing wisp generation processing on the points at the random positions to obtain point cloud wisps;
and merging the point cloud wisps with the subject cloud to obtain the volume cloud.
Generating point cloud wisps from the trace lines and merging them with the subject cloud improves the accuracy of the volume cloud and enhances its visual appearance.
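For illustration, one plausible reading of this step is sampling random points along the model's trace lines and treating each point as the centre of a small wisp; `per_line`, `jitter`, and the (start, end) line representation are assumptions of the sketch:

```python
import numpy as np

def scatter_wisp_centres(trace_lines, per_line=32, jitter=0.15, seed=0):
    """Sample random positions along each trace line of the model shape
    and jitter them; each returned point is the centre of a cloud wisp
    to be merged with the subject cloud."""
    rng = np.random.default_rng(seed)
    points = []
    for start, end in trace_lines:     # each line as a (start, end) pair
        t = rng.random((per_line, 1))  # random parameters along the line
        pos = np.asarray(start, float) + t * (np.asarray(end, float) -
                                              np.asarray(start, float))
        pos += rng.normal(scale=jitter, size=pos.shape)  # random offset
        points.append(pos)
    return np.concatenate(points, axis=0)

# e.g. scatter_wisp_centres([((0, 0, 0), (1, 0, 0)), ((0, 0, 0), (0, 1, 0))])
```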
The step of rendering based on the target illumination map and the shape map, and displaying the rendered volume cloud through the target orientation patch to generate the target virtual cloud, includes:
acquiring a target orientation patch, where the target orientation patch indicates a patch that always faces the camera;
and rendering through a preset shader based on the target illumination map and the shape map, and displaying, through a preset engine and based on the target orientation patch, the volume cloud rendered by the shader to generate the target virtual cloud.
Because the patch always faces the camera, the front face of the shader-rendered volume cloud is always presented to the camera, so the cloud keeps its proper shape from any viewpoint.
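The camera-facing patch is essentially a billboard; the following sketch shows a standard look-at rotation for such a patch. This is a generic construction, not code from the embodiment, and it assumes the view direction is never parallel to the up vector:

```python
import numpy as np

def billboard_rotation(patch_pos, camera_pos, up=(0.0, 1.0, 0.0)):
    """Build a rotation whose forward axis points from the patch to the
    camera, so the front face of the rendered cloud is always shown."""
    fwd = np.asarray(camera_pos, float) - np.asarray(patch_pos, float)
    fwd /= np.linalg.norm(fwd)
    right = np.cross(np.asarray(up, float), fwd)
    right /= np.linalg.norm(right)
    true_up = np.cross(fwd, right)
    return np.column_stack([right, true_up, fwd])  # 3x3 rotation matrix
```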
The present embodiment also provides a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the following steps of the method for generating a virtual cloud in a game described above:
generating a volume cloud corresponding to the three-dimensional model based on the shape of a specified three-dimensional model, wherein the specified three-dimensional model is a base model corresponding to the shape of an object in the game;
trimming the volume cloud and performing map rendering to obtain an initial illumination map and a shape map;
performing directional light vector processing and texture coordinate offset processing on the initial illumination map through a shader to obtain a target illumination map;
rendering based on the target illumination map and the shape map, and displaying the rendered volume cloud through the target orientation patch to generate the target virtual cloud.
Through trimming of the volume cloud, directional light vector processing, and texture coordinate offset processing, the illumination appearance and dynamic effect of the virtual cloud are adjusted dynamically in real time, and a certain level of visual quality is achieved at low performance cost.
The step of performing directional light vector processing and texture coordinate offset processing on the initial illumination map by using a shader to obtain a target illumination map includes:
adjusting the illumination intensity of the initial illumination map based on the directional light vector through a preset shader to obtain an adjusted initial illumination map;
performing cloud backlight enhancement on the adjusted initial illumination map to obtain an enhanced illumination map;
and performing offset processing on the texture coordinates of the enhanced illumination map through the shader to obtain the target illumination map.
Through the shader, the initial illumination map undergoes illumination intensity adjustment, cloud backlight enhancement, and texture coordinate offset processing, which strengthens the light-transmission effect at the cloud-layer edges and realizes a cloud whose appearance changes with the illumination angle.
The step of adjusting the illumination intensity of the initial illumination map based on the directional light vector through the preset shader to obtain the adjusted initial illumination map includes:
acquiring, through the preset shader and based on the directional light vector, a plurality of color channel components of the initial illumination map, where the color channel components indicate an R channel component, a G channel component, a B channel component, and an A channel component based on the illumination direction in tangent space;
determining whether each color channel component is greater than a preset value to obtain a determination result;
determining target illumination maps in multiple directions from the determination result and the color channel components;
and performing linear interpolation on the target illumination maps in the multiple directions to obtain the adjusted initial illumination map.
Determining target illumination maps in multiple directions from the color channel components and obtaining the adjusted initial illumination map through linear interpolation allows the illumination appearance of the cloud to be adjusted dynamically in real time and facilitates subsequent fine-tuning of details by artists.
The step of performing cloud backlight enhancement on the adjusted initial illumination map to obtain the enhanced illumination map includes:
acquiring the inverse illumination vector and the camera vector of the adjusted initial illumination map;
calculating the dot product of the inverse illumination vector and the camera vector;
and performing scattering enhancement on the adjusted initial illumination map using the dot product to obtain the enhanced illumination map.
The cloud backlight enhancement based on the inverse illumination vector and the camera vector effectively strengthens the light-transmission effect at the cloud-layer edges of the enhanced illumination map.
The step of performing offset processing on the texture coordinates of the enhanced illumination map by using the shader to obtain the target illumination map includes:
obtaining a noise map of the enhanced illumination map, and sampling the noise map through the shader based on a preset moving texture coordinate to obtain a sampled noise map;
adding the sampled noise map to the value of the preset moving texture coordinate to obtain a target texture coordinate;
and sampling the enhanced illumination map with the target texture coordinate to obtain the target illumination map.
Sampling the enhanced illumination map with offset texture coordinates produces a stronger cloud dynamic effect while keeping performance cost low.
The step of trimming the volume cloud and performing map rendering to obtain an initial illumination map and a shape map comprises the following steps:
adjusting the volume cloud to obtain an adjusted overall volume cloud;
performing multi-directional rendering on the adjusted volume cloud to obtain illumination maps, transparency maps, and self-luminous maps in multiple directions;
and merging the illumination maps, transparency maps, and self-luminous maps in the multiple directions to obtain the initial illumination map and the shape map.
Multi-directional rendering and merging enrich the illumination, transparency, and self-luminous details of the initial illumination map and the shape map, so that the illumination appearance of the whole cloud can be adjusted dynamically in real time and the volume cloud reaches the desired visual quality.
The step of adjusting the volume cloud to obtain the adjusted overall volume cloud includes:
creating a color mask for the volume cloud, and mapping the volume cloud through the color mask to obtain a shape-adjusted volume cloud;
adjusting the transparency of the shape-adjusted volume cloud to obtain a transparency-adjusted volume cloud;
and adding color to the transparency-adjusted volume cloud to obtain the adjusted volume cloud.
These custom trimming operations enrich the details of the adjusted overall cloud, allow maps of virtual clouds of various shapes to be generated quickly, and let model resources in the game be reused to generate virtual clouds of corresponding shapes.
The step of generating a volume cloud corresponding to the three-dimensional model based on the shape of the specified three-dimensional model includes:
creating a corresponding texture based on the shape of the specified three-dimensional model to obtain a subject cloud;
acquiring trace line positions based on the model shape, and generating points at random positions from the trace line positions;
performing wisp generation processing on the points at the random positions to obtain point cloud wisps;
and merging the point cloud wisps with the subject cloud to obtain the volume cloud.
Generating point cloud wisps from the trace lines and merging them with the subject cloud improves the accuracy of the volume cloud and enhances its visual appearance.
The step of rendering based on the target illumination map and the shape map, and displaying the rendered volume cloud through the target orientation patch to generate the target virtual cloud, includes:
acquiring a target orientation patch, where the target orientation patch indicates a patch that always faces the camera;
and rendering through a preset shader based on the target illumination map and the shape map, and displaying, through a preset engine and based on the target orientation patch, the volume cloud rendered by the shader to generate the target virtual cloud.
Because the patch always faces the camera, the front face of the shader-rendered volume cloud is always presented to the camera, so the cloud keeps its proper shape from any viewpoint.
The computer program product of the method, apparatus, device, and storage medium for generating a virtual cloud in a game provided by the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the method described in the foregoing method embodiments, and specific implementations may be found in the method embodiments, which are not repeated here.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In addition, in the description of the embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected to," and "connected" are to be construed broadly; for example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or a communication connection between two elements. The specific meanings of the above terms in the present invention will be understood by those skilled in the art according to the specific circumstances.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above examples are only specific embodiments of the present invention, intended to illustrate the technical solution of the present invention rather than to limit its scope. Although the present invention has been described in detail with reference to the foregoing examples, it will be understood by those skilled in the art that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, easily conceive of changes to them, or make equivalent substitutions of some of their technical features within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention is subject to the protection scope of the claims.

Claims (12)

1. A method for generating virtual clouds in a game, the method comprising:
generating a volume cloud corresponding to a specified three-dimensional model based on the shape of the specified three-dimensional model, wherein the specified three-dimensional model is a base model corresponding to a target shape in the game;
trimming the volume cloud, and performing map rendering to obtain an initial illumination map and a shape map;
performing directional light vector processing and texture coordinate offset processing on the initial illumination map through a shader to obtain a target illumination map;
rendering based on the target illumination map and the shape map, and displaying the rendered volume cloud through a target orientation patch to generate a target virtual cloud.
2. The method of claim 1, wherein the step of performing directional light vector processing and texture coordinate shift processing on the initial illumination map by a shader to obtain a target illumination map comprises:
adjusting the illumination intensity of the initial illumination map based on the directional light vector through a preset shader to obtain an adjusted initial illumination map;
performing cloud backlight enhancement on the adjusted initial illumination map to obtain an enhanced illumination map;
and performing offset processing on the texture coordinates of the enhanced illumination map through the shader to obtain the target illumination map.
3. The method according to claim 2, wherein the step of adjusting the illumination intensity of the initial illumination map based on the directional light vector through the preset shader to obtain the adjusted initial illumination map comprises:
acquiring, through the preset shader and based on the directional light vector, a plurality of color channel components of the initial illumination map, wherein the color channel components indicate an R channel component, a G channel component, a B channel component, and an A channel component based on the illumination direction in tangent space;
determining whether each color channel component is greater than a preset value to obtain a determination result;
determining target illumination maps in multiple directions from the determination result and the color channel components;
and performing linear interpolation on the target illumination maps in the multiple directions to obtain the adjusted initial illumination map.
4. The method of claim 2, wherein the step of performing cloud backlight enhancement processing on the adjusted initial illumination map to obtain an enhanced illumination map comprises:
acquiring the inverse illumination vector and the camera vector of the adjusted initial illumination map;
calculating the dot product of the inverse illumination vector and the camera vector;
and performing scattering enhancement on the adjusted initial illumination map using the dot product to obtain the enhanced illumination map.
5. The method of claim 2, wherein the step of performing, by the shader, an offset process on texture coordinates of the enhanced illumination map to obtain a target illumination map includes:
acquiring a noise map of the enhanced illumination map, and sampling the noise map by the shader based on a preset moving texture coordinate to obtain a sampled noise map;
adding the sampled noise map to the value of the preset moving texture coordinate to obtain a target texture coordinate;
and sampling the enhanced illumination map through the target texture coordinates to obtain a target illumination map.
6. The method of claim 1, wherein the step of trimming the volume cloud and performing map rendering to obtain an initial illumination map and a shape map comprises:
adjusting the volume cloud to obtain an adjusted overall volume cloud;
performing multidirectional rendering on the adjusted volume cloud to obtain illumination maps, transparency maps and self-luminous maps in multiple directions;
and combining the illumination maps, the transparency maps and the self-luminous maps in multiple directions to obtain an initial illumination map and a shape map.
7. The method of claim 6, wherein the step of adjusting the volume cloud to obtain an adjusted overall volume cloud comprises:
creating a color mask for the volume cloud, and mapping the volume cloud through the color mask to obtain a shape-adjusted volume cloud;
adjusting the transparency of the shape-adjusted volume cloud to obtain a transparency-adjusted volume cloud;
and adding color to the transparency-adjusted volume cloud to obtain the adjusted volume cloud.
8. The method of any of claims 1-7, wherein the step of generating a volume cloud corresponding to the three-dimensional model based on the shape of the specified three-dimensional model comprises:
creating a corresponding texture based on the shape of the specified three-dimensional model to obtain a subject cloud;
acquiring trace line positions based on the model shape, and generating points at random positions from the trace line positions;
performing wisp generation processing on the points at the random positions to obtain point cloud wisps;
and merging the point cloud wisps with the subject cloud to obtain a volume cloud.
9. The method of claim 1, wherein the step of rendering based on the target illumination map and the shape map and displaying the rendered volume cloud through a target orientation patch to generate a target virtual cloud comprises:
acquiring a target orientation patch, wherein the target orientation patch is used for indicating a patch which always faces a camera;
rendering by a preset shader based on the target illumination map and the shape map, and displaying the volume cloud rendered by the shader by a preset engine based on the target orientation patch to generate a target virtual cloud.
10. An apparatus for generating a virtual cloud in a game, the apparatus comprising:
a generating module configured to generate a volume cloud corresponding to a specified three-dimensional model based on the shape of the model, wherein the specified three-dimensional model is a base model corresponding to a target shape in the game;
a trimming module configured to trim the volume cloud and perform map rendering to obtain an initial illumination map and a shape map;
an offset module configured to perform directional light vector processing and texture coordinate offset processing on the initial illumination map through a shader to obtain a target illumination map;
and a display module configured to render based on the target illumination map and the shape map, and display the rendered volume cloud through a target orientation patch to generate a target virtual cloud.
11. An electronic device comprising a processor and a memory, the memory storing machine-executable instructions executable by the processor, the processor executing the machine-executable instructions to implement the method of generating virtual clouds in a game of any one of claims 1-9.
12. A computer-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the method of generating a virtual cloud in a game of any one of claims 1-9.