CN117599420A - Method and device for generating volume fog, electronic equipment and storage medium

Method and device for generating volume fog, electronic equipment and storage medium

Info

Publication number
CN117599420A
CN117599420A
Authority
CN
China
Prior art keywords
target
model
volume
preset
fog
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311658939.2A
Other languages
Chinese (zh)
Inventor
田润
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202311658939.2A priority Critical patent/CN117599420A/en
Publication of CN117599420A publication Critical patent/CN117599420A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/60 3D [Three Dimensional] animation of natural phenomena, e.g. rain, snow, water or plants
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F 2300/663 Methods for processing data by generating or executing the game program for rendering three dimensional images for simulating liquid objects, e.g. water, gas, fog, snow, clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2215/00 Indexing scheme for image rendering
    • G06T 2215/16 Using real world measurements to influence rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a method, a device, an electronic device and a storage medium for generating volume fog. The method comprises: determining a target game scene in which a target volume fog model is to be generated; determining, based on the target game scene, a volume space corresponding to the target volume fog model to be generated; generating a plurality of preset primitive models in the volume space, and rendering materials of the preset primitive models based on their positions in the target game scene; and generating the target volume fog model based on the material-rendered primitive models. Because the generated target volume fog model is composed of individual preset primitive models, regional interaction with the volume fog can be realized; and because the overall target volume fog model is formed by generating a plurality of preset primitive models in a determined volume space, the method is applicable to various game scenes and fits their terrain closely.

Description

Method and device for generating volume fog, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of volumetric fog generation technologies, and in particular, to a volumetric fog generation method, a volumetric fog generation device, an electronic device, and a storage medium.
Background
This section is intended to provide a background or context for embodiments of the present application that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
In the field of computer multimedia, volume fog is generally implemented in games using volumetric material technology. A volumetric material can simulate volumetric effects such as smoke and clouds, which can be achieved by inserting vaporous elements into three-dimensional space. Volumetric materials in games are typically implemented with shaders, and the interaction of light with the fog elements can be calculated using volume rendering techniques from computer graphics to simulate transparency and scattering effects. However, in the related art, volume fog is generally produced directly through a volumetric material attached to a model, and only whole-model interaction is possible, for example directly destroying or hiding the model carrying the volume fog material.
Disclosure of Invention
In view of the foregoing, it is an object of the present application to provide a method, an apparatus, an electronic device and a storage medium for generating volume fog, which solve or partially solve the above-mentioned problems in the background art.
Based on the above object, the present application provides a method for generating volume fog, comprising:
determining a target game scene of a target volume fog model to be generated;
determining a volume space corresponding to the target volume fog model to be generated based on the target game scene;
generating a plurality of preset primitive models in the volume space, and rendering materials of the preset primitive models based on the positions of the preset primitive models in a target game scene;
and generating the target volume fog model based on the primitive model after the material rendering.
Based on the same inventive concept, exemplary embodiments of the present application also provide a device for generating volume fog, including:
the first determining module is used for determining a target game scene of a target volume fog model to be generated;
the second determining module is used for determining a volume space corresponding to the to-be-generated target volume fog model based on the target game scene;
the rendering module is used for generating a plurality of preset primitive models in the volume space and rendering materials of the preset primitive models based on the positions of the preset primitive models in a target game scene;
and the generating module is used for generating the target volume fog model based on the primitive model after the rendering of the materials.
Based on the same inventive concept, the exemplary embodiments of the present application also provide an electronic device including a memory, a processor, and a computer program stored on the memory and executable by the processor, the processor implementing the method of generating a volumetric fog as described above when executing the program.
Based on the same inventive concept, the present exemplary embodiments also provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of generating a volumetric fog as described above.
Based on the same inventive concept, the exemplary embodiments of the present application also provide a computer program product comprising a computer program that is executed by one or more processors to cause the processors to perform the method of generating volume fog as described above.
From the above, it can be seen that the method, device, electronic device and storage medium for generating volume fog provided by the present application determine a target game scene in which a target volume fog model is to be generated; determine, based on the target game scene, a volume space corresponding to the target volume fog model to be generated; generate a plurality of preset primitive models in the volume space and render their materials based on their positions in the target game scene; and generate the target volume fog model based on the material-rendered primitive models. Because the generated target volume fog model is composed of individual preset primitive models, regional interaction with the volume fog can be realized; and because the overall target volume fog model is formed by generating a plurality of preset primitive models in a determined volume space, the method is applicable to various game scenes and fits their terrain closely.
Drawings
In order to more clearly illustrate the technical solutions of the present application or the related art, the drawings required in the description of the embodiments or the related art are briefly described below. It is apparent that the drawings in the following description are only embodiments of the present application, and other drawings may be obtained from these drawings by those of ordinary skill in the art without inventive effort.
Fig. 1 is a schematic diagram of an application scenario in an embodiment of the present application;
FIG. 2 is a flow chart of a method of generating a volumetric fog in accordance with an embodiment of the present application;
FIG. 3 is a schematic illustration of slicing an initial three-dimensional volumetric fog model according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating the effect of a sequential frame mapping according to an embodiment of the present application;
FIG. 5 is a schematic diagram of determining a volume in a game scene according to an embodiment of the present application;
FIG. 6 is a schematic diagram of the effect of a generated target volume fog model according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an interaction time curve according to an embodiment of the present application;
FIG. 8 is a schematic diagram of another interaction time curve according to an embodiment of the present application;
fig. 9 is a schematic structural view of a generating device of a volumetric fog according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a specific electronic device according to an embodiment of the present application.
Detailed Description
The principles and spirit of the present application will be described below with reference to several exemplary embodiments. It should be understood that these embodiments are presented merely to enable one skilled in the art to better understand and practice the present application and are not intended to limit the scope of the present application in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It should be noted that, unless otherwise defined, technical or scientific terms used in the embodiments of the present disclosure shall have the ordinary meaning understood by those of ordinary skill in the art to which the present disclosure pertains. The terms "first," "second," and the like used in the embodiments of the present disclosure do not denote any order, quantity, or importance, but are merely used to distinguish one element from another. The word "comprising," "comprises," or the like means that the element or item preceding the word encompasses the elements or items listed after the word and their equivalents, without excluding other elements or items. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper," "lower," "left," "right," and the like indicate only relative positional relationships, which may change when the absolute position of the described object changes. Furthermore, in the description of the present application, unless otherwise indicated, the term "plurality" means two or more. The term "and/or" describes an association between associated objects and indicates that three relationships may exist: for example, A and/or B may mean that A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates an "or" relationship between the objects before and after it.
According to an embodiment of the application, a method, a device, an electronic device and a storage medium for generating volume fog are provided.
In this document, it should be understood that any number of elements in the drawings is for illustration and not limitation, and that any naming is used only for distinction and not for any limitation.
The principles and spirit of the present application are explained in detail below with reference to several representative embodiments thereof.
Summary of the Invention
In the related art, volume fog is generally produced directly through a volumetric material attached to a model, and only whole-model interaction is possible, for example directly destroying or hiding the model carrying the volume fog material. In addition, if volume fog fitting the terrain of a game scene is to be generated, the volumetric material and the terrain-matching model must be made in advance and cannot be generated in a generalized way; if the volume fog effect needs to be reused over a large area, a great deal of manpower is consumed making volume fog materials and terrain models. Moreover, volume fog in the related art interacts only as a whole and does not support local, penetrating interaction: for example, when a ball model is thrown into the fog, only the surrounding fog in contact with the ball should react, but the related art cannot realize such regional, detailed interaction.
In order to solve the above problems, the present application provides a method for generating volume fog, specifically including:
determining a target game scene in which a target volume fog model is to be generated; determining, based on the target game scene, a volume space corresponding to the target volume fog model to be generated; generating a plurality of preset primitive models in the volume space, and rendering materials of the preset primitive models based on their positions in the target game scene; and generating the target volume fog model based on the material-rendered primitive models. Because the generated target volume fog model is composed of individual preset primitive models, regional interaction with the volume fog can be realized; and because the overall target volume fog model is formed by generating a plurality of preset primitive models in a determined volume space, the method is applicable to various game scenes and fits their terrain closely.
Having described the basic principles of the present application, various non-limiting embodiments of the present application are specifically described below.
Application scene overview
In some specific application scenarios, the volumetric fog generation method of the present application may be applied to various systems involving volumetric fog generation. Alternatively, the system may be a game or animation system. As an example, referring to fig. 1, the application scenario includes at least one server 102 and at least one terminal 101. Terminal devices include, but are not limited to, desktop computers, mobile phones, mobile computers, tablet computers, media players, smart wearable devices, televisions, personal digital assistants (PDAs), and other electronic devices capable of performing the functions described above. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (content delivery networks), big data, and artificial intelligence platforms. The server and the terminal may communicate through a network to realize data transmission. The network may be a wired network or a wireless network, which is not specifically limited in this application.
The server may be a server providing various services. In particular, the server may be configured to provide background services for applications running on the terminal. Alternatively, in some implementations, the method for generating the volumetric fog provided by the embodiments of the present application may be performed by a terminal device or a server. The server may be hardware or software. When the server is hardware, the server may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules (e.g., software or software modules for providing distributed services), or as a single software or software module. The embodiment of the present application is not particularly limited thereto.
Alternatively, the wireless network or wired network uses standard communication techniques and/or protocols. The network is typically the internet, but can be any network including, but not limited to, a local area network (local area network, LAN), metropolitan area network (metropolitan area network, MAN), wide area network (wide area network, WAN), mobile, wired or wireless network, private network, or any combination of virtual private networks. In some embodiments, data exchanged over the network is represented using techniques and/or formats including hypertext markup language (HTML), extensible markup language (extensible markup language, XML), and the like. In addition, all or some of the links can be encrypted using conventional encryption techniques such as secure socket layer (secure socket layer, SSL), transport layer security (transport layer security, TLS), virtual private network (virtual private network, VPN), internet protocol security (internet protocol security, IPsec), and the like. In other embodiments, custom and/or dedicated data communication techniques can also be used in place of or in addition to the data communication techniques described above.
The method for generating the volumetric fog according to the exemplary embodiments of the present application will be described below in connection with a specific application scenario. It should be noted that the above application scenario is only shown for the convenience of understanding the spirit and principles of the present application, and embodiments of the present application are not limited in any way in this respect. Rather, embodiments of the present application may be applied to any scenario where applicable.
It should be understood that, although the steps in the flowchart of fig. 2 are shown in sequence as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times; these sub-steps or stages need not be performed sequentially, and may be performed in turn or alternately with other steps or with the sub-steps or stages of other steps.
Exemplary method
Referring to fig. 2, an embodiment of the present application provides a method for generating a volume fog, and an execution subject of the method for generating a volume fog may be, but is not limited to, a server or a terminal device. The method comprises the following steps:
s101, determining a target game scene of a target volume fog model to be generated.
In specific implementation, the target game scene in which the target volume fog model is to be generated can be determined as required. Alternatively, the target game scene can be determined directly from user input, or the user can select it from a plurality of preset game scenes, in which case it is determined according to the user's selection instruction. The target game scene may be any game scene, for example a maze scene inside a game or a specific scene of a certain area in the game, which is not limited here.
S102, determining a volume space corresponding to the target volume fog model to be generated based on the target game scene.
In implementation, after the target game scene is determined, the volume space corresponding to the target volume fog model to be generated can be determined from it, specifically according to the shape of the target game scene. For example, where the target game scene is a room in a game, the corresponding volume space may be the space of the entire room. The volume space determines the shape and size of the finally generated target volume fog model, so that the model fits the terrain of the target game scene, fills the whole scene, and does not clip through it.
In order to accurately determine the volume space, in some embodiments, determining, based on the target game scene, the volume space corresponding to the target volumetric fog model to be generated specifically includes:
determining a target datum point in the target game scene;
transmitting a plurality of target rays around the target game scene with the target datum point as a center;
obtaining the target positions of collision of the target rays and other virtual models in the target game scene;
and determining the volume space based on the target positions.
In implementation, the extent of the volume space can be determined in the target game scene through ray detection: when an emitted ray collides with another virtual model in the target game scene, the collision point marks the maximum distance the target volume fog model can extend in that direction, ensuring that the target volume fog model does not penetrate other virtual models. The other virtual models are the models in the target game scene other than the target volume fog model, for example walls, roofs, or trees. When determining the volume space through ray detection, a target datum point is first determined in the target game scene; it can be chosen as needed, for example as the center position of the target game scene, or at random, which is not limited. Then, with the target datum point as the center, a plurality of target rays are emitted in all directions within the target game scene, and the target positions where each target ray collides with other virtual models are obtained; these target positions are the positions of the boundaries of the volume space. Finally, the volume space can be determined from the target positions, for example by directly connecting them. The number of target rays can be set as needed; generally, the more rays, the higher the accuracy of the resulting volume space. Fig. 5 is a schematic diagram of determining the volume space corresponding to the target volume fog model to be generated: the target game scene in fig. 5 is an annular maze, and the generated volume space corresponds to a section of the maze's terrain.
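By way of illustration only (this sketch is not part of the patent; the `raycast` callback stands in for whatever collision query the game engine provides, and sampling a horizontal ring of directions is a simplifying assumption), the ray-detection step might look as follows:

```python
import math
from typing import Callable, List, Tuple

Vec3 = Tuple[float, float, float]

def determine_volume_space(
    datum_point: Vec3,
    raycast: Callable[[Vec3, Vec3, float], Vec3],  # (origin, direction, max distance) -> first hit point
    num_rays: int = 64,
    max_distance: float = 1000.0,
) -> List[Vec3]:
    """Emit target rays all around the target datum point and record where
    each ray first collides with another virtual model; the hit positions
    form the boundary of the volume space. More rays, higher accuracy."""
    boundary: List[Vec3] = []
    for i in range(num_rays):
        angle = 2.0 * math.pi * i / num_rays
        direction = (math.cos(angle), 0.0, math.sin(angle))
        boundary.append(raycast(datum_point, direction, max_distance))
    return boundary
```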
S103, generating a plurality of preset primitive models in the volume space, and rendering materials of the preset primitive models based on the positions of the preset primitive models in a target game scene.
In implementation, after the volume space is determined, a plurality of preset primitive models can be generated in it; that is, the final target volume fog model is produced by filling the space with preset primitive models. Once the preset primitive model is prefabricated, any target game scene can be filled with it, achieving the goal of generating terrain-fitting target volume fog models in a generalized way. The preset primitive model is the smallest unit model from which the target volume fog model is built; its specific form can be set as needed, for example a sphere model, a cube model, or a model of another shape. After the preset primitive models are generated, they can be material-rendered according to their positions in the target game scene. Alternatively, the same material may be rendered at all positions, for example rendering all preset primitive models white for some cloud-like volume fogs. In some embodiments, different materials may be assigned according to the different positions of the preset primitive models, so that the final target volume fog model shows a gradient or multi-color effect.
In some embodiments, generating a plurality of preset primitive models in the volume space specifically includes:
randomly generating a plurality of preset primitive models in the volume space;
or sequentially generating a plurality of preset primitive models according to a preset sequence, and overlapping the edge areas of two adjacent preset primitive models.
In implementation, the plurality of preset primitive models may be generated in the volume space randomly, or sequentially in a preset order; the specific generation process is not limited. Alternatively, to determine how many preset primitive models are needed, the volume of the volume space and the volume of a single preset primitive model may be determined first, and the number obtained as the ratio of the volume of the volume space to the volume of a single primitive. Once the number is determined, the preset primitive models can be generated in the volume space.
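A minimal sketch of this counting-and-filling step, assuming primitives laid out sequentially on a regular grid whose step is slightly smaller than the primitive size so that adjacent primitives overlap (the grid layout and the 10% overlap are illustrative choices, not specified by the patent):

```python
import math
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def primitive_count(space_volume: float, primitive_volume: float) -> int:
    """Number of preset primitive models: the ratio of the volume of the
    volume space to the volume of a single primitive, rounded up."""
    return math.ceil(space_volume / primitive_volume)

def generate_primitives(box_min: Vec3, box_max: Vec3,
                        primitive_size: float, overlap: float = 0.1) -> List[Vec3]:
    """Sequentially generate primitive centers on a grid; the step is
    slightly smaller than the primitive size, so the edge areas of two
    adjacent preset primitive models overlap and the fog shows no seams."""
    step = primitive_size * (1.0 - overlap)
    positions: List[Vec3] = []
    x = box_min[0]
    while x <= box_max[0]:
        y = box_min[1]
        while y <= box_max[1]:
            z = box_min[2]
            while z <= box_max[2]:
                positions.append((x, y, z))
                z += step
            y += step
        x += step
    return positions
```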
In some embodiments, rendering materials of the preset primitive model based on the position of the preset primitive model in the target game scene specifically includes:
Generating an initial three-dimensional volume fog model with a preset form based on the target game scene;
slicing the initial three-dimensional volume fog model along a preset direction to obtain a sequence frame map; wherein the sequence frame map comprises a plurality of texture maps arranged in sequence;
converting the sequence frame map into a volume map; wherein the volume map comprises spatial coordinates of each of the texture maps;
and rendering materials of the preset primitive model based on the position of the preset primitive model in the target game scene and the volume map.
In implementation, to ensure that the overall material effect of the finally generated target volume fog model meets expectations, i.e. matches the target game scene, an initial three-dimensional volume fog model with a preset form can be generated based on the target game scene. The preset form can be set as needed, for example a cube, a sphere, or another shape matched to the game scene. The shape and size of the initial three-dimensional volume fog model need not fit the target game scene directly; the model only provides a material reference for the target volume fog model, so that the material of the generated target volume fog model is similar or identical to that of the initial model. Alternatively, the initial three-dimensional volume fog model may be generated with a rendering engine from the related art, e.g. 3ds Max, Houdini, or the like. Its specific material may be determined according to the target game scene: for example, if the current target game scene requires volume fog released by a smoke grenade, the material of the corresponding initial three-dimensional volume fog model is the material of the smoke-release effect.
After the initial three-dimensional volume fog model is generated, it is sliced along a preset direction to obtain a sequence frame map comprising a plurality of texture maps arranged in order. The number of texture maps is determined by the number of slicing cuts; typically the number of cuts plus 1 equals the number of texture maps. Fig. 3 is a schematic diagram of slicing the initial three-dimensional volume fog model from top to bottom, and fig. 4 shows a sequence frame map obtained from such a cut, in which the texture maps are arranged in order. After the sequence frame map is obtained, it is converted into a volume map containing the spatial coordinates of each texture map; the material of each primitive model is then determined from the volume map according to the position of the preset primitive model in the target game scene, completing the material rendering of the preset primitive models.
An ordinary texture map is planar and generally carries only two-dimensional coordinates. In this embodiment, three-dimensional spatial coordinates are obtained by slicing the three-dimensional volume fog model and recording the arrangement order of the slices, which assigns the texture map corresponding to each slice a coordinate in the third direction.
To accurately convert a sequence frame map to a volume map, in some embodiments, converting the sequence frame map to a volume map specifically includes:
obtaining the target size of the initial three-dimensional volume fog model along the preset direction and the target number of the texture maps in the sequence frame maps;
determining component coordinates of the texture map in the preset direction based on the target size, the target number and the arrangement sequence of the texture map;
determining two-dimensional coordinates of each pixel point in the texture map;
determining spatial coordinates of the texture map based on the two-dimensional coordinates and the component coordinates;
the sequence of frame maps is converted to a volume map based on the spatial coordinates of the texture map.
In practice, the pixel points of a texture map normally carry only two-dimensional coordinates, for example (x, y). Therefore, when converting the sequence frame map into a volume map, the coordinate of each texture map in the third direction must be determined first; that is, a third-direction coordinate is added on top of the two-dimensional coordinates to obtain three-dimensional coordinates, converting (x, y) into (x, y, z). To determine each texture map's coordinate along the third direction (the z axis), the target size of the initial three-dimensional volume fog model along the preset direction is determined first, the preset direction being the third direction. For example, if an initial three-dimensional volume fog model is a cube, it has length, width and height directions in space; if the slicing is performed along its height direction, the target size is the size corresponding to the cube's height. After the target size along the preset direction is determined, the component coordinate (the third-direction coordinate) of each texture map in the preset direction can be determined from the target number of texture maps in the sequence frame map, the target size, and the arrangement order of the texture maps. Alternatively, the component coordinate in the preset direction may be determined by the following formula:
H(k) = (k - 1) · X / (a - 1)

wherein H(k) represents the component coordinate of the k-th texture map in the preset direction, X represents the target size, and a represents the target number of texture maps in the sequence frame map, i.e. the number of slices plus 1; the texture maps are thus evenly spaced across the target size, with the first at 0 and the last at X.
After the component coordinate of a texture map in the preset direction is determined, it is appended to the two-dimensional coordinates of each pixel point in the texture map, giving the spatial (three-dimensional) coordinates of the texture map. It should be noted that all pixel points on the same texture map share the same component coordinate in the preset direction.
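Under the even-spacing reading of the formula above (first texture map at coordinate 0, last at X), the conversion from sequence frame map to volume map might be sketched as follows; the array shape and the dictionary structure of the volume map are illustrative assumptions:

```python
import numpy as np

def sequence_frames_to_volume_map(frames: np.ndarray, target_size: float) -> list:
    """Convert a sequence frame map -- an array of shape (a, H, W, C) holding
    the `a` texture maps in slicing order -- into a volume map pairing each
    texture map with its component coordinate H(k) in the preset direction."""
    a = frames.shape[0]
    volume_map = []
    for k in range(1, a + 1):
        # H(k) = (k - 1) * X / (a - 1): texture maps evenly spaced over the target size.
        h_k = (k - 1) * target_size / (a - 1) if a > 1 else 0.0
        volume_map.append({"height": h_k, "texture": frames[k - 1]})
    return volume_map
```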
In order to accurately render the material of the preset primitive model, in some embodiments, rendering the material of the preset primitive model based on the position of the preset primitive model in the target game scene and the volume map specifically includes:
determining target position coordinates of each vertex of the preset primitive model based on the position of the preset primitive model in a target game scene;
determining a closest target texture map from the volume map based on the target position coordinates;
determining the color of the vertex after rendering the material from the target texture map based on the target position coordinates;
And rendering the material of the preset primitive model based on the color of the vertex after rendering the material.
In implementation, the volume map includes a plurality of texture maps with spatial coordinates, and all pixel points within the same texture map share the same component coordinate in the preset direction. Therefore, the color of each vertex of the preset primitive model after material rendering is determined as follows: first determine the target texture map closest to the vertex, then use the vertex's target position coordinates to determine the color from the target texture map, i.e. the color of the pixel point corresponding to the target position coordinates in the target texture map becomes the vertex's rendered color. Once the color of every vertex of the preset primitive model is determined, the material rendering of the preset primitive model is complete.
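A sketch of this per-vertex lookup, reusing the volume map structure from the previous sketch and assuming the preset (slicing) direction is the z axis and that vertex (x, y) coordinates are already expressed in texel units of the texture map (in practice a world-to-texture scale factor would also be needed):

```python
def shade_vertex(vertex_pos, volume_map, origin=(0.0, 0.0, 0.0)):
    """Determine the rendered color of one vertex: pick the target texture
    map whose component coordinate is closest to the vertex along the
    slicing (z) direction, then read the pixel at the vertex's (x, y)."""
    lx = vertex_pos[0] - origin[0]
    ly = vertex_pos[1] - origin[1]
    lz = vertex_pos[2] - origin[2]
    # Closest target texture map in the preset direction.
    nearest = min(volume_map, key=lambda m: abs(m["height"] - lz))
    tex = nearest["texture"]
    h, w = tex.shape[:2]
    # Clamp (x, y) into the texture bounds and read the color.
    u = min(max(int(lx), 0), w - 1)
    v = min(max(int(ly), 0), h - 1)
    return tex[v, u]
```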
S104, generating the target volume fog model based on the primitive model after material rendering.
In implementation, after each material-rendered primitive model is obtained, the target volume fog model can be generated from all of the material-rendered primitive models. Fig. 6 shows a generated target volume fog model whose target game scene is an annular maze: the generated model is a section of fog inside the maze, and it can be seen that the fog in fig. 6 fits the annular maze perfectly.
In some embodiments, after generating the target volumetric fog model based on the primitive model after material rendering, the method further comprises:
in response to the target volume fog model triggering an interaction in the target game scene, determining a target preset primitive model for the interaction;
and generating the interaction special effect based on the target preset primitive model.
In implementation, because the target volume fog model in this scheme is composed of a plurality of preset primitive models, an interaction can involve only the target preset primitive models actually interacted with, thereby realizing regional, detailed interaction. Optionally, the interaction includes dissolution interaction, collision interaction, contact interaction, and the like; the specific interaction types can be set as needed and are not limited here.
In some embodiments, generating the interaction special effect based on the target preset primitive model specifically includes:
determining an initial state texture and an interaction time curve of the interaction special effect;
and replacing the texture of the material of the target preset primitive model with the initial state texture, and adjusting the initial state texture based on the interaction time curve.
In implementation, fig. 7 and fig. 8 show two different interaction time curves; the specific curve can be set as needed and is not limited. After an interaction time curve is determined, the material texture of the target preset primitive model is replaced with the initial state texture at the moment interaction is triggered, and the initial state texture is then gradually adjusted according to the interaction time curve, producing effects that change over time, such as recovery or dissipation. Optionally, the initial state texture may be determined according to the specific interaction content, which is not limited: for example, when the interaction is a dissolution interaction, the initial state texture may be a preset initial dissolution texture, and when it is a collision interaction, a preset initial collision texture.
For example, in both fig. 7 and fig. 8 the initial state texture gradually disappears over time. Optionally, as the initial state texture disappears, the target preset primitive model may gradually recover its original texture, or may gradually become fully transparent; the specific effect can be set as needed and is not limited.
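The texture swap and time-curve adjustment might be sketched as below; the primitive's dictionary fields and the linear two-second falloff are illustrative assumptions, not the patent's specific curves of fig. 7 and fig. 8:

```python
def apply_interaction(primitive: dict, initial_texture, time_curve, t: float) -> None:
    """At the moment interaction is triggered, replace the target preset
    primitive model's material texture with the initial state texture,
    then weight it by the interaction time curve so that the effect
    recovers or dissipates as time passes."""
    primitive["texture"] = initial_texture
    primitive["opacity"] = time_curve(t)  # 1.0 at impact, 0.0 when fully faded

def linear_falloff(t: float) -> float:
    """One possible interaction time curve: linear falloff over two seconds."""
    return max(0.0, 1.0 - t / 2.0)
```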
In some embodiments, after generating the target volumetric fog model based on the primitive model after material rendering, the method further comprises:
in response to the target volume fog model undergoing an integral deformation in the target game scene, wherein the integral deformation comprises a scaling deformation and/or a displacement deformation;
redetermining the position of the preset primitive model in the target game scene based on the integrally deformed target volume fog model;
and rendering materials of the preset primitive model based on the redetermined position of the preset primitive model in the target game scene and the volume map.
In implementation, when the target volume fog model as a whole undergoes a deformation such as displacement or scaling, the position of each preset primitive model in the target game scene can be redetermined, and the preset primitive models material-rendered again using the redetermined positions and the volume map, so that a new target volume fog model does not have to be made from scratch. Optionally, in some embodiments, while the positions of the preset primitive models are redetermined, the spatial coordinates of each texture map in the volume map may be scaled or displaced in the same proportion, so that the deformed target volume fog model remains consistent with the initial three-dimensional volume fog model.
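Reusing the `shade_vertex` sketch from above, the re-rendering after a whole-model deformation might look like this; uniform scaling about the origin and the dictionary fields are illustrative assumptions:

```python
def rerender_after_deformation(primitives: list, volume_map: list,
                               scale: float, offset) -> None:
    """After the target volume fog model is scaled and/or displaced as a
    whole, apply the same scale and displacement to the spatial coordinates
    of each texture map in the volume map, redetermine every primitive's
    position, and re-run the material lookup -- no new fog model is made."""
    for m in volume_map:
        m["height"] = m["height"] * scale + offset[2]
    for p in primitives:
        p["position"] = tuple(c * scale + o for c, o in zip(p["position"], offset))
        p["color"] = shade_vertex(p["position"], volume_map)
```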
According to the method for generating volume fog provided by the embodiments of the present application, a target game scene in which a target volume fog model is to be generated is determined; a volume space corresponding to the target volume fog model is determined based on the target game scene; a plurality of preset primitive models are generated in the volume space and material-rendered based on their positions in the target game scene; and the target volume fog model is generated based on the material-rendered primitive models. Because the generated target volume fog model is composed of individual preset primitive models, regional interaction with the volume fog can be realized; and because the overall model is formed by generating a plurality of preset primitive models in a determined volume space, the method is applicable to various game scenes and fits their terrain closely. In addition, the application provides a complete, programmatic method for generating terrain-fitting volume fog, reducing the manpower consumed in customizing it. The scheme does not use fluid simulation for the volume fog, avoiding the large amount of computation that fluid-based fog interaction entails; instead, a regional volume fog interaction mode is simulated without a fluid algorithm, yielding a lower-cost volume fog interaction method.
It should be noted that, the method of the embodiments of the present application may be performed by a single device, for example, a computer or a server. The method of the embodiment can also be applied to a distributed scene, and is completed by mutually matching a plurality of devices. In the case of such a distributed scenario, one of the devices may perform only one or more steps of the methods of embodiments of the present application, and the devices may interact with each other to complete the methods.
It should be noted that some embodiments of the present application are described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Exemplary apparatus
Based on the same inventive concept, the application also provides a device for generating volume fog, corresponding to the method of any of the above embodiments.
Referring to fig. 9, the volumetric fog generating apparatus includes:
a first determining module 201, configured to determine a target game scene in which a target volume fog model is to be generated;
a second determining module 202, configured to determine a volume space corresponding to the target volumetric fog model to be generated based on the target game scene;
a rendering module 203, configured to generate a plurality of preset primitive models in the volume space, and perform material rendering on the preset primitive models based on positions of the preset primitive models in a target game scene;
a generating module 204, configured to generate the target volume fog model based on the material-rendered primitive models.
In some embodiments, the rendering module 203 is specifically configured to:
generating an initial three-dimensional volume fog model with a preset form based on the target game scene;
slicing the initial three-dimensional volume fog model along a preset direction to obtain a sequence frame map; wherein the sequence frame map comprises a plurality of texture maps arranged in sequence;
converting the sequence frame map into a volume map; wherein the volume map comprises spatial coordinates of each of the texture maps;
and rendering materials of the preset primitive model based on the position of the preset primitive model in the target game scene and the volume map.
In some embodiments, the rendering module 203 is specifically configured to:
determining target position coordinates of each vertex of the preset primitive model based on the position of the preset primitive model in a target game scene;
determining a closest target texture map from the volume map based on the target position coordinates;
determining the color of the vertex after rendering the material from the target texture map based on the target position coordinates;
and rendering the material of the preset primitive model based on the color of the vertex after rendering the material.
In some embodiments, the rendering module 203 is further configured to:
obtaining the target size of the initial three-dimensional volume fog model along the preset direction and the target number of the texture maps in the sequence frame maps;
determining component coordinates of the texture map in the preset direction based on the target size, the target number and the arrangement sequence of the texture map;
determining two-dimensional coordinates of each pixel point in the texture map;
determining spatial coordinates of the texture map based on the two-dimensional coordinates and the component coordinates;
the sequence of frame maps is converted to a volume map based on the spatial coordinates of the texture map.
In some embodiments, the second determining module 202 is specifically configured to:
determining a target datum point in the target game scene;
transmitting a plurality of target rays around the target game scene with the target datum point as a center;
obtaining the target positions of collision of the target rays and other virtual models in the target game scene;
and determining the volume space based on the target positions.
In some embodiments, the rendering module 203 is specifically configured to:
randomly generating a plurality of preset primitive models in the volume space;
or sequentially generating a plurality of preset primitive models according to a preset sequence, and overlapping the edge areas of two adjacent preset primitive models.
In some embodiments, the apparatus further comprises an interaction module for:
in response to the target volume fog model triggering an interaction in the target game scene, determining a target preset primitive model for the interaction;
and generating the interaction special effect based on the target preset primitive model.
In some embodiments, the interaction module is specifically configured to:
determining an initial state texture and an interaction time curve of the interaction special effect;
And replacing the texture of the material of the target preset primitive model with the initial state texture, and adjusting the initial state texture based on the interaction time curve.
In some embodiments, the apparatus further comprises a deformation module for:
in response to the target volume fog model undergoing an integral deformation in the target game scene, wherein the integral deformation comprises a scaling deformation and/or a displacement deformation;
redetermining the position of the preset primitive model in the target game scene based on the integrally deformed target volume fog model;
and rendering materials of the preset primitive model based on the redetermined position of the preset primitive model in the target game scene and the volume map.
For convenience of description, the above system is described as being functionally divided into various modules, respectively. Of course, the functions of each module may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
The system of the foregoing embodiment is used to implement the corresponding method for generating the volumetric fog in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein.
Based on the same inventive concept, the application also provides an electronic device corresponding to the method of any embodiment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the method for generating the volume fog according to any embodiment when executing the program.
Fig. 10 shows a more specific hardware architecture of an electronic device according to this embodiment, where the device may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 implement communication connections therebetween within the device via a bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit ), microprocessor, application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc. for executing relevant programs to implement the technical solutions provided in the embodiments of the present disclosure.
The Memory 1020 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory ), static storage device, dynamic storage device, or the like. Memory 1020 may store an operating system and other application programs, and when the embodiments of the present specification are implemented in software or firmware, the associated program code is stored in memory 1020 and executed by processor 1010.
The input/output interface 1030 is used to connect with an input/output module for inputting and outputting information. The input/output module may be configured as a component in a device (not shown) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
Communication interface 1040 is used to connect communication modules (not shown) to enable communication interactions of the present device with other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 1050 includes a path for transferring information between components of the device (e.g., processor 1010, memory 1020, input/output interface 1030, and communication interface 1040).
It should be noted that although the above-described device only shows processor 1010, memory 1020, input/output interface 1030, communication interface 1040, and bus 1050, in an implementation, the device may include other components necessary to achieve proper operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The electronic device of the foregoing embodiment is configured to implement the method for generating the volumetric fog corresponding to any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
The memory 1020 stores machine readable instructions executable by the processor 1010, which when the electronic device is running, communicate between the processor 1010 and the memory 1020 over the bus 1050 such that the processor 1010 performs the following instructions when running: determining a target game scene of a target volume fog model to be generated; determining a volume space corresponding to the target volume fog model to be generated based on the target game scene; generating a plurality of preset primitive models in the volume space, and rendering materials of the preset primitive models based on the positions of the preset primitive models in a target game scene; and generating the target volume fog model based on the primitive model after the material rendering.
In a possible implementation manner, in the instructions executed by the processor 1010, the rendering of the material for the preset primitive model based on the position of the preset primitive model in the target game scene specifically includes:
generating an initial three-dimensional volume fog model with a preset form based on the target game scene;
Slicing the initial three-dimensional volume fog model along a preset direction to obtain a sequence frame map; wherein the sequence frame map comprises a plurality of texture maps arranged in sequence;
converting the sequence frame map into a volume map; wherein the volume map comprises spatial coordinates of each of the texture maps;
and rendering materials of the preset primitive model based on the position of the preset primitive model in the target game scene and the volume map.
In a possible implementation manner, in the instructions executed by the processor 1010, the rendering the material of the preset primitive model based on the position of the preset primitive model in the target game scene and the volume map specifically includes:
determining target position coordinates of each vertex of the preset primitive model based on the position of the preset primitive model in the target game scene;
determining a closest target texture map from the volume map based on the target position coordinates;
determining the color of the vertex after rendering the material from the target texture map based on the target position coordinates;
and rendering the material of the preset primitive model based on the color of the vertex after rendering the material.
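A minimal sketch of this per-vertex lookup follows. Mapping the vertex into normalized volume coordinates, taking the nearest slice as the target texture map, and reading the nearest texel are one plausible reading of the steps above; all names are hypothetical.

import numpy as np

def sample_volume(volume, vertex_pos, vol_min, vol_max):
    # Normalize the vertex's target position coordinates into [0, 1]^3
    # within the volume space covered by the volume map.
    res_x, res_y, num_slices = volume.shape
    uvw = (np.asarray(vertex_pos, dtype=float) - vol_min) / (vol_max - vol_min)
    uvw = np.clip(uvw, 0.0, 1.0)
    # The nearest slice along the slicing direction is the target texture
    # map; the nearest texel within it gives the vertex's rendered color.
    k = int(round(uvw[2] * (num_slices - 1)))
    i = int(round(uvw[0] * (res_x - 1)))
    j = int(round(uvw[1] * (res_y - 1)))
    return volume[i, j, k]

vol = np.random.rand(32, 32, 16)  # placeholder volume map
color = sample_volume(vol, vertex_pos=(1.0, 2.0, 0.5),
                      vol_min=np.zeros(3), vol_max=np.array([4.0, 4.0, 4.0]))
print(color)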
In a possible implementation manner, in the instructions executed by the processor 1010, converting the sequence frame map into the volume map specifically includes:
obtaining the target size of the initial three-dimensional volume fog model along the preset direction and the target number of the texture maps in the sequence frame map;
determining component coordinates of the texture map in the preset direction based on the target size, the target number and the arrangement sequence of the texture map;
determining two-dimensional coordinates of each pixel point in the texture map;
determining spatial coordinates of the texture map based on the two-dimensional coordinates and the component coordinates;
and converting the sequence frame map into the volume map based on the spatial coordinates of the texture maps.
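The coordinate bookkeeping above reduces to simple arithmetic, sketched below; the endpoint-to-endpoint spacing convention target_size * i / (target_number - 1) is an assumption about how the component coordinates are distributed.

def slice_component_coordinates(target_size, target_number):
    # Component coordinate of each texture map along the preset direction,
    # determined by the target size, target number, and arrangement order.
    return [target_size * i / (target_number - 1) for i in range(target_number)]

def pixel_spatial_coordinate(u, v, slice_index, target_size, target_number):
    # A pixel's spatial coordinate combines its two-dimensional texture
    # coordinates with the slice's component coordinate.
    w = target_size * slice_index / (target_number - 1)
    return (u, v, w)

print(slice_component_coordinates(4.0, 5))               # [0.0, 1.0, 2.0, 3.0, 4.0]
print(pixel_spatial_coordinate(0.25, 0.75, 2, 4.0, 5))   # (0.25, 0.75, 2.0)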
In a possible implementation manner, in the instructions executed by the processor 1010, determining, based on the target game scene, the volume space corresponding to the target volume fog model to be generated specifically includes:
determining a target datum point in the target game scene;
emitting a plurality of target rays around the target game scene with the target datum point as a center;
obtaining target positions at which the target rays collide with other virtual models in the target game scene;
and determining the volume space based on the target positions.
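One plausible reading of this ray-based determination is sketched below. The spherical stand-in for the surrounding scene geometry, the ring of rays in a horizontal plane, and the axis-aligned bounding box of the hit points are all illustrative assumptions.

import numpy as np

def raycast_sphere(origin, direction, center, radius):
    # Ray/sphere intersection; a stand-in for the engine's ray query
    # against the other virtual models in the scene.
    oc = origin - center
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc < 0:
        return None
    t_near = -b - np.sqrt(disc)
    t_far = -b + np.sqrt(disc)
    t = t_near if t_near > 0 else t_far  # the datum point may lie inside the geometry
    return origin + t * direction if t > 0 else None

def volume_space_from_rays(datum, num_rays=64):
    # Emit rays around the datum point and bound the fog volume by the
    # positions where they first collide with scene geometry.
    hits = []
    for k in range(num_rays):
        theta = 2 * np.pi * k / num_rays
        d = np.array([np.cos(theta), 0.0, np.sin(theta)])
        hit = raycast_sphere(datum, d, center=np.zeros(3), radius=10.0)
        if hit is not None:
            hits.append(hit)
    hits = np.array(hits)
    return hits.min(axis=0), hits.max(axis=0)

print(volume_space_from_rays(np.array([1.0, 0.0, 0.0])))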
In a possible implementation manner, in the instructions executed by the processor 1010, generating a plurality of preset primitive models in the volume space specifically includes:
randomly generating a plurality of preset primitive models in the volume space;
or sequentially generating a plurality of preset primitive models in a preset order, with the edge areas of two adjacent preset primitive models overlapping.
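Both placement strategies may be sketched as follows; the 10% edge overlap and the uniform random distribution are illustrative assumptions rather than prescribed parameters.

import numpy as np

def place_random(vol_min, vol_max, count, rng=np.random.default_rng(0)):
    # Randomly generate primitive positions inside the volume space.
    return rng.uniform(vol_min, vol_max, size=(count, 3))

def place_sequential(vol_min, vol_max, prim_size, overlap=0.1):
    # Generate positions in a preset order with a step slightly smaller
    # than the primitive size, so adjacent edge areas overlap.
    step = prim_size * (1.0 - overlap)
    axes = [np.arange(vol_min[i], vol_max[i] + 1e-9, step) for i in range(3)]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    return np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=1)

lo, hi = np.zeros(3), np.array([4.0, 2.0, 4.0])
print(place_random(lo, hi, count=5).shape)           # (5, 3)
print(place_sequential(lo, hi, prim_size=1.0).shape)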
In a possible implementation manner, in the instructions executed by the processor 1010, after generating the target volume fog model based on the material-rendered primitive models, the method further includes:
in response to the target volume fog model triggering an interaction in the target game scene, determining a target preset primitive model involved in the interaction;
and generating an interaction special effect based on the target preset primitive model.
In a possible implementation manner, in the instructions executed by the processor 1010, generating the interaction special effect based on the target preset primitive model specifically includes:
determining an initial state texture and an interaction time curve of the interaction special effect;
and replacing the texture of the material of the target preset primitive model with the initial state texture, and adjusting the initial state texture based on the interaction time curve.
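A minimal sketch of this texture replacement and curve-driven adjustment follows; the ease-out shape of the interaction time curve and the field names are assumptions.

class Primitive:
    def __init__(self, texture):
        self.texture = texture

def interaction_curve(t, duration=1.0):
    # Normalized ease-out curve: full effect at t = 0, faded at t = duration.
    s = min(max(t / duration, 0.0), 1.0)
    return (1.0 - s) ** 2

def trigger_interaction(prim, initial_state_texture, t):
    # Replace the target primitive's material texture with the initial
    # state texture, then adjust it along the interaction time curve.
    prim.texture = initial_state_texture
    return interaction_curve(t)

prim = Primitive(texture="fog_base")
for t in (0.0, 0.5, 1.0):
    print(t, trigger_interaction(prim, "fog_dispersed", t))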
In a possible implementation manner, in the instructions executed by the processor 1010, after generating the target volume fog model based on the material-rendered primitive models, the method further includes:
in response to the target volume fog model generating an integral deformation in the target game scene; wherein the integral deformation comprises a scaling deformation and/or a displacement deformation;
redetermining the position of the preset primitive model in the target game scene based on the integrally deformed target volume fog model;
and rendering materials of the preset primitive model based on the redetermined position of the preset primitive model in the target game scene and the volume map.
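For illustration, recomputing primitive positions under an integral deformation may be sketched as below; the pivot-relative uniform scaling and the additive displacement are assumptions, after which each primitive's material would be re-rendered by resampling the volume map at its new position.

import numpy as np

def apply_integral_deformation(positions, pivot, scale=1.0, offset=(0.0, 0.0, 0.0)):
    # Scaling deformation about the fog model's pivot plus a displacement
    # deformation; the result is the redetermined primitive positions.
    positions = np.asarray(positions, dtype=float)
    return pivot + scale * (positions - pivot) + np.asarray(offset, dtype=float)

old = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 2.0]])
new = apply_integral_deformation(old, pivot=np.array([1.0, 0.0, 1.0]),
                                 scale=1.5, offset=(0.0, 0.5, 0.0))
print(new)  # positions at which the materials would be re-rendered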
By the above method, when the electronic device runs, a target game scene of a target volume fog model to be generated is determined; a volume space corresponding to the target volume fog model to be generated is then determined based on the target game scene; a plurality of preset primitive models are generated in the volume space, and the materials of the preset primitive models are rendered based on the positions of the preset primitive models in the target game scene; and the target volume fog model is generated based on the material-rendered primitive models. Because the generated target volume fog model is composed of the individual preset primitive models, regional interaction with the volume fog can be realized; meanwhile, because the plurality of preset primitive models are generated within a determined volume space to form the integral target volume fog model, the method can be adapted to various game scenes and fits well to the terrain of each scene.
Exemplary Program Product
Based on the same inventive concept, corresponding to any of the above method embodiments, the present application further provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method for generating the volume fog described in any of the above embodiments.
The computer-readable media of the present embodiments include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The storage medium of the foregoing embodiment stores computer instructions for causing the computer to execute the method for generating the volume fog of any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again.
Based on the same inventive concept, corresponding to the method of any of the embodiments described above, the present application also provides a computer program product comprising a computer program. In some embodiments, the computer program is executable by one or more processors to cause the computer and/or the processor to perform the method of generating the volume fog of the above embodiments. Corresponding to the execution subject of each step in the embodiments of the method for generating the volume fog, the processor executing a given step may belong to the corresponding execution subject.
The computer program product of the above embodiment is configured to cause the computer and/or the processor to perform the method for generating the volume fog according to any of the above embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again.
It can be appreciated that, before the technical solutions of the embodiments of the present application are used, the user is informed of the type, scope of use, usage scenario, and the like of the personal information involved in an appropriate manner, and the user's authorization is obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly remind the user that the operation requested to be performed will require obtaining and using the user's personal information. The user can thus decide, according to the prompt information, whether to provide personal information to software or hardware such as an electronic device, an application program, a server, or a storage medium that executes the operations of the technical solution.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user by way of, for example, a popup window, in which the prompt information may be presented as text. In addition, the popup window may carry a selection control allowing the user to choose to 'agree' or 'disagree' to provide personal information to the electronic device.
It will be appreciated that the above-described process of notifying the user and obtaining the user's authorization is merely illustrative and does not limit the implementation of the present application; other ways of satisfying relevant laws and regulations may also be applied to the implementation of the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be implemented as a system, method, or computer program product. Thus, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining hardware and software aspects, which may generally be referred to herein as a "circuit," "module," or "system." Furthermore, in some embodiments, the present application may also take the form of a computer program product embodied in one or more computer-readable media containing computer-readable program code.
Any combination of one or more computer readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, in accordance with embodiments of the present application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Those of ordinary skill in the art will appreciate that the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of the application (including the claims) is limited to these examples. Within the idea of the present application, the technical features of the above embodiments, or of different embodiments, may also be combined, the steps may be implemented in any order, and there exist many other variations of the different aspects of the embodiments of the present application as described above, which are not provided in detail for the sake of brevity.
Additionally, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown within the provided figures, in order to simplify the illustration and discussion, and so as not to obscure the embodiments of the present application. Furthermore, the devices may be shown in block diagram form in order to avoid obscuring the embodiments of the present application, and this also takes into account the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform on which the embodiments of the present application are to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the application, it should be apparent to one skilled in the art that embodiments of the application can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative in nature and not as restrictive.
While the present application has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of those embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the embodiments discussed.
The present embodiments are intended to embrace all such alternatives, modifications and variations as fall within the broad scope of the appended claims. Accordingly, any omissions, modifications, equivalents, improvements and the like made within the spirit and principles of the embodiments are intended to be included within the scope of the present application.

Claims (12)

1. A method for generating a volume fog model, comprising:
determining a target game scene of a target volume fog model to be generated;
determining a volume space corresponding to the target volume fog model to be generated based on the target game scene;
generating a plurality of preset primitive models in the volume space, and rendering materials of the preset primitive models based on the positions of the preset primitive models in the target game scene;
and generating the target volume fog model based on the primitive model after the material rendering.
2. The method according to claim 1, wherein rendering the material of the preset primitive model based on the position of the preset primitive model in the target game scene specifically comprises:
generating an initial three-dimensional volume fog model with a preset form based on the target game scene;
slicing the initial three-dimensional volume fog model along a preset direction to obtain a sequence frame map; wherein the sequence frame map comprises a plurality of texture maps arranged in sequence;
converting the sequence frame map into a volume map; wherein the volume map comprises spatial coordinates of each of the texture maps;
and rendering materials of the preset primitive model based on the position of the preset primitive model in the target game scene and the volume map.
3. The method according to claim 2, wherein rendering the material of the preset primitive model based on the position of the preset primitive model in the target game scene and the volume map specifically comprises:
determining target position coordinates of each vertex of the preset primitive model based on the position of the preset primitive model in the target game scene;
determining a closest target texture map from the volume map based on the target position coordinates;
determining the color of the vertex after rendering the material from the target texture map based on the target position coordinates;
and rendering the material of the preset primitive model based on the color of the vertex after rendering the material.
4. The method according to claim 2, wherein converting the sequence frame map into the volume map specifically comprises:
obtaining the target size of the initial three-dimensional volume fog model along the preset direction and the target number of the texture maps in the sequence frame map;
determining component coordinates of the texture map in the preset direction based on the target size, the target number and the arrangement sequence of the texture map;
determining two-dimensional coordinates of each pixel point in the texture map;
determining spatial coordinates of the texture map based on the two-dimensional coordinates and the component coordinates;
and converting the sequence frame map into the volume map based on the spatial coordinates of the texture maps.
5. The method of claim 1, wherein determining the volume space corresponding to the target volume fog model to be generated based on the target game scene specifically comprises:
determining a target datum point in the target game scene;
emitting a plurality of target rays around the target game scene with the target datum point as a center;
obtaining target positions at which the target rays collide with other virtual models in the target game scene;
and determining the volume space based on the target positions.
6. The method according to claim 1, wherein generating a plurality of preset primitive models in the volume space specifically comprises:
randomly generating a plurality of preset primitive models in the volume space;
or sequentially generating a plurality of preset primitive models in a preset order, with the edge areas of two adjacent preset primitive models overlapping.
7. The method of claim 1, wherein after generating the target volume fog model based on the primitive model after material rendering, the method further comprises:
in response to the target volume fog model triggering an interaction in the target game scene, determining a target preset primitive model involved in the interaction;
and generating an interaction special effect based on the target preset primitive model.
8. The method according to claim 7, wherein generating the interaction special effect based on the target preset primitive model specifically comprises:
determining an initial state texture and an interaction time curve of the interaction special effect;
and replacing the texture of the material of the target preset primitive model with the initial state texture, and adjusting the initial state texture based on the interaction time curve.
9. The method of claim 2, wherein after generating the target volume fog model based on the primitive model after material rendering, the method further comprises:
in response to the target volume fog model generating an integral deformation in the target game scene; wherein the integral deformation comprises a scaling deformation and/or a displacement deformation;
redetermining the position of the preset primitive model in the target game scene based on the integrally deformed target volume fog model;
and rendering materials of the preset primitive model based on the redetermined position of the preset primitive model in the target game scene and the volume map.
10. A device for generating volume fog, comprising:
the first determining module is used for determining a target game scene of a target volume fog model to be generated;
the second determining module is used for determining a volume space corresponding to the to-be-generated target volume fog model based on the target game scene;
the rendering module is used for generating a plurality of preset primitive models in the volume space and rendering materials of the preset primitive models based on the positions of the preset primitive models in the target game scene;
and the generating module is used for generating the target volume fog model based on the primitive models after the material rendering.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable by the processor, the processor implementing the method of any one of claims 1 to 9 when the program is executed.
12. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 9.
CN202311658939.2A 2023-12-05 2023-12-05 Method and device for generating volume fog, electronic equipment and storage medium Pending CN117599420A (en)

Priority Applications (1)

Application Number: CN202311658939.2A
Publication: CN117599420A (en)
Priority Date: 2023-12-05
Filing Date: 2023-12-05
Title: Method and device for generating volume fog, electronic equipment and storage medium

Publications (1)

Publication Number: CN117599420A
Publication Date: 2024-02-27

Family

ID=89956030

Family Applications (1)

Application Number: CN202311658939.2A (Status: Pending)
Publication: CN117599420A (en)
Priority Date: 2023-12-05
Filing Date: 2023-12-05
Title: Method and device for generating volume fog, electronic equipment and storage medium

Country Status (1)

Country: CN
Publication: CN117599420A (en)

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination