CN117085318A - Smoke generation method, device, computer-readable storage medium, and electronic device - Google Patents

Smoke generation method, device, computer-readable storage medium, and electronic device

Info

Publication number
CN117085318A
Authority
CN
China
Prior art keywords
strip
mapping
smoke
shaped
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310935535.7A
Other languages
Chinese (zh)
Inventor
孟庆宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202310935535.7A priority Critical patent/CN117085318A/en
Publication of CN117085318A publication Critical patent/CN117085318A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/60 3D [Three Dimensional] animation of natural phenomena, e.g. rain, snow, water or plants

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The application discloses a smoke generation method and apparatus, a computer-readable storage medium, and an electronic device, relating to the technical field of image processing. The method comprises the following steps: acquiring a first strip-shaped patch, a material ball, a first map and a second map, wherein the first map is a mask map and the second map is a noise map; rendering the first strip-shaped patch according to the material ball to obtain a second strip-shaped patch; obtaining a third strip-shaped patch according to the second strip-shaped patch, the first map and the second map, wherein the vertex coordinates of the third strip-shaped patch vary with time; and obtaining a smoke-drifting special effect according to the third strip-shaped patch, the first map and the second map, wherein both the vertex coordinates and the texture coordinates of the smoke-drifting special effect vary with time. The application solves the technical problem in the related art that rendering smoke with volume-rendering techniques such as ray marching incurs high video memory consumption and high bandwidth consumption.

Description

Smoke generation method, device, computer-readable storage medium, and electronic device
Technical Field
The present application relates to the field of image processing technology, and in particular, to a smoke generation method, a smoke generation apparatus, a computer-readable storage medium, and an electronic device.
Background
In the image rendering of games, there is often a need to render smoke fluid effects, for example when making the special effect for a smoke-releasing skill, or when generating a long, drifting wisp of green smoke. Currently, in the image rendering process of games, smoke in three-dimensional (3D) space is generally rendered using volume-rendering techniques such as ray marching (Raymarching).
However, rendering smoke in this way incurs high video memory consumption and bandwidth consumption, places certain demands on terminal performance, and cannot be used on low-performance terminals.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
At least some embodiments of the present application provide a smoke generation method and apparatus, a computer-readable storage medium, and an electronic device, so as to at least solve the technical problem in the related art that rendering smoke with volume-rendering techniques such as ray marching incurs high video memory consumption and high bandwidth consumption.
According to one embodiment of the present application, there is provided a smoke generation method, the method comprising: acquiring a first strip-shaped patch, a material ball, a first map and a second map, wherein the first strip-shaped patch is a patch for simulating the shape of smoke, the first map is a mask map, and the second map is a noise map; rendering the first strip-shaped patch according to the material ball to obtain a second strip-shaped patch, wherein the second strip-shaped patch is used for simulating the shape and texture of smoke; obtaining a third strip-shaped patch according to the second strip-shaped patch, the first map and the second map, wherein the vertex coordinates of the third strip-shaped patch vary with time; and obtaining a smoke-drifting special effect according to the third strip-shaped patch, the first map and the second map, wherein the vertex coordinates and texture coordinates of the smoke-drifting special effect vary with time.
There is also provided, in accordance with an embodiment of the present application, a smoke generation apparatus, comprising: an acquisition module, configured to acquire a first strip-shaped patch, a material ball, a first map and a second map, wherein the first strip-shaped patch is a patch for simulating the shape of smoke, the first map is a mask map, and the second map is a noise map; a rendering module, configured to render the first strip-shaped patch according to the material ball to obtain a second strip-shaped patch, wherein the second strip-shaped patch is used for simulating the shape and texture of smoke; a first determining module, configured to obtain a third strip-shaped patch according to the second strip-shaped patch, the first map and the second map, wherein the vertex coordinates of the third strip-shaped patch vary with time; and a second determining module, configured to obtain a smoke-drifting special effect according to the third strip-shaped patch, the first map and the second map, wherein the vertex coordinates and texture coordinates of the smoke-drifting special effect vary with time.
According to one embodiment of the present application, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the smoke generation method of the above embodiments when run on a computer or processor.
According to one embodiment of the present application, there is also provided an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the smoke generation method of the above embodiments.
In at least some embodiments of the present application, a first strip-shaped patch, a material ball, a first map and a second map are acquired, where the first strip-shaped patch is a patch for simulating the shape of smoke, the first map is a mask map, and the second map is a noise map; the first strip-shaped patch is rendered according to the material ball to obtain a second strip-shaped patch, where the second strip-shaped patch is used for simulating the shape and texture of smoke; a third strip-shaped patch is obtained according to the second strip-shaped patch, the first map and the second map, where the vertex coordinates of the third strip-shaped patch vary with time; and a smoke-drifting special effect is obtained according to the third strip-shaped patch, the first map and the second map, where the vertex coordinates and texture coordinates of the smoke-drifting special effect vary with time. This achieves the purpose of generating a dynamic smoke fluid image through simple modeling combined with shader computation, so that the technical effect of rendering a smoke fluid special effect at low performance cost, without using ray marching, can be achieved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a block diagram of the hardware configuration of a mobile terminal for a smoke generation method according to an embodiment of the present application;
fig. 2 is a flowchart of a smoke generation method according to one embodiment of the present application;
FIG. 3 is a schematic diagram of a first strip-shaped patch according to one embodiment of the present application;
FIG. 4 is a schematic diagram of a mask map according to one embodiment of the present application;
FIG. 5 is a schematic diagram of a noise map according to one embodiment of the present application;
FIG. 6 is a schematic diagram of a second strip-shaped patch according to one embodiment of the present application;
FIG. 7 is a schematic diagram of a screen coordinate system according to one embodiment of the present application;
FIG. 8 is a schematic diagram of a third strip-shaped patch according to one embodiment of the present application;
FIG. 9 is a schematic illustration of a warped image according to one embodiment of the present application;
fig. 10 is a schematic illustration of a smoke-distorted image according to one embodiment of the present application;
FIG. 11 is a schematic illustration of a transparency image according to one embodiment of the present application;
FIG. 12 is a schematic diagram of a smoke-drifting special effect according to one embodiment of the present application;
fig. 13 is a block diagram of a smoke generation apparatus according to an alternative embodiment of the present application;
fig. 14 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For ease of understanding, a description of some of the concepts related to the embodiments of the application are given by way of example for reference.
The following is shown:
Ray marching (Raymarching): a common rendering method in which a ray is advanced from the camera step by step, sampling the scene according to the distance traveled, to determine the final rendered color value of a pixel.
Unity: a cross-platform game development engine for developing two-dimensional (2D) and three-dimensional (3D) games.
Real-time render texture (RenderTexture): used in a game engine to represent an image texture that is being rendered to in real time in the background.
Vertex shader: an important stage in the graphics rendering pipeline and one of the most basic shader types. Its main function is to process the input geometry (such as triangles, quadrilaterals, etc.) and output the position and color information of each vertex on the screen. In addition to the most basic position attribute, vertices may carry many other attributes, such as texture coordinates and normals. Through the vertex shader, the graphics card knows where on the screen each vertex should be drawn.
Pixel shader: also known as a fragment shader, is program code running on the graphics processor that describes how to apply color, texture coordinates and other information to each pixel. A pixel shader typically receives input parameters including vertex information, texture coordinates and user-defined variables; after computing with these inputs, it outputs a color value as the color the pixel ultimately displays.
Material ball: an object containing texture, color, highlighting, etc. information is used to apply these properties to the model. Patches are the basic units that make up a model, and can be given different material balls to exhibit different effects, each patch can be designated as which material ball to use for rendering.
In order that those skilled in the art may better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In one possible implementation, in the technical field of image processing, when a smoke fluid special effect is rendered during the image rendering of a game, it is generated by constructing 3D space coordinates for the screen space and using the Raymarching technique to step toward the light source while integrating illumination intensity.
After practice and careful study, the inventor found that the above method still has problems: it requires allocating a RenderTexture, which results in large video memory consumption, and it requires sampling maps many times, which results in large bandwidth consumption; that is, it demands high video memory and bandwidth consumption and cannot be used on low-performance terminals. On this basis, the game scenario to which the embodiments of the present application apply may be the field of image processing in games, and a smoke generation method is provided: a first strip-shaped patch, a material ball, a first map and a second map are acquired, where the first strip-shaped patch is a patch for simulating the shape of smoke, the first map is a mask map, and the second map is a noise map; the first strip-shaped patch is rendered according to the material ball to obtain a second strip-shaped patch, where the second strip-shaped patch is used for simulating the shape and texture of smoke; a third strip-shaped patch is obtained according to the second strip-shaped patch, the first map and the second map, where the vertex coordinates of the third strip-shaped patch vary with time; and a smoke-drifting special effect is obtained according to the third strip-shaped patch, the first map and the second map, where the vertex coordinates and texture coordinates of the smoke-drifting special effect vary with time. This achieves the purpose of generating a dynamic smoke fluid image through simple modeling combined with shader computation, so that the technical effect of rendering a smoke fluid special effect at low performance cost, without using ray marching, can be achieved.
The above method embodiments of the present application may be implemented in a mobile terminal, a computer terminal or a similar computing device. Taking running on a mobile terminal as an example, the mobile terminal may be a terminal device such as a smart phone, a handheld computer, a mobile internet device, a tablet computer (PAD) or a game console. Fig. 1 is a block diagram of the hardware configuration of a mobile terminal for a smoke generation method according to an embodiment of the present application. As shown in fig. 1, the mobile terminal may include one or more (only one is shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processing (DSP) chip, a microcontroller unit (MCU), a field-programmable gate array (FPGA), a neural network processing unit (NPU), a tensor processing unit (TPU), an artificial intelligence (AI) processor, etc.) and a memory 104 for storing data, and in one embodiment of the present application may further include: a transmission device 106, an input/output device 108, and a display device 110.
In some optional embodiments based on game scenarios, the device may further provide a human-machine interaction interface with a touch-sensitive surface, which may sense finger contacts and/or gestures to interact with a graphical user interface (GUI). The human-machine interaction functions may include interactions such as creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, sending and receiving e-mail, call interfaces, playing digital video, playing digital music, and/or web browsing. Executable instructions for performing these human-machine interaction functions are configured/stored in a computer program product or readable storage medium executable by one or more processors.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
According to one embodiment of the present application, there is provided an embodiment of a smoke generation method. It should be noted that the steps shown in the flowchart of the figures may be performed in a computer system, such as with a set of computer-executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in a different order than described herein.
In one possible implementation, an embodiment of the present application provides a smoke generation method. Fig. 2 is a flowchart of a smoke generation method according to one embodiment of the present application; as shown in fig. 2, the method comprises the following steps:
Step S20, a first strip-shaped patch, a material ball, a first map and a second map are acquired.
The first strip-shaped patch is used for simulating the shape of smoke, the first map is a mask map, and the second map is a noise map.
When performing smoke rendering, a smoke model first needs to be created. The first strip-shaped patch in step S20 can be understood as a patch created to simulate the shape of smoke, and it may be formed by combining a preset number of strip patches.
For example, a first strip-shaped patch matched to the smoke path may be created in modeling software such as 3ds Max or Maya, as shown in fig. 3, which is a schematic diagram of the first strip-shaped patch according to one embodiment of the present application. In fig. 3, the first strip-shaped patch is formed by combining 3 strip patches. When the first strip-shaped patch is created, its texture coordinates may be unwrapped from bottom to top, in preparation for the subsequent vertical translation of the texture animation. The texture coordinates of the first strip-shaped patch shown in fig. 3 may also be arranged in 3 columns from left to right, in preparation for assigning different sampling parameters to distinguish the different strip patches when the mask map is subsequently sampled.
It can be understood that the preset number of strip patches can be set according to actual art requirements; the embodiments of the present application are not limited in this respect.
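The UV layout described above can be sketched numerically. The following Python snippet is an illustrative assumption (function and variable names are hypothetical, not from the patent): it assigns texture coordinates to one of 3 strip patches, with V running from 0 at the bottom to 1 at the top (ready for vertical scrolling) and U confined to one of 3 columns so the strips can later be told apart when sampling the mask map.

```python
def strip_uvs(strip_index, rows, num_strips=3):
    """Texture coordinates for one strip patch: V runs 0 -> 1 from bottom
    to top, and U spans one of `num_strips` columns so each strip can be
    distinguished later when the mask map is sampled."""
    u0 = strip_index / num_strips          # left edge of this strip's column
    u1 = (strip_index + 1) / num_strips    # right edge of this strip's column
    uvs = []
    for r in range(rows + 1):
        v = r / rows                       # 0 at the bottom row, 1 at the top row
        uvs.append(((u0, v), (u1, v)))     # (left vertex, right vertex) of this row
    return uvs

# Strip 0 of 3 occupies the U range [0, 1/3]; strip 2 occupies [2/3, 1].
left_strip = strip_uvs(0, rows=4)
right_strip = strip_uvs(2, rows=4)
```

A real mesh would of course be authored in the modeling package; the sketch only shows how column-wise UVs separate the strips.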
A material ball can be understood as an object for imparting a material to the first strip-shaped patch. Illustratively, a custom shader "Smoke" may be created in the Unity engine and used to create a material ball; the embodiments of the present application are not limited in this respect.
The first map is a mask map, which may be denoted MaskTex, and in the embodiments of the present application it is used to determine the texture display range and texture color intensity of the first strip-shaped patch, so as to simulate smoke more realistically.
As shown in fig. 4, which is a schematic diagram of a mask map according to an embodiment of the present application, the R channel of MaskTex may be used as the overall shape mask of the smoke, while the G and B channels of MaskTex are used to distinguish the 3 columns of texture coordinates of the first strip-shaped patch arranged from left to right, so that corresponding sampling parameters can subsequently be determined for the different strip patches, achieving richer random sampling results and better visual quality.
It can be understood that the sampling parameters corresponding to different strip patches may be the same or different and can be set according to actual art requirements; the embodiments of the present application are not limited in this respect.
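One way the mask-map channels could drive per-strip sampling parameters is sketched below in Python. This is a hedged illustration, not the patent's exact scheme: the channel encoding (G marks column 1, B marks column 2, neither marks column 0) and the parameter values are assumptions.

```python
def strip_id_from_mask(g, b):
    """Infer which of the 3 strip columns a texel belongs to from the
    G and B mask channels (illustrative encoding: G marks column 1,
    B marks column 2, neither marks column 0)."""
    if g > 0.5:
        return 1
    if b > 0.5:
        return 2
    return 0

def sample_params(g, b, params=(0.8, 1.0, 1.2)):
    """Pick a per-strip sampling parameter (e.g. a noise scroll speed)
    from the column id; the three values here are placeholders that an
    artist would tune."""
    return params[strip_id_from_mask(g, b)]

speed = sample_params(g=1.0, b=0.0)   # a texel on column 1
```

In the shader, the same lookup would be a texture sample of MaskTex followed by a channel test or interpolation.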
The second map is a noise map, which may be denoted NoiseTex: a pattern that looks random and irregularly distributed, generated by a pseudo-random algorithm.
Illustratively, as shown in fig. 5, which is a schematic diagram of a noise map according to an embodiment of the present application, the R and G channels of NoiseTex may be used as patterns for warping and offsetting vertex coordinates or texture coordinates in the shader, and the B channel may be used as the smoke-shape texture pattern; the embodiments of the present application are not limited in this respect.
The first map and the second map can be produced in drawing software according to actual art requirements.
Step S22, the first strip-shaped patch is rendered according to the material ball to obtain a second strip-shaped patch.
The second strip-shaped patch is a patch for simulating the shape and texture of smoke.
The material ball may be one having smoke properties; the first strip-shaped patch is rendered based on the acquired material ball, so that the second strip-shaped patch is given a smoke effect. Illustratively, fig. 6 is a schematic diagram of a second strip-shaped patch according to one embodiment of the present application.
Step S24, a third strip-shaped patch is obtained according to the second strip-shaped patch, the first map and the second map.
The vertex coordinates of the third strip-shaped patch vary with time.
Step S24 may be implemented in a vertex shader: the first map and the second map are sampled based on the second strip-shaped patch, resulting in a third strip-shaped patch whose vertex coordinates vary with time. That is, in the vertex shader, offsetting the vertices of the smoke model in clip space produces a vertex animation of the smoke body's fluctuation, simulating the effect of undulating smoke.
Step S26, the smoke-drifting special effect is obtained according to the third strip-shaped patch, the first map and the second map.
The vertex coordinates and texture coordinates of the smoke-drifting special effect vary with time.
Step S26 may be implemented in a pixel shader: the first map and the second map are sampled based on the third strip-shaped patch, for which the vertex offset has already been applied, to obtain a smoke-drifting special effect in which both the vertex coordinates and the texture coordinates vary with time. That is, in the pixel shader, the texture coordinates are offset to form a flowing texture animation within the smoke image; the effects of the vertex animation and the texture animation are superimposed, finally yielding an animation that simulates the smoke fluid special effect at low cost.
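The texture animation in the pixel shader amounts to translating UVs over time before sampling. The following Python sketch models that scroll (the speed values and function names are illustrative assumptions; in HLSL the wrap-around would typically be `frac()` or the sampler's repeat mode):

```python
def scroll_uv(uv, time, speed=(0.0, 0.1)):
    """Translate texture coordinates over time so the sampled smoke
    texture appears to flow along the strip; the modulo keeps UVs in
    [0, 1) for a repeating texture."""
    u = (uv[0] + speed[0] * time) % 1.0
    v = (uv[1] + speed[1] * time) % 1.0
    return (u, v)

uv0 = scroll_uv((0.25, 0.0), time=0.0)   # unchanged at t = 0
uv1 = scroll_uv((0.25, 0.0), time=5.0)   # V advanced by roughly 0.5
```

Sampling NoiseTex's B channel (the smoke-shape pattern) with `uv1` instead of `uv0` is what produces the flowing texture animation described above.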
Through the above steps, a first strip-shaped patch, a material ball, a first map and a second map are acquired, where the first strip-shaped patch is a patch for simulating the shape of smoke, the first map is a mask map, and the second map is a noise map; the first strip-shaped patch is rendered according to the material ball to obtain a second strip-shaped patch, where the second strip-shaped patch is used for simulating the shape and texture of smoke; a third strip-shaped patch is obtained according to the second strip-shaped patch, the first map and the second map, where the vertex coordinates of the third strip-shaped patch vary with time; and a smoke-drifting special effect is obtained according to the third strip-shaped patch, the first map and the second map, where the vertex coordinates and texture coordinates of the smoke-drifting special effect vary with time. This achieves the purpose of generating a dynamic smoke fluid image through simple modeling combined with shader computation, so that the technical effect of rendering a smoke fluid special effect at low performance cost, without using ray marching, can be achieved.
In one possible embodiment, the first strip-shaped patch is composed of a preset number of strip patches, and in step S24, obtaining the third strip-shaped patch according to the second strip-shaped patch, the first map and the second map may include the following steps:
Step S241, in the vertex shader, the first vertex coordinates of the second strip-shaped patch are offset using a trigonometric function and a time variable to obtain a screen offset.
The first vertex coordinates are the vertex coordinates of the second strip-shaped patch in screen space, and the screen offset can be understood as an image whose vertex coordinates are in a swaying state.
In screen texture coordinates, the first vertex coordinates are offset by a trigonometric function and a time variable to obtain the screen offset; that is, the screen offset is generated with a trigonometric function based on the screen-space position.
Step S242, the second map is sampled based on at least one first sampling parameter corresponding to the preset number of strip patches, the time variable, and the first texture coordinates of the second strip-shaped patch, to obtain a noise offset.
Wherein the at least one first sampling parameter is determined from the first map.
Each of the preset number of strip patches constituting the first strip-shaped patch has a corresponding first sampling parameter, which can be determined by sampling and interpolating the first map. The first sampling parameter of each strip patch may be the same or different.
The second map is sampled according to the at least one first sampling parameter, the time variable and the first texture coordinates of the second strip-shaped patch, thereby obtaining the noise offset; that is, the noise offset is obtained by sampling NoiseTex with translated coordinates.
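The noise-offset computation can be sketched as follows in Python. The stand-in noise function, speed and strength values, and the -1..1 remapping are illustrative assumptions; a real shader would sample the R and G channels of the NoiseTex map instead of evaluating a formula.

```python
import math

def noise_tex(u, v):
    """Stand-in for sampling NoiseTex's R/G channels: a cheap periodic
    pseudo-noise returning two values in [0, 1]."""
    return (0.5 + 0.5 * math.sin(6.28318 * (u + 2 * v)),
            0.5 + 0.5 * math.sin(6.28318 * (3 * u - v)))

def noise_offset(uv, time, speed, strength=0.05):
    """Sample the noise with UVs translated over time (the per-strip
    sampling parameter would feed into `speed`); remap the 0..1 sample
    to -1..1 and scale it into a small vertex offset."""
    r, g = noise_tex((uv[0] + speed * time) % 1.0,
                     (uv[1] + speed * time) % 1.0)
    return ((r * 2 - 1) * strength, (g * 2 - 1) * strength)

off = noise_offset((0.3, 0.7), time=1.0, speed=0.05)
```

Because the UVs drift with time, the offset returned for a fixed vertex changes every frame, which is what makes the noise offset animate.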
Step S243, the third strip-shaped patch is determined according to the screen offset, the noise offset and the second strip-shaped patch.
By combining the screen offset and the noise offset and outputting them together with the second strip-shaped patch, a third strip-shaped patch with a more vivid and natural drifting effect can be obtained.
It can be appreciated that, in actual use, the intensities of the screen offset and the noise offset can be adjusted independently, which makes it convenient for art producers to control the effect flexibly. If the intensity of a given offset is set to 0, the calculation of that offset can be skipped in practice, saving computation.
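The combination step with independent intensities can be sketched in a few lines of Python (parameter names are hypothetical; in the shader these strengths would be material properties):

```python
def combine_offsets(screen_off, noise_off, screen_strength=1.0, noise_strength=1.0):
    """Blend the screen-space sine offset and the noise-map offset,
    each with its own artist-tunable intensity; a strength of 0 means
    that term contributes nothing and its computation could be skipped."""
    return (screen_off[0] * screen_strength + noise_off[0] * noise_strength,
            screen_off[1] * screen_strength + noise_off[1] * noise_strength)

total = combine_offsets((0.01, -0.02), (0.005, 0.005),
                        screen_strength=1.0, noise_strength=0.5)
```

Setting either strength to 0 reproduces the optimization noted above: the corresponding offset drops out of the sum entirely.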
In a possible implementation, in step S241, offsetting the first vertex coordinates of the second strip-shaped patch using a trigonometric function and a time variable to obtain the screen offset may include the following steps:
Step S2411, the second vertex coordinates of the second strip-shaped patch are converted from object space to clip space to obtain third vertex coordinates;
Step S2412, the third vertex coordinates are converted from clip space to screen space to obtain fourth vertex coordinates;
Step S2413, the fourth vertex coordinates are offset using a sine function and the time variable to obtain the screen offset.
The screen offset is an offset in the lateral and longitudinal directions.
To offset the vertex coordinates in screen space, they must first be converted into screen space: the second vertex coordinates of the second strip-shaped patch in object space are obtained; the second vertex coordinates are converted from object space to clip space to obtain the third vertex coordinates; and the third vertex coordinates in clip space are then converted into screen space to obtain the fourth vertex coordinates. Finally, the fourth vertex coordinates in screen space are offset using a sine function and the time variable _Time to obtain the screen offset.
Illustratively, the second vertex coordinates may be converted from the object space to the clipping space by a float4 pos = UnityObjectToClipPos(v.vertex) statement, resulting in the third vertex coordinates.
The third vertex coordinates in the clipping space are then converted into the screen space through a float2 ScreenUV = pos.xy / pos.w * 0.5 + 0.5 statement, so as to obtain the first vertex coordinates.
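The perspective divide and the remap from normalized device coordinates to [0, 1] UVs performed by this statement can be sketched on the CPU as follows. This is a minimal illustration, not part of the shader itself; the function name is hypothetical:

```python
def clip_to_screen_uv(pos):
    """Perspective divide, then remap from NDC [-1, 1] to [0, 1], mirroring
    the ScreenUV statement above. pos is a clip-space (x, y, z, w) tuple."""
    x, y, z, w = pos
    return (x / w * 0.5 + 0.5, y / w * 0.5 + 0.5)
```

For example, a vertex at the clip-space origin maps to the screen center (0.5, 0.5).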
Finally, a distortion effect is achieved on the screen by the float ScreenPattern = sin(ScreenUV.y * 20 + _Time.y * 2) * sin(ScreenUV.x * 10 + _Time.y * 2) and float4 ScreenTwist = float4(ScreenPattern, ScreenPattern, 0, 0) statements. It can be seen that the sin function and the Time variable _Time are used to generate the wavy texture ScreenPattern; in order to ensure that the depth is correct, the Z axis is not shifted, so the finally generated screen offset ScreenTwist is distorted only in the X and Y directions, with Z and W each assigned 0.
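The sine-based wave can be sketched on the CPU as follows, using the frequency and speed constants (20, 10, 2) from the statements above. This is an illustrative emulation, not shader code; the function name is hypothetical:

```python
import math

def screen_twist(screen_uv, time):
    """CPU-side sketch of the sine-based wavy pattern above. screen_uv is
    (x, y) in [0, 1]; time stands in for _Time.y. Z and W stay 0 so that
    depth is unaffected."""
    u, v = screen_uv
    pattern = math.sin(v * 20 + time * 2) * math.sin(u * 10 + time * 2)
    return (pattern, pattern, 0.0, 0.0)
```

Because the pattern is a product of two sines, its magnitude never exceeds 1, and the same value is applied to both the X and Y components of the offset.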
Illustratively, as shown in fig. 7, fig. 7 is a schematic view of a screen coordinate system according to an embodiment of the present application, in which an origin of the screen coordinate system is generally located at an upper left corner of the screen, an X-axis direction is a horizontal direction (i.e., a lateral direction), extends from a left side to a right side of the screen, and an X-axis positive direction is directed to a right side of the screen. The Y-axis direction is a vertical direction (i.e., longitudinal direction), and the Y-axis positive direction is directed to the lower side of the screen, extending from the upper side of the screen to the lower side.
It should be noted that the origin and axis direction of the screen coordinate system may vary depending on the requirements of a particular platform, graphics library, or rendering engine. For example, some platforms or tools may place the origin of the screen coordinate system in the lower left corner, while the positive Y-axis direction is directed upward, and embodiments of the present application are not limited.
In the embodiment of the present application, the shift of the screen offset in the lateral direction may be understood as the vertex coordinates being distorted in the X-axis direction in fig. 7, and the shift of the screen offset in the longitudinal direction may be understood as the vertex coordinates being distorted in the Y-axis direction in fig. 7. That is, the vertex coordinates in the embodiment of the present application can achieve a distortion effect in the X-axis and Y-axis directions at the same time, so as to more realistically simulate the effect of smoke distortion.
In a possible implementation manner, in step S242, based on at least one first sampling parameter, a time variable, and a first texture coordinate of a second strip-shaped patch corresponding to a preset number of strip-shaped patches, sampling the second map to obtain a noise offset may include the following implementation steps:
step S2421, sampling the first map based on the first texture coordinates to obtain a first mask strength;
step S2422, determining at least one first sampling parameter according to the linear interpolation function and the first channel value of the first mask intensity;
in step S2423, the second map is sampled based on the at least one first sampling parameter, the time variable and the first texture coordinate to obtain a noise offset.
Wherein the first sampling parameters include a tiling parameter, a translational speed parameter, and an intensity coefficient of texture coordinates, and the noise offset is offset in the lateral and longitudinal directions.
First, the MaskTex is sampled based on the first texture coordinates of the second strip-shaped patch to obtain the first mask strength Mask. Then, each strip-shaped patch is given different texture-coordinate tiling parameters and translation speed parameters TileSpeed (XY is used to represent the tiling parameters, ZW is used to represent the translation speed parameters) and a different strength coefficient Strength, distinguished by the G channel and the B channel of the MaskTex. Finally, the NoiseTex is sampled with the different sampling parameters together with the Time variable to obtain the noise offset.
Illustratively, the MaskTex may be sampled by a float4 Mask = tex2Dlod(_MaskTex, float4(v.uv.xy, 0, 0)) statement, resulting in the first Mask intensity Mask.
Then, through the float4 TileSpeed = lerp(lerp(float4(1.0, 2.0, 0.1, -0.2), float4(1.0, 1.0, 0.3, -0.15), Mask.g), float4(1.0, 2.0, 0.2, -0.3), Mask.b) statement, two interpolations between three float4 vectors are performed with the G channel and B channel of the first Mask intensity Mask using the linear interpolation function lerp, to obtain TileSpeed representing the tiling parameters and translation speed parameters of the texture coordinates. Likewise, through the float Strength = lerp(lerp(4.0, 1.5, Mask.g), -2.0, Mask.b) statement, two interpolations are performed with the G channel and B channel of the first Mask intensity Mask using the linear interpolation function lerp, to obtain the strength coefficient Strength.
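The nested lerp pattern can be sketched in Python as follows, with the float4 literals taken directly from the statements above. The helper and function names are illustrative only:

```python
def lerp(a, b, t):
    """HLSL-style lerp: a + t * (b - a), applied per component for vectors."""
    if isinstance(a, tuple):
        return tuple(lerp(x, y, t) for x, y in zip(a, b))
    return a + t * (b - a)

def tile_speed(mask_g, mask_b):
    # The mask's G channel blends the first two parameter sets, then the B
    # channel blends in the third; literals match the statement above.
    return lerp(lerp((1.0, 2.0, 0.1, -0.2), (1.0, 1.0, 0.3, -0.15), mask_g),
                (1.0, 2.0, 0.2, -0.3), mask_b)

def strength(mask_g, mask_b):
    return lerp(lerp(4.0, 1.5, mask_g), -2.0, mask_b)
```

Where the mask channels are 0, the first parameter set is selected; where a channel is 1, its corresponding alternative set fully takes over, so distinct strip-shaped patches painted with different G/B values receive different sampling parameters.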
Finally, the tiling of the noise texture on the XY axes and the scrolling speed on the ZW axes are controlled through the float4 Noise = tex2Dlod(_NoiseTex, float4(v.uv.xy * TileSpeed.xy + TileSpeed.zw * _Time.y, 0, 0)) and float4 NoiseTwist = float4(Noise.g * Strength, Noise.r * Strength, 0, 0) statements: the NoiseTex is sampled together with the time variable to obtain the value Noise at the uv coordinates, and Noise.g and Noise.r are multiplied by Strength to obtain the noise offset NoiseTwist.
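The UV scrolling and channel-scaling logic can be sketched as follows. The `sample_noise` callback is a stand-in for the actual tex2Dlod texture fetch and is purely illustrative:

```python
def noise_twist(uv, tile_speed, strength_val, time, sample_noise):
    """Sketch of the noise-offset computation above. sample_noise(u, v)
    stands in for tex2Dlod on _NoiseTex and returns an (r, g, b, a) tuple;
    it is a placeholder, not a real texture fetch."""
    u = uv[0] * tile_speed[0] + tile_speed[2] * time
    v = uv[1] * tile_speed[1] + tile_speed[3] * time
    r, g, b, a = sample_noise(u, v)
    # Only X and Y are displaced; Z and W stay 0 to keep depth intact.
    return (g * strength_val, r * strength_val, 0.0, 0.0)
```

With a constant stub sampler one can see that the G channel drives the lateral offset and the R channel the longitudinal offset, both scaled by Strength.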
Taking the screen coordinate system shown in fig. 7 as an example, the offset of the noise in the lateral direction may be understood as that the noise texture is distorted in the X-axis direction in fig. 7, and the offset of the noise in the longitudinal direction may be understood as that the noise texture is distorted in the Y-axis direction in fig. 7, that is, the noise texture in the embodiment of the present application can achieve the distortion effect in the X-axis and Y-axis directions at the same time, so as to more truly simulate the effect of smoke distortion.
In a possible embodiment, in step S243, determining the third strip-shaped patch according to the screen offset, the noise offset, and the second strip-shaped patch may include performing the following steps:
step S2431, superposing the screen offset and the noise offset to obtain a target offset;
Step S2432, obtaining a third strip-shaped patch according to the target offset and the second strip-shaped patch.
A total target offset is obtained by superposing the screen offset and the noise offset, and the total target offset is then output together with the second strip-shaped patch to obtain the third strip-shaped patch.
For example, the target offset may be obtained by a float4 Output = (ScreenTwist + NoiseTwist) * 0.01 statement, which superimposes the screen offset and the noise offset and multiplies the result by a custom strength-weakening factor of 0.01.
Then, through an o.vertex = pos + Output statement, the vertex coordinates pos of the original clipping space are superimposed with the target offset Output and used as the output, so as to obtain new vertex coordinates o.vertex, that is, the third strip-shaped patch is obtained, and the third strip-shaped patch can achieve the twisting effect of the vertex coordinates.
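The superposition of the two offsets and the final vertex update can be sketched as follows; the function name and the default weakening factor of 0.01 (from the Output statement) are illustrative:

```python
def final_vertex(pos, screen_twist_v, noise_twist_v, weaken=0.01):
    """Superimpose the two offsets, scale by the strength-weakening factor,
    and add the result to the clip-space position, as in the Output and
    o.vertex statements above."""
    target = tuple((s + n) * weaken for s, n in zip(screen_twist_v, noise_twist_v))
    return tuple(p + t for p, t in zip(pos, target))
```

Because both offsets keep Z and W at 0, only the X and Y components of the vertex are perturbed, leaving depth untouched.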
As shown in fig. 8, fig. 8 is a schematic view of a third strip-shaped patch according to an embodiment of the present application, and it can be understood that fig. 8 is a dynamic image with vertices in a twisted state.
In a possible implementation manner, in step S26, obtaining the special effect of smoke drift according to the third strip-shaped patch, the first map and the second map may include the following implementation steps:
In step S261, in the pixel shader, the second map is sampled based on at least one second sampling parameter, a time variable and a second texture coordinate of the third strip-shaped patch corresponding to the preset number of strip-shaped patches, so as to obtain a warped image.
The at least one second sampling parameter is determined according to the first mapping, and the distorted image is an image obtained by superposing a distortion effect on the third strip-shaped patch.
The determination of the second sampling parameter may be referred to in the description of the determination of the first sampling parameter, which is not repeated here. It will be appreciated that the second sampling parameter and the first sampling parameter are both sampling parameters determined from the first map, the only difference being that different sampling parameter values are obtained by varying the component values of the float4 type vector.
The second map is sampled based on the second sampling parameter, the time variable and the second texture coordinates of the third strip-shaped patch of each strip-shaped patch, so as to obtain a warped image, and the warped image can realistically simulate the twisting state of smoke.
In addition, a plurality of warped images with different degrees of distortion can be obtained by adjusting the parameter values of the second sampling parameter, so that the generated smoke flow special effect is more realistic.
Step S262, based on at least one third sampling parameter, time variable, second texture coordinates and distortion image corresponding to the preset number of strip-shaped patches, the second mapping is sampled, and a smoke distortion image is obtained.
Wherein the at least one third sampling parameter is determined from the first map, and the smoke distortion image is an image of a superimposed smoke effect on the distortion image.
The determination of the third sampling parameter may be referred to in the foregoing description of the determination of the first sampling parameter and the second sampling parameter, which is not repeated here.
The second map is sampled based on the third sampling parameter, the time variable, the second texture coordinates and the obtained warped image of each strip-shaped patch, so as to obtain a smoke distortion image, that is, a smoke effect is superimposed on the warped image.
In addition, a plurality of smoke distortion images can be obtained by adjusting the parameter value of the third sampling parameter, so that the generated smoke flowing special effect is more realistic.
Step S263, determining the smoke drift special effect based on the smoke distortion image and the transparency image.
Wherein the transparency image is used to determine the transparency of the smoke distortion image.
By determining the transparency of the smoke distortion image, the smoke drifting special effect is determined based on the smoke distortion image and the transparency image, so that the display effect of the generated smoke drifting special effect on the screen is a combination of "vertex movement" and "texture flow + disturbance", making the generated smoke drifting special effect more realistic.
In a possible implementation manner, in step S261, based on at least one second sampling parameter, a time variable, and a second texture coordinate of a third strip-shaped patch corresponding to the preset number of strip-shaped patches, sampling the second map to obtain a warped image may include the following implementation steps:
step S2611, sampling the first map based on the second texture coordinates to obtain a second mask strength;
step S2612, determining at least one second sampling parameter according to the linear interpolation function and the second channel value of the second mask intensity;
in step S2613, the second map is sampled based on at least one second sampling parameter, a time variable and a second texture coordinate to obtain a warped image.
Wherein the second sampling parameter comprises a tiling parameter and a translational speed parameter of texture coordinates.
First, the MaskTex is sampled based on the second texture coordinates i.uv of the third strip-shaped patch to obtain the second Mask intensity Mask; then at least one second sampling parameter TileSpeed is determined according to the linear interpolation function lerp and the second channel value of the second Mask intensity Mask; and finally the second map NoiseTex is sampled based on the second sampling parameter TileSpeed, the Time variable and the second texture coordinates i.uv to obtain the warped image Twist.
Illustratively, the MaskTex may be sampled by a float4 Mask = tex2D(_MaskTex, i.uv) statement, resulting in the second Mask intensity Mask.
Through the float4 TileSpeed1 = lerp(lerp(float4(2.0, 1.0, 0.05, -0.2), float4(1.0, 1.0, 0.1, -0.02), Mask.g), float4(1.0, 2.0, 0.1, -0.1), Mask.b) statement, two interpolations between three float4 vectors are performed by the linear interpolation function lerp with the G channel and B channel of the second Mask intensity Mask, resulting in a second sampling parameter TileSpeed1 comprising the tiling parameters and translation speed parameters of the texture coordinates.
The second map NoiseTex is then sampled based on the second sampling parameter TileSpeed1, the Time variable and the second texture coordinates i.uv through a float4 Twist1 = tex2D(_NoiseTex, i.uv * TileSpeed1.xy + TileSpeed1.zw * _Time.y) statement, so as to obtain the warped image Twist1.
It will be appreciated that different warped images may be acquired by varying the second sampling parameter. For example, another warped image Twist2 is obtained by the float4 TileSpeed2 = lerp(lerp(float4(1.0, 3.0, 0.05, -0.1), float4(1.0, 2.0, 0.1, -0.05), Mask.g), float4(1.0, 1.0, 0.1, -0.2), Mask.b) and float4 Twist2 = tex2D(_NoiseTex, i.uv * TileSpeed2.xy + TileSpeed2.zw * _Time.y) statements.
As shown in fig. 9, fig. 9 is a schematic diagram of a distorted image according to an embodiment of the present application, and fig. 9 (a) may be a distorted image Twist1 and fig. 9 (b) may be a distorted image Twist2.
In a possible embodiment, in step S262, based on at least one third sampling parameter, a time variable, a second texture coordinate and a warped image corresponding to a preset number of strip-shaped patches, sampling the second map to obtain a smoke warped image may include the following steps:
s2621, determining at least one third sampling parameter according to the linear interpolation function and the second channel value;
s2622, sampling the second map based on the at least one third sampling parameter, the time variable, the second texture coordinates, and the warped image, to obtain a smoke warped image.
And determining at least one third sampling parameter TileSpeed according to the linear interpolation function lerp and a second channel value of the second Mask intensity Mask, and sampling the second mapping NoiseTex based on the at least one third sampling parameter TileSpeed, the Time variable Time, the second texture coordinate i.uv and the warped image Twist to obtain a Smoke warped image Smoke.
Illustratively, a third sampling parameter TileSpeed3, which comprises the tiling parameters and translation speed parameters of the texture coordinates, may be obtained by interpolating twice between three float4 vectors with the linear interpolation function lerp, using a float4 TileSpeed3 = lerp(lerp(float4(1.0, 3.0, 0.05, -0.1), float4(1.0, 4.0, 0.1, -0.0), Mask.g), float4(1.0, 3.0, 0.05, -0.0), Mask.b) statement.
The second map NoiseTex is then sampled based on the third sampling parameter TileSpeed3, the Time variable, the second texture coordinates i.uv and the warped image Twist1 through a float4 Smoke1 = tex2D(_NoiseTex, i.uv * TileSpeed3.xy + TileSpeed3.zw * _Time.y + Twist1.g * 0.3) statement, so as to obtain the smoke distortion image Smoke1.
It will be appreciated that a different smoke distortion image may be acquired by varying the third sampling parameter. For example, another smoke distortion image Smoke2, which is based on the previously generated Twist2, is obtained by the float4 TileSpeed4 = lerp(lerp(float4(3.0, 8.0, 0.15, -0.5), float4(2.0, 6.0, 0.1, -0.4), Mask.g), float4(4.0, 7.0, 0.2, -0.3), Mask.b) and float4 Smoke2 = tex2D(_NoiseTex, i.uv * TileSpeed4.xy + TileSpeed4.zw * _Time.y + Twist2.g * 0.3) statements.
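The Smoke1/Smoke2 sampling pattern can be sketched as follows. Note that in HLSL adding the scalar Twist.g * 0.3 to a float2 UV displaces both components; the `sample_tex` callback is an illustrative stand-in for the tex2D fetch:

```python
def smoke_sample(uv, tile_speed, time, twist_g, sample_tex):
    """Sketch of the Smoke1/Smoke2 sampling above: tiled, time-scrolled UVs
    are further displaced by a warp layer's G channel scaled by 0.3.
    sample_tex(u, v) stands in for tex2D on _NoiseTex."""
    u = uv[0] * tile_speed[0] + tile_speed[2] * time + twist_g * 0.3
    v = uv[1] * tile_speed[1] + tile_speed[3] * time + twist_g * 0.3
    return sample_tex(u, v)
```

Feeding the earlier warp layer back into the UVs is what makes the smoke layer itself appear to churn rather than merely scroll.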
As shown in fig. 10, fig. 10 is a schematic diagram of a Smoke distortion image according to an embodiment of the present application, and fig. 10 (a) may be a Smoke distortion image Smoke1 and fig. 10 (b) may be a Smoke distortion image Smoke2.
In one possible embodiment, the smoke distortion image includes a first smoke distortion image and a second smoke distortion image, and determining the smoke drift special effect based on the smoke distortion image and the transparency image in step S263 may include performing the steps of:
Step S2631, superposing the first smoke distortion image and the second smoke distortion image to obtain a target smoke distortion image;
step S2632, determining a transparency image according to the target smoke distortion image and the second channel value;
step S2633, determining the smoke drift special effect based on the transparency image of the target smoke distortion image.
Two smoke distortion images, namely a first smoke distortion image Smoke1 and a second smoke distortion image Smoke2, are generated; the first smoke distortion image Smoke1 and the second smoke distortion image Smoke2 are superimposed to obtain a target smoke distortion image; the transparency Opacity of the smoke distortion image is determined according to the target smoke distortion image and the second channel value of the second Mask intensity Mask; and finally the smoke drifting special effect is obtained based on the transparency image Opacity of the target smoke distortion image.
Illustratively, the first smoke distortion image Smoke1 and the second smoke distortion image Smoke2 may be superimposed and multiplied by the R channel of the second Mask intensity Mask through a float Opacity = (Smoke1.b + Smoke2.b) * Mask.r statement, so as to obtain the transparency Opacity.
The smoke drifting special effect is then output through a float4 OutputColor = float4(1, 1, 1, Opacity) statement. It can be seen that here the red, green and blue components of OutputColor are all set to 1, that is, the output color is white, while the transparency is determined by the variable Opacity.
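The final color computation can be sketched as follows; the function name is illustrative:

```python
def output_color(smoke1_b, smoke2_b, mask_r):
    """Final output as in the Opacity and OutputColor statements above:
    white RGB, with alpha taken from the two smoke layers' blue channels
    and masked by the mask texture's R channel."""
    opacity = (smoke1_b + smoke2_b) * mask_r
    return (1.0, 1.0, 1.0, opacity)
```

Because the R channel of the mask multiplies the summed smoke layers, regions where the mask is black are fully transparent regardless of the smoke layers' values.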
As shown in fig. 11 and 12, fig. 11 is a schematic view of a transparency image according to an embodiment of the present application, and fig. 12 is a schematic view of a smoke drift special effect according to an embodiment of the present application. By superimposing the first Smoke distorted image Smoke1 and the second Smoke distorted image Smoke2 in fig. 10, and setting the color values and adjusting the transparency through the transparency image shown in fig. 11, the Smoke drift special effect shown in fig. 12 is finally determined, and it can be understood that fig. 12 is a dynamic Smoke fluid special effect.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
In this embodiment, a smoke generation device is further provided, and the smoke generation device is used to implement the foregoing embodiments and preferred embodiments, which will not be described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 13 is a block diagram of a smoke generation device according to an embodiment of the present application. As shown in fig. 13, taking a smoke generation device 1300 as an example, the smoke generation device 1300 includes an acquisition module 1301 configured to acquire a first strip-shaped patch, a material ball, a first map and a second map, where the first strip-shaped patch is a patch for simulating a smoke shape, the first map is a mask map, and the second map is a noise map; a rendering module 1302 configured to render the first strip-shaped patch according to the material ball to obtain a second strip-shaped patch, where the second strip-shaped patch is a patch for simulating a smoke shape and a smoke texture; a first determining module 1303 configured to obtain a third strip-shaped patch according to the second strip-shaped patch, the first map, and the second map, where the vertex coordinates of the third strip-shaped patch change based on time; and a second determining module 1304 configured to obtain a smoke drift special effect according to the third strip-shaped patch, the first map and the second map, where the vertex coordinates and the texture coordinates of the smoke drift special effect change based on time.
Optionally, the first strip-shaped patch is composed of a preset number of strip-shaped patches, and the first determining module 1303 is further configured to: in the vertex shader, perform offset processing on the first vertex coordinates of the second strip-shaped patch by using a trigonometric function and a time variable to obtain a screen offset; sample the second mapping based on at least one first sampling parameter, the time variable and the first texture coordinates of the second strip-shaped patch respectively corresponding to the preset number of strip-shaped patches, to obtain a noise offset, wherein the at least one first sampling parameter is determined according to the first mapping; and determine a third strip-shaped patch according to the screen offset, the noise offset and the second strip-shaped patch.
Optionally, the first determining module 1303 is further configured to: convert the second vertex coordinates of the second strip-shaped patch from the object space to the clipping space to obtain third vertex coordinates; convert the third vertex coordinates from the clipping space to the screen space to obtain first vertex coordinates; and perform offset processing on the first vertex coordinates by using a sine trigonometric function and a time variable to obtain a screen offset, wherein the screen offset is an offset in the lateral and longitudinal directions.
Optionally, the first determining module 1303 is further configured to: sampling the first mapping based on the first texture coordinates to obtain a first mask strength; determining at least one first sampling parameter according to the linear interpolation function and a first channel value of the first mask intensity, wherein the first sampling parameter comprises a tiling parameter, a translation speed parameter and an intensity coefficient of texture coordinates; and sampling the second mapping based on at least one first sampling parameter, a time variable and the first texture coordinate to obtain a noise offset, wherein the noise offset is offset in the transverse direction and the longitudinal direction.
Optionally, the first determining module 1303 is further configured to: superimpose the screen offset and the noise offset to obtain a target offset; and obtain a third strip-shaped patch according to the target offset and the second strip-shaped patch.
Optionally, the first strip-shaped patch is composed of a preset number of strip-shaped patches, and the second determining module 1304 is further configured to: in the pixel shader, sample the second mapping based on at least one second sampling parameter, the time variable and the second texture coordinates of the third strip-shaped patch respectively corresponding to the preset number of strip-shaped patches, to obtain a warped image, wherein the at least one second sampling parameter is determined according to the first mapping, and the warped image is an image with a distortion effect superimposed on the third strip-shaped patch; sample the second mapping based on at least one third sampling parameter, the time variable, the second texture coordinates and the warped image respectively corresponding to the preset number of strip-shaped patches, to obtain a smoke distortion image, wherein the at least one third sampling parameter is determined according to the first mapping, and the smoke distortion image is an image with a smoke effect superimposed on the warped image; and determine the smoke drift special effect based on the smoke distortion image and the transparency image, wherein the transparency image is used to determine the transparency of the smoke distortion image.
Optionally, the second determining module 1304 is further configured to: sampling the first mapping based on the second texture coordinates to obtain second mask strength; determining at least one second sampling parameter according to the linear interpolation function and a second channel value of the second mask intensity, wherein the second sampling parameter comprises a tiling parameter and a translational speed parameter of texture coordinates; and sampling the second mapping based on at least one second sampling parameter, a time variable and a second texture coordinate to obtain a distorted image.
Optionally, the second determining module 1304 is further configured to: determining at least one third sampling parameter according to the linear interpolation function and the second channel value; and sampling the second map based on at least one third sampling parameter, the time variable, the second texture coordinates and the warped image to obtain a smoke warped image.
Optionally, the smoke distortion image comprises a first smoke distortion image and a second smoke distortion image, the second determination module 1304 is further configured to: superposing the first smoke distortion image and the second smoke distortion image to obtain a target smoke distortion image; determining a transparency image according to the target smoke distortion image and the second channel value; the smoke drift special effect is determined based on the transparency image of the target smoke distorted image.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may include, but is not limited to: a usb disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing a computer program.
Alternatively, in this embodiment, the above-mentioned computer-readable storage medium may be located in any one of the computer terminals in the computer terminal group in the computer network, or in any one of the mobile terminals in the mobile terminal group.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for performing the steps of:
Step S20, a first strip-shaped patch, a material ball, a first mapping and a second mapping are acquired, wherein the first strip-shaped patch is a patch for simulating the shape of smoke, the first mapping is a mask mapping, and the second mapping is a noise mapping;
step S22, rendering the first strip-shaped patch according to the material ball to obtain a second strip-shaped patch, wherein the second strip-shaped patch is a patch for simulating the shape and texture of smoke;
step S24, obtaining a third strip-shaped patch according to the second strip-shaped patch, the first mapping and the second mapping, wherein the vertex coordinates of the third strip-shaped patch change based on time;
and S26, obtaining the smoke drifting special effect according to the third strip-shaped patch, the first mapping and the second mapping, wherein the vertex coordinates and the texture coordinates of the smoke drifting special effect change based on time.
Optionally, the first strip-shaped patch consists of a preset number of strip-shaped patches, the above computer readable storage medium being further arranged to store program code for performing the steps of: in the vertex shader, performing offset processing on the first vertex coordinates of the second strip-shaped patch by using a trigonometric function and a time variable to obtain a screen offset; sampling the second mapping based on at least one first sampling parameter, the time variable and the first texture coordinates of the second strip-shaped patch respectively corresponding to the preset number of strip-shaped patches to obtain a noise offset, wherein the at least one first sampling parameter is determined according to the first mapping; and determining a third strip-shaped patch according to the screen offset, the noise offset and the second strip-shaped patch.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: converting the second vertex coordinates of the second strip-shaped patch from the object space to the clipping space to obtain third vertex coordinates; converting the third vertex coordinates from the clipping space to the screen space to obtain first vertex coordinates; and performing offset processing on the first vertex coordinates by using a sine trigonometric function and a time variable to obtain a screen offset, wherein the screen offset is an offset in the lateral and longitudinal directions.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: sampling the first mapping based on the first texture coordinates to obtain a first mask strength; determining at least one first sampling parameter according to the linear interpolation function and a first channel value of the first mask intensity, wherein the first sampling parameter comprises a tiling parameter, a translation speed parameter and an intensity coefficient of texture coordinates; and sampling the second mapping based on at least one first sampling parameter, a time variable and the first texture coordinate to obtain a noise offset, wherein the noise offset is offset in the transverse direction and the longitudinal direction.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: superimposing the screen offset and the noise offset to obtain a target offset; and obtaining a third strip-shaped patch according to the target offset and the second strip-shaped patch.
Optionally, the first strip-shaped patch consists of a preset number of strip-shaped patches, and the above computer-readable storage medium is further arranged to store program code for performing the following steps: in the pixel shader, sampling the second map based on at least one second sampling parameter, the time variable and second texture coordinates of the third strip-shaped patch corresponding respectively to the preset number of strip-shaped patches, to obtain a distorted image, wherein the at least one second sampling parameter is determined according to the first map, and the distorted image is an image in which a distortion effect is superimposed on the third strip-shaped patch; sampling the second map based on at least one third sampling parameter, the time variable, the second texture coordinates and the distorted image corresponding respectively to the preset number of strip-shaped patches, to obtain a smoke distortion image, wherein the at least one third sampling parameter is determined according to the first map, and the smoke distortion image is an image in which a smoke effect is superimposed on the distorted image; and determining the smoke drift special effect based on the smoke distortion image and a transparency image, wherein the transparency image is used to determine the transparency of the smoke distortion image.
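The two pixel-shader passes above can be sketched as a double sampling of the same noise map: the first lookup produces the distortion amount, and the second lookup is perturbed by it. This is a hedged illustration; the tiling, scroll-speed and strength constants are assumptions, not values fixed by this disclosure.

```python
def smoke_pixel(uv, time, mask_g, noise_tex):
    """Two-pass sampling sketch. `noise_tex(u, v)` stands in for the
    noise-map fetch; `mask_g` is the mask channel read at this pixel."""
    # pass 1: distorted image - scrolled, tiled noise lookup
    warp = noise_tex(uv[0] * 2.0 + 0.2 * time, uv[1] * 2.0 + 0.1 * time)
    # pass 2: smoke distortion image - perturb the UVs by the warp value,
    # with the mask channel modulating the warp strength
    k = 0.05 * mask_g
    return noise_tex(uv[0] + k * warp + 0.05 * time, uv[1] + k * warp)
```

Feeding the first sample back into the texture coordinates of the second is what produces the curling, fluid-like motion without any ray marching or simulation.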
Optionally, the above computer-readable storage medium is further configured to store program code for performing the following steps: sampling the first map based on the second texture coordinates to obtain a second mask intensity; determining the at least one second sampling parameter according to a linear interpolation function and a second channel value of the second mask intensity, wherein the second sampling parameters comprise a tiling parameter of the texture coordinates and a translation speed parameter; and sampling the second map based on the at least one second sampling parameter, the time variable and the second texture coordinates to obtain the distorted image.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the following steps: determining the at least one third sampling parameter according to the linear interpolation function and the second channel value; and sampling the second map based on the at least one third sampling parameter, the time variable, the second texture coordinates and the distorted image to obtain the smoke distortion image.
Optionally, the smoke distortion image comprises a first smoke distortion image and a second smoke distortion image, and the computer-readable storage medium is further arranged to store program code for performing the following steps: superposing the first smoke distortion image and the second smoke distortion image to obtain a target smoke distortion image; determining the transparency image according to the target smoke distortion image and the second channel value; and determining the smoke drift special effect based on the target smoke distortion image and the transparency image.
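One way the superposition and transparency steps could look per pixel. The additive blend and the clamp are assumptions for illustration; the disclosure does not fix the blend operator.

```python
def composite_smoke(smoke_a, smoke_b, mask_g):
    """Superpose two smoke distortion images, then derive a transparency
    (alpha) value from the result and the mask's second channel."""
    target = min(1.0, smoke_a + smoke_b)  # additive superposition, clamped
    alpha = target * mask_g               # mask channel shapes the alpha falloff
    return target, alpha
```

Layering two independently scrolled copies of the smoke pattern breaks up the visible tiling, and multiplying by the mask channel fades the alpha toward the edges of the strip.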
In the computer-readable storage medium of this embodiment, a solution for generating smoke is provided: a first strip-shaped patch, a material ball, a first map and a second map are obtained, where the first strip-shaped patch is a patch for simulating the shape of smoke, the first map is a mask map, and the second map is a noise map; the first strip-shaped patch is rendered according to the material ball to obtain a second strip-shaped patch, where the second strip-shaped patch is used for simulating the shape and texture of smoke; a third strip-shaped patch is obtained according to the second strip-shaped patch, the first map and the second map, where the vertex coordinates of the third strip-shaped patch change based on time; and the smoke drift special effect is obtained according to the third strip-shaped patch, the first map and the second map, where the vertex coordinates and texture coordinates of the smoke drift special effect change based on time. This achieves the purpose of generating a dynamic smoke fluid image through simple modeling combined with shader calculation, so that the technical effect of rendering a smoke fluid special effect at low performance cost, without using a ray-marching technique, can be achieved.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software combined with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a computer-readable storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present application.
In an exemplary embodiment of the present application, a computer-readable storage medium stores a program product capable of implementing the method described above in this embodiment. In some possible implementations, the various aspects of the embodiments of the application may also be implemented in the form of a program product comprising program code which, when the program product is run on a terminal device, causes the terminal device to carry out the steps according to the various exemplary embodiments of the application described in the "exemplary methods" section of this embodiment.
A program product for implementing the above method according to an embodiment of the present application may employ a portable compact disc read-only memory (CD-ROM) including program code, and may be run on a terminal device such as a personal computer. However, the program product of the embodiments of the present application is not limited thereto; in the embodiments of the present application, the computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Any combination of one or more computer-readable media may be employed by the program product described above. The computer-readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the program code embodied on the computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the foregoing.
An embodiment of the application also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic device may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the above processor may be configured to execute the following steps by means of a computer program:
step S20, acquiring a first strip-shaped patch, a material ball, a first map and a second map, wherein the first strip-shaped patch is a patch for simulating the shape of smoke, the first map is a mask map, and the second map is a noise map;
step S22, rendering the first strip-shaped patch according to the material ball to obtain a second strip-shaped patch, wherein the second strip-shaped patch is used for simulating the shape and texture of smoke;
step S24, obtaining a third strip-shaped patch according to the second strip-shaped patch, the first map and the second map, wherein the vertex coordinates of the third strip-shaped patch change based on time;
and step S26, obtaining the smoke drift special effect according to the third strip-shaped patch, the first map and the second map, wherein the vertex coordinates and texture coordinates of the smoke drift special effect change based on time.
Optionally, the first strip-shaped patch consists of a preset number of strip-shaped patches, and the processor may be further configured to execute the following steps by means of a computer program: in the vertex shader, performing offset processing on first vertex coordinates of the second strip-shaped patch by using a trigonometric function and a time variable to obtain a screen offset; sampling the second map based on at least one first sampling parameter, the time variable and first texture coordinates of the second strip-shaped patch corresponding respectively to the preset number of strip-shaped patches, to obtain a noise offset, wherein the at least one first sampling parameter is determined according to the first map; and determining a third strip-shaped patch according to the screen offset, the noise offset and the second strip-shaped patch.
Optionally, the above processor may be further configured to execute the following steps by means of a computer program: converting second vertex coordinates of the second strip-shaped patch from object space to clip space to obtain third vertex coordinates; converting the third vertex coordinates from clip space to screen space to obtain the first vertex coordinates; and performing offset processing on the first vertex coordinates by using a sine function and the time variable to obtain the screen offset, wherein the screen offset comprises offsets in the transverse and longitudinal directions.
Optionally, the above processor may be further configured to execute the following steps by means of a computer program: sampling the first map based on the first texture coordinates to obtain a first mask intensity; determining the at least one first sampling parameter according to a linear interpolation function and a first channel value of the first mask intensity, wherein the first sampling parameters comprise a tiling parameter of the texture coordinates, a translation speed parameter and an intensity coefficient; and sampling the second map based on the at least one first sampling parameter, the time variable and the first texture coordinates to obtain the noise offset, wherein the noise offset comprises offsets in the transverse and longitudinal directions.
Optionally, the above processor may be further configured to execute the following steps by means of a computer program: superposing the screen offset and the noise offset to obtain a target offset; and obtaining the third strip-shaped patch according to the target offset and the second strip-shaped patch.
Optionally, the first strip-shaped patch consists of a preset number of strip-shaped patches, and the processor may be further configured to execute the following steps by means of a computer program: in the pixel shader, sampling the second map based on at least one second sampling parameter, the time variable and second texture coordinates of the third strip-shaped patch corresponding respectively to the preset number of strip-shaped patches, to obtain a distorted image, wherein the at least one second sampling parameter is determined according to the first map, and the distorted image is an image in which a distortion effect is superimposed on the third strip-shaped patch; sampling the second map based on at least one third sampling parameter, the time variable, the second texture coordinates and the distorted image corresponding respectively to the preset number of strip-shaped patches, to obtain a smoke distortion image, wherein the at least one third sampling parameter is determined according to the first map, and the smoke distortion image is an image in which a smoke effect is superimposed on the distorted image; and determining the smoke drift special effect based on the smoke distortion image and a transparency image, wherein the transparency image is used to determine the transparency of the smoke distortion image.
Optionally, the above processor may be further configured to execute the following steps by means of a computer program: sampling the first map based on the second texture coordinates to obtain a second mask intensity; determining the at least one second sampling parameter according to a linear interpolation function and a second channel value of the second mask intensity, wherein the second sampling parameters comprise a tiling parameter of the texture coordinates and a translation speed parameter; and sampling the second map based on the at least one second sampling parameter, the time variable and the second texture coordinates to obtain the distorted image.
Optionally, the above processor may be further configured to execute the following steps by means of a computer program: determining the at least one third sampling parameter according to the linear interpolation function and the second channel value; and sampling the second map based on the at least one third sampling parameter, the time variable, the second texture coordinates and the distorted image to obtain the smoke distortion image.
Optionally, the smoke distortion image comprises a first smoke distortion image and a second smoke distortion image, and the processor may be further arranged to perform the following steps by means of a computer program: superposing the first smoke distortion image and the second smoke distortion image to obtain a target smoke distortion image; determining the transparency image according to the target smoke distortion image and the second channel value; and determining the smoke drift special effect based on the target smoke distortion image and the transparency image.
In the electronic device of this embodiment, a technical solution for generating smoke is provided: a first strip-shaped patch, a material ball, a first map and a second map are obtained, where the first strip-shaped patch is a patch for simulating the shape of smoke, the first map is a mask map, and the second map is a noise map; the first strip-shaped patch is rendered according to the material ball to obtain a second strip-shaped patch, where the second strip-shaped patch is used for simulating the shape and texture of smoke; a third strip-shaped patch is obtained according to the second strip-shaped patch, the first map and the second map, where the vertex coordinates of the third strip-shaped patch change based on time; and the smoke drift special effect is obtained according to the third strip-shaped patch, the first map and the second map, where the vertex coordinates and texture coordinates of the smoke drift special effect change based on time. This achieves the purpose of generating a dynamic smoke fluid image through simple modeling combined with shader calculation, so that the technical effect of rendering a smoke fluid special effect at low performance cost, without using a ray-marching technique, can be achieved.
Fig. 14 is a schematic view of an electronic device according to an embodiment of the application. As shown in fig. 14, the electronic device 1400 is only one example and should not be construed as limiting the functionality and scope of use of the embodiments of the application.
As shown in fig. 14, the electronic device 1400 is embodied in the form of a general-purpose computing device. Components of the electronic device 1400 may include, but are not limited to: at least one processor 1410, at least one memory 1420, a bus 1430 connecting the different system components (including the memory 1420 and the processor 1410), and a display 1440.
Wherein the memory 1420 stores program code that can be executed by the processor 1410, such that the processor 1410 performs the steps according to various exemplary embodiments of the present application described in the above method section of the embodiment of the present application.
Memory 1420 may include readable media in the form of volatile memory units, such as Random Access Memory (RAM) 14201 and/or cache memory 14202, and may further include Read Only Memory (ROM) 14203, and may also include nonvolatile memory, such as one or more magnetic storage devices, flash memory, or other nonvolatile solid state memory.
In some examples, memory 1420 can also include a program/utility 14204 having a set (at least one) of program modules 14205, such program modules 14205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Memory 1420 may further include memory located remotely from processor 1410, which may be connected to electronic device 1400 through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Bus 1430 may be a bus representing one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processor 1410, or a local bus using any of a variety of bus architectures.
The display 1440 may be, for example, a touch screen type liquid crystal display (Liquid Crystal Display, LCD) that may enable a user to interact with a user interface of the electronic device 1400.
Optionally, the electronic device 1400 may also communicate with one or more external devices 1500 (e.g., keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1400, and/or with any device (e.g., router, modem, etc.) that enables the electronic device 1400 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1450. Also, the electronic device 1400 may communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the internet, via a network adapter 1460. As shown in fig. 14, the network adapter 1460 communicates with other modules of the electronic device 1400 via the bus 1430. It should be appreciated that although not shown in fig. 14, other hardware and/or software modules may be used in connection with the electronic device 1400, which may include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, redundant array of independent disks (RAID) systems, tape drives, data backup storage systems, and the like.
The electronic device 1400 may further include: a keyboard, a cursor control device (e.g., a mouse), an input/output interface (I/O interface), a network interface, a power supply, and/or a camera.
It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 14 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, the electronic device 1400 may also include more or fewer components than shown in fig. 14, or have a different configuration from that shown in fig. 14. The memory 1420 may be used to store computer programs and corresponding data, such as the computer program and data corresponding to the smoke generation method in the embodiments of the present application. The processor 1410 executes various functional applications and data processing by running the computer program stored in the memory 1420, i.e. implements the smoke generation method described above.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary; for example, the division of the units may be a division by logical function, and there may be another division in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be through some interfaces, units or modules, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing program code.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications are also intended to fall within the scope of protection of the present application.

Claims (12)

1. A smoke generation method, the method comprising:
acquiring a first strip-shaped patch, a material ball, a first map and a second map, wherein the first strip-shaped patch is a patch for simulating the shape of smoke, the first map is a mask map, and the second map is a noise map;
rendering the first strip-shaped patch according to the material ball to obtain a second strip-shaped patch, wherein the second strip-shaped patch is a patch for simulating the shape and texture of smoke;
obtaining a third strip-shaped patch according to the second strip-shaped patch, the first map and the second map, wherein the vertex coordinates of the third strip-shaped patch change based on time;
and obtaining a smoke drift special effect according to the third strip-shaped patch, the first map and the second map, wherein the vertex coordinates and texture coordinates of the smoke drift special effect change based on time.
2. The method of claim 1, wherein the first strip-shaped patch consists of a preset number of strip-shaped patches, and wherein obtaining the third strip-shaped patch according to the second strip-shaped patch, the first map and the second map comprises:
in a vertex shader, performing offset processing on first vertex coordinates of the second strip-shaped patch by using a trigonometric function and a time variable to obtain a screen offset;
sampling the second map based on at least one first sampling parameter, the time variable and first texture coordinates of the second strip-shaped patch corresponding respectively to the preset number of strip-shaped patches, to obtain a noise offset, wherein the at least one first sampling parameter is determined according to the first map;
and determining the third strip-shaped patch according to the screen offset, the noise offset and the second strip-shaped patch.
3. The method of claim 2, wherein performing the offset processing on the first vertex coordinates of the second strip-shaped patch by using the trigonometric function and the time variable to obtain the screen offset comprises:
converting second vertex coordinates of the second strip-shaped patch from object space to clip space to obtain third vertex coordinates;
converting the third vertex coordinates from clip space to screen space to obtain the first vertex coordinates;
and performing offset processing on the first vertex coordinates by using a sine function and the time variable to obtain the screen offset, wherein the screen offset comprises offsets in the transverse and longitudinal directions.
4. The method of claim 2, wherein sampling the second map based on the at least one first sampling parameter, the time variable and the first texture coordinates of the second strip-shaped patch corresponding respectively to the preset number of strip-shaped patches to obtain the noise offset comprises:
sampling the first map based on the first texture coordinates to obtain a first mask intensity;
determining the at least one first sampling parameter according to a linear interpolation function and a first channel value of the first mask intensity, wherein the first sampling parameters comprise a tiling parameter of the texture coordinates, a translation speed parameter and an intensity coefficient;
and sampling the second map based on the at least one first sampling parameter, the time variable and the first texture coordinates to obtain the noise offset, wherein the noise offset comprises offsets in the transverse and longitudinal directions.
5. The method of claim 2, wherein determining the third strip-shaped patch according to the screen offset, the noise offset and the second strip-shaped patch comprises:
superposing the screen offset and the noise offset to obtain a target offset;
and obtaining the third strip-shaped patch according to the target offset and the second strip-shaped patch.
6. The method of claim 1, wherein the first strip-shaped patch consists of a preset number of strip-shaped patches, and wherein obtaining the smoke drift special effect according to the third strip-shaped patch, the first map and the second map comprises:
in a pixel shader, sampling the second map based on at least one second sampling parameter, a time variable and second texture coordinates of the third strip-shaped patch corresponding respectively to the preset number of strip-shaped patches, to obtain a distorted image, wherein the at least one second sampling parameter is determined according to the first map, and the distorted image is an image in which a distortion effect is superimposed on the third strip-shaped patch;
sampling the second map based on at least one third sampling parameter, the time variable, the second texture coordinates and the distorted image corresponding respectively to the preset number of strip-shaped patches, to obtain a smoke distortion image, wherein the at least one third sampling parameter is determined according to the first map, and the smoke distortion image is an image in which a smoke effect is superimposed on the distorted image;
and determining the smoke drift special effect based on the smoke distortion image and a transparency image, wherein the transparency image is used to determine the transparency of the smoke distortion image.
7. The method of claim 6, wherein sampling the second map based on the at least one second sampling parameter, the time variable and the second texture coordinates of the third strip-shaped patch corresponding respectively to the preset number of strip-shaped patches to obtain the distorted image comprises:
sampling the first map based on the second texture coordinates to obtain a second mask intensity;
determining the at least one second sampling parameter according to a linear interpolation function and a second channel value of the second mask intensity, wherein the second sampling parameters comprise a tiling parameter of the texture coordinates and a translation speed parameter;
and sampling the second map based on the at least one second sampling parameter, the time variable and the second texture coordinates to obtain the distorted image.
8. The method of claim 7, wherein sampling the second map based on the at least one third sampling parameter, the time variable, the second texture coordinates and the distorted image corresponding respectively to the preset number of strip-shaped patches to obtain the smoke distortion image comprises:
determining the at least one third sampling parameter according to the linear interpolation function and the second channel value;
and sampling the second map based on the at least one third sampling parameter, the time variable, the second texture coordinates and the distorted image to obtain the smoke distortion image.
9. The method of claim 8, wherein the smoke distortion image comprises a first smoke distortion image and a second smoke distortion image, and wherein determining the smoke drift special effect based on the smoke distortion image and the transparency image comprises:
superposing the first smoke distortion image and the second smoke distortion image to obtain a target smoke distortion image;
determining the transparency image according to the target smoke distortion image and the second channel value;
and determining the smoke drift special effect based on the target smoke distortion image and the transparency image.
10. A smoke generation device, comprising:
an acquisition module configured to acquire a first strip-shaped patch, a material, a first map, and a second map, wherein the first strip-shaped patch is a patch simulating the shape of smoke, the first map is a mask map, and the second map is a noise map;
a rendering module configured to render the first strip-shaped patch according to the material to obtain a second strip-shaped patch, wherein the second strip-shaped patch simulates the shape and texture of smoke;
a first determining module configured to obtain a third strip-shaped patch according to the second strip-shaped patch, the first map, and the second map, wherein the vertex coordinates of the third strip-shaped patch change over time; and
a second determining module configured to obtain the smoke drift effect according to the third strip-shaped patch, the first map, and the second map, wherein the vertex coordinates and the texture coordinates of the smoke drift effect change over time.
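The time-based vertex displacement performed by the first determining module can be sketched as below (the sine-based sway and the way the mask and noise samples weight the offset are illustrative assumptions, not the claimed formula):

```python
import math

def animate_vertices(vertices, time, mask, noise):
    """Offset each vertex of a strip-shaped patch over time, weighted by
    per-vertex mask and noise samples, so the patch appears to drift."""
    out = []
    for (x, y, z), m, n in zip(vertices, mask, noise):
        out.append((x + m * n * math.sin(time),  # lateral sway
                    y + m * n * 0.05 * time,     # slow upward rise
                    z))
    return out
```

Masked vertices (`m == 0`) stay fixed, which is one way a mask map can anchor the base of the smoke column while the tip drifts.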
11. A computer-readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the smoke generation method of any one of claims 1 to 9 when run on a computer or processor.
12. An electronic device comprising a memory and a processor, wherein the memory has a computer program stored therein, and the processor is arranged to run the computer program to perform the smoke generation method of any one of claims 1 to 9.
CN202310935535.7A 2023-07-27 2023-07-27 Smoke generation method, device, computer-readable storage medium, and electronic device Pending CN117085318A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310935535.7A CN117085318A (en) 2023-07-27 2023-07-27 Smoke generation method, device, computer-readable storage medium, and electronic device

Publications (1)

Publication Number Publication Date
CN117085318A true CN117085318A (en) 2023-11-21

Family

ID=88782761

Country Status (1)

Country Link
CN (1) CN117085318A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination