CN116385609A - Method and device for processing special effect animation, computer equipment and storage medium - Google Patents
- Publication number
- Publication number: CN116385609A; Application number: CN202310004908.9A
- Authority
- CN
- China
- Prior art keywords
- target
- virtual grid
- grid body
- virtual
- special effect
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T90/00—Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The embodiment of the application discloses a method, a device, a computer device and a storage medium for processing a special effect animation, comprising the following steps: generating a virtual grid body combination model according to an initial reference spline and a plurality of virtual grid bodies; acquiring the spatial position information of each virtual grid body in the virtual grid body combination model to obtain a plurality of pieces of spatial position information; adjusting particle emission parameters of a particle emitter based on the plurality of pieces of spatial position information and specified texture information to obtain a target particle emitter; and emitting a plurality of target particles with target textures at each spatial position through the target particle emitter, so as to form a target special effect animation from the plurality of target particles. According to the embodiment of the application, the special effect animation can be produced entirely within the virtual engine, which shortens production time; moreover, the virtual grid body combination model can be adjusted in real time, so that the target special effect animation can be adjusted on demand, improving both the reuse rate and the production efficiency of special effect animations.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and apparatus for processing a special effect animation, a computer device, and a storage medium.
Background
To meet people's growing demand for entertainment, applications such as games that can run on terminals have been developed. To give players a better game experience, many terminal games are constructed based on real scenes and real-world phenomena, so that game resources such as virtual scenes and virtual animations are expected, at design time, to be as close to the real environment as possible.
In actual game design work, art and planning teams frequently request new game resources: for example, moving special effect animations that simulate real life, or moving special effect animations displayed when a game skill is released, which travel along a path designed by the game producer. At present, a common way to produce a moving special effect is to edit a curve in a digital content creation tool so that the source of the special effect animation produces a skeletal animation along the curve, and then import the skeletal animation into a game engine, thereby realizing the motion of the special effect animation in the game engine. However, this conventional production process introduces a large number of redundant import and export steps, requires multiple software tools, and is prone to errors caused by differing configuration environments and platforms. Because production is split across multiple software tools, the motion path corresponding to the special effect cannot be previewed in real time, making adjustment and modification inconvenient; as a result, special effect production involves numerous steps and is inefficient.
Disclosure of Invention
The embodiment of the application provides a method, a device, a computer device and a storage medium for processing a special effect animation, which make it possible to produce the special effect animation entirely within a virtual engine and shorten production time. After a virtual grid body combination model is generated using a spline component and a grid body component, a particle emitter is used to form a target special effect animation according to the virtual grid body combination model; because the virtual grid body combination model can be adjusted in real time, the target special effect animation can also be adjusted in real time, improving the reuse rate and the production efficiency of special effect animations.
The embodiment of the application provides a processing method of special effect animation, which is applied to a virtual engine, and comprises the following steps:
generating a virtual grid body combination model according to the initial reference spline line and the plurality of virtual grid bodies;
acquiring the space position information of each virtual grid body in the virtual grid body combination model to obtain a plurality of space position information;
adjusting particle emission parameters of the particle emitter based on the plurality of spatial position information and the specified texture information to obtain a target particle emitter;
and emitting, through the target particle emitter, a plurality of target particles with target textures at each spatial position, so as to form a target special effect animation according to the plurality of target particles.
Correspondingly, the embodiment of the application also provides a processing device of the special effect animation, which comprises:
the generating unit is used for generating a virtual grid body combination model according to the initial reference spline line and the plurality of virtual grid bodies;
the acquisition unit is used for acquiring the space position information of each virtual grid body in the virtual grid body combination model to obtain a plurality of space position information;
the adjusting unit is used for adjusting particle emission parameters of the particle emitter based on the plurality of spatial position information and the specified texture information so as to obtain a target particle emitter;
and the processing unit is used for emitting, through the target particle emitter, a plurality of target particles with target textures at each spatial position, so as to form a target special effect animation according to the plurality of target particles.
Correspondingly, the embodiment of the application also provides a computer device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the method for processing a special effect animation according to any one of the above.
Accordingly, the embodiment of the application further provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method for processing a special effect animation according to any one of the above.
The embodiment of the application provides a method, a device, a computer device and a storage medium for processing a special effect animation: a virtual grid body combination model is generated according to an initial reference spline and a plurality of virtual grid bodies; the spatial position information of each virtual grid body in the virtual grid body combination model is then acquired to obtain a plurality of pieces of spatial position information; particle emission parameters of a particle emitter are then adjusted based on the plurality of pieces of spatial position information and specified texture information to obtain a target particle emitter; finally, a plurality of target particles with target textures are emitted at each spatial position through the target particle emitter, so that a target special effect animation is formed from the plurality of target particles. According to the embodiment of the application, the special effect animation can be produced entirely within the virtual engine, shortening production time; after the virtual grid body combination model is produced from the spline and the grid bodies, the particle emitter forms the target special effect animation according to the model, and because the model can be adjusted in real time, the target special effect animation can also be adjusted in real time, improving both the reuse rate and the production efficiency of special effect animations.
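The four steps restated above can be sketched as a toy pipeline. This is an illustrative sketch only: every type and function name here (ParticleEmitterParams, BuildGridPositions, and so on) is hypothetical rather than taken from the patent or any engine API, the spline is reduced to a straight segment along the x axis, and "emitting" is modeled as producing one particle per spawn position.

```cpp
#include <array>
#include <vector>

using Vec3 = std::array<double, 3>;

// Stands in for the adjusted particle emission parameters of step 3.
struct ParticleEmitterParams {
    std::vector<Vec3> spawnPositions; // one spawn point per virtual grid body
    int textureId = 0;                // stands in for the specified texture info
};

// Steps 1-2: place meshCount grid bodies evenly along a spline of the given
// length and collect their spatial positions.
std::vector<Vec3> BuildGridPositions(double splineLength, int meshCount)
{
    std::vector<Vec3> positions;
    const double spacing = splineLength / meshCount;
    for (int i = 0; i < meshCount; ++i)
        positions.push_back({i * spacing, 0.0, 0.0});
    return positions;
}

// Step 3: adjust the emitter parameters from the positions and texture info.
ParticleEmitterParams ConfigureEmitter(const std::vector<Vec3>& positions,
                                       int textureId)
{
    return ParticleEmitterParams{positions, textureId};
}

// Step 4: "emit" target particles, here simply one particle per spawn position.
std::vector<Vec3> EmitParticles(const ParticleEmitterParams& params)
{
    return params.spawnPositions;
}
```

Because the whole pipeline is driven by the positions derived from the spline, editing the spline (and regenerating the positions) immediately changes where particles spawn, which is the real-time adjustability the summary claims.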
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a system schematic diagram of a processing apparatus for special effect animation according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a method for processing special effect animation according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a processing device for special effect animation according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The embodiment of the application provides a method and a device for processing a special effect animation, a computer device and a storage medium. Specifically, the method for processing the special effect animation according to the embodiment of the application may be performed by a computer device, where the computer device may be a terminal, a server or the like. The terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer (PC, Personal Computer) or a personal digital assistant (Personal Digital Assistant, PDA), and may further include a client, which may be a game application client, a browser client carrying a game program, an instant messaging client, or the like. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data and artificial intelligence platforms.
For example, when the processing method of the special effect animation is run on the terminal, the terminal device stores a game application program and is used for presenting a virtual scene in a game screen. The terminal device is used for interacting with a user through a graphical user interface, for example, the terminal device downloads and installs a game application program and runs the game application program. The way in which the terminal device presents the graphical user interface to the user may include a variety of ways, for example, the graphical user interface may be rendered for display on a display screen of the terminal device, or presented by holographic projection. For example, the terminal device may include a touch display screen for presenting a graphical user interface including game screens and receiving operation instructions generated by a user acting on the graphical user interface, and a processor for running the game, generating the graphical user interface, responding to the operation instructions, and controlling the display of the graphical user interface on the touch display screen.
For example, when the method for processing the special effect animation runs on a server, the game may be a cloud game. Cloud gaming refers to a gaming mode based on cloud computing. In the running mode of a cloud game, the running body of the game application program is separated from the body that presents the game picture: the storage and running of the method for processing the special effect animation are completed on a cloud game server, while game picture presentation is completed at a cloud game client. The cloud game client is mainly used for receiving and sending game data and presenting game pictures; for example, it may be a display device with a data transmission function near the user side, such as a mobile terminal, a television, a computer, a palm computer or a personal digital assistant, but the terminal device that processes the game data is the cloud game server in the cloud. When playing the game, the user operates the cloud game client to send an operation instruction to the cloud game server; the cloud game server runs the game according to the operation instruction, encodes and compresses data such as game pictures, and returns the data to the cloud game client through the network; finally, the cloud game client decodes the data and outputs the game pictures.
Referring to fig. 1, fig. 1 is a schematic view of a scene of a special effect animation processing system according to an embodiment of the present application. The system may include at least one terminal, at least one server, at least one database, and a network. The terminal held by the user can be connected to the server of different games through the network. A terminal is any device having computing hardware capable of supporting and executing a software product corresponding to a game. In addition, when the system includes a plurality of terminals, a plurality of servers, and a plurality of networks, different terminals may be connected to each other through different networks, through different servers. The network may be a wireless network or a wired network, such as a wireless local area network (Wireless Local Area Network, WLAN), a local area network (local area network, LAN), a cellular network, a 2G network, a 3G network, a 4G network, a 5G network, etc. In addition, the different terminals may be connected to other terminals or to a server or the like using their own bluetooth network or hotspot network. For example, multiple users may be online through different terminals to connect and synchronize with each other through an appropriate network to support multiplayer games. In addition, the system may include multiple databases coupled to different servers and information related to the gaming environment may be continuously stored in the databases as different users play multiplayer games online.
It should be noted that, the schematic view of the scene of the special effect animation processing system shown in fig. 1 is only an example, and the special effect animation processing system and the scene described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application, and as a person of ordinary skill in the art can know that, with the appearance of a new service scene, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
According to the method for processing the special effect animation provided by the embodiment of the application, after a spline component and a grid body component are used in the virtual engine to generate a virtual grid body combination model, a particle emitter forms a target special effect animation according to the virtual grid body combination model, yielding a target special effect animation that can be previewed and adjusted in real time during production. On this basis, game makers can adjust the target special effect animation according to specific requirements to obtain a special effect animation that meets those requirements, so that various target special effect animations can be generated rapidly, improving the reuse rate and the production efficiency of target special effect animations.
The method for processing the special effect animation provided by the embodiment of the application may be applied in the Unreal Engine (UE). The Unreal Engine is a game engine, that is, the core component of an editable computer game system or an interactive real-time graphics application. It provides game designers with the various tools required to write games, so that game development can proceed quickly.
The embodiment of the application provides a processing method, a device, computer equipment and a storage medium for special effect animation, which can improve the multiplexing rate of the special effect animation and improve the production efficiency of the special effect animation. The following will describe in detail. The following description of the embodiments is not intended to limit the preferred embodiments.
Referring to fig. 2, fig. 2 is a schematic flow chart of a method for processing a special effect animation according to an embodiment of the present application, where the specific flow of the method may be as shown in steps 101 to 104 below:
101, generating a virtual grid body combination model according to the initial reference spline line and a plurality of virtual grid bodies.
In embodiments of the present application, a computer device may create a target virtual scene in the virtual engine in response to a virtual scene creation instruction. A game designer can then create an initial reference spline through the spline component in the virtual engine and add it to the target virtual scene; the game designer may also create a plurality of virtual grid bodies through the grid body component and add them to the target virtual scene, where the number of virtual grid bodies may be set according to the game designer's specific design requirements.
Specifically, a game designer may create a blueprint of an Actor class in the Unreal Engine, and then add a spline component and a static mesh component to that blueprint, thereby adding a spline and a plurality of virtual grid bodies to the blueprint. Further, adding the static mesh component to the blueprint can attach static meshes to the spline for the purpose of visualizing it; the visualized spline is made up of multiple meshes.
Here, Actors are all objects in the Unreal Engine that can be placed into a level, such as cameras, static mesh bodies, and player start positions. An Actor supports three-dimensional transformations such as translation, rotation and scaling, and can be created (spawned) or destroyed by game logic code (C++ or blueprint). A Spline Component is a path for defining and using position data. Splines added by the spline component can be fully edited in the blueprint viewport and the level editor: spline points can be added, removed or duplicated, the tangent type of each spline point can be changed, and spline points can even be animated per tick. In addition, splines can be edited by the blueprint construction script, and splines edited in the blueprint viewport or level editor can be further modified.
To visualize the virtual grid bodies for the game designer working in the virtual engine, the step of generating a virtual grid body combination model from the initial reference spline and the plurality of virtual grid bodies may include:
and based on the line information of the initial reference spline, arranging and combining each virtual grid body in the plurality of virtual grid bodies on the initial reference spline to generate a virtual grid body combination model, wherein the line information characterizes the line attribute of the initial reference spline.
The virtual grid body combination model may be an ordered set or whole formed by a specified number of virtual grid bodies according to the spatial position information of each virtual grid body. The virtual grid body provided in the embodiment of the application is an instanced static mesh (ISM) generated by an instanced static mesh component; because a plurality of instanced static meshes can be rendered with a single rendering instruction, using instanced static meshes makes rendering convenient and fast. Specifically, the virtual grid body combination model generated by arranging and combining the virtual grid bodies on the initial reference spline based on the line information of the initial reference spline may be a linear virtual grid body combination model.
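The rendering benefit described above (many instanced static meshes drawn with one rendering instruction) can be illustrated with a toy draw-call count. The types and functions here are hypothetical stand-ins, not the Unreal Engine API:

```cpp
#include <vector>

// One mesh asset with many placed instances; each entry stands in for the
// transform of one placed instance.
struct InstancedMesh {
    std::vector<int> instanceTransformIds;
};

// Naive path: one draw command per placed mesh body.
int DrawCallsNaive(const InstancedMesh& mesh)
{
    return static_cast<int>(mesh.instanceTransformIds.size());
}

// Instanced path: a single instanced draw command covers every instance.
int DrawCallsInstanced(const InstancedMesh& mesh)
{
    return mesh.instanceTransformIds.empty() ? 0 : 1;
}
```

The draw-call count stays constant no matter how many grid bodies the combination model contains, which is why the embodiment favors instanced static meshes.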
Specifically, the line attribute of the initial reference spline at least comprises a line length. The step of arranging and combining each of the plurality of virtual grid bodies on the initial reference spline based on the line information of the initial reference spline to generate a virtual grid body combination model may include:
determining a target distance based on the line length of the initial reference spline and the number of the plurality of virtual grid bodies, wherein the target distance is the spacing at which the virtual grid bodies are distributed along the initial reference spline;
and arranging and combining the plurality of virtual grid bodies on the initial reference spline according to the target distance, to generate a virtual grid body combination model.
Further, the step of "arranging and combining the plurality of virtual grid bodies on the initial reference spline according to the target distance to generate a virtual grid body combination model" may include:
dividing the initial reference spline according to the target distance to obtain a target reference spline with a plurality of line points, wherein each line point is used for indicating the arrangement position of a virtual grid body on the initial reference spline;
and arranging each of the plurality of virtual grid bodies on the target reference spline based on the setting positions of the line points on the target reference spline, to generate a virtual grid body combination model.
For example, a game designer creates an initial reference spline in the target virtual scene through the spline component in the virtual engine and adds it to the scene; the game designer can set a line length for the initial reference spline through the spline component. Likewise, the game designer may create a specified number of virtual grid bodies through the grid body component and add them to the target virtual scene. The computer device can then determine the target distance between two adjacent virtual grid bodies when the virtual grid bodies are distributed along the initial reference spline, based on the line length and the specified number: the target distance is obtained by dividing the line length by the specified number. The initial reference spline can then be divided, according to its two endpoints, the line length and the target distance, to obtain a target reference spline with a plurality of line points, where each line point has a corresponding setting position on the target reference spline. Finally, each of the specified number of virtual grid bodies can be associated with a corresponding line point, so that the virtual grid bodies are arranged on the target reference spline and a virtual grid body combination model is generated.
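Under the division described in this example, the target distance is the line length divided by the specified number, and each line point sits one target distance further along the spline than the previous one. A minimal sketch of that computation follows; the function name is hypothetical, and placing the first line point at the spline's start (so the end point receives no line point) is an assumption about the division, not something the patent states.

```cpp
#include <vector>

// Distances along the spline at which line points (and hence virtual grid
// bodies) are placed: point i sits at i * (splineLength / meshCount).
std::vector<double> LinePointDistances(double splineLength, int meshCount)
{
    std::vector<double> distances;
    if (meshCount <= 0) return distances;
    const double targetDistance = splineLength / meshCount; // target distance
    for (int i = 0; i < meshCount; ++i)
        distances.push_back(i * targetDistance);
    return distances;
}
```

For a spline of length 100 and 4 grid bodies this yields line points at distances 0, 25, 50 and 75 along the spline.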
102, acquiring the spatial position information of each virtual grid body in the virtual grid body combination model, to obtain a plurality of pieces of spatial position information.
To determine the spatial position information of each virtual grid body, the step of acquiring the spatial position information of each virtual grid body in the virtual grid body combination model to obtain a plurality of pieces of spatial position information may include:
sequentially performing loop processing on each line point of the target reference spline based on the number of grid bodies, the target distance and the current loop count, to obtain the spatial position information of each line point of the target reference spline, and using that spatial position information as the spatial position information of the virtual grid body corresponding to the line point.
For example, in the embodiment of the application, the computer device may loop over the target distance between two adjacent virtual grid bodies through the programming interface, so as to obtain the spatial position information of each line point of the target reference spline. Specifically, the computer device uses a loop function in the virtual engine, takes the specified number as the loop count, and processes each line point on the target reference spline in turn, taking a length as the input of each iteration to obtain the spatial position information of the corresponding reference point. The target reference spline comprises a starting reference point, an ending reference point, and intermediate reference points between them; the length used in each iteration is determined from the current loop count: multiplying the current loop count by the target distance gives the target length used in the next iteration, and that target length is taken as the input of the next iteration to obtain the spatial position information of the corresponding reference point. Here, the current loop count is the count corresponding to the current iteration after loop processing has started.
Specifically, the step of sequentially performing cyclic processing on each line point of the target reference spline based on the number of mesh bodies, the target distance and the current cyclic times to obtain spatial position information of each line point of the target reference spline, and using the spatial position information as spatial position information of a virtual mesh body corresponding to the line point, where the method may include:
sequentially performing cyclic processing on each line point on the target reference spline based on the number of the grid bodies, the target distance and the current cyclic times to obtain the spatial position information of each line point on the target reference spline;
and binding the spatial position information of each line point of the target reference spline with the virtual grid body corresponding to the line point respectively to serve as the spatial position information of the virtual grid body corresponding to the line point.
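The loop just described reduces to straightforward arithmetic: the current cycle count multiplied by the target distance gives an arc length, and the spline is sampled at that length. A minimal, engine-free Python sketch of this bookkeeping (`position_at_length` is a hypothetical stand-in for whatever spline-sampling call the engine provides):

```python
def positions_along_spline(position_at_length, mesh_count, target_distance):
    """For each cycle i, use i * target_distance as the arc-length input
    and record the spline point there; each point becomes the spatial
    position bound to the i-th virtual grid body."""
    positions = []
    for i in range(mesh_count):              # i is the current cycle count
        target_length = i * target_distance  # length used in this cycle
        positions.append(position_at_length(target_length))
    return positions

# Toy straight-line "spline" along the x-axis for illustration.
line = lambda d: (d, 0.0, 0.0)
print(positions_along_spline(line, 4, 2.5))
# -> [(0.0, 0.0, 0.0), (2.5, 0.0, 0.0), (5.0, 0.0, 0.0), (7.5, 0.0, 0.0)]
```

In the virtual engine, the per-cycle length would instead be fed to the engine's spline-sampling node; the sketch only shows how cycle count, target distance, and the resulting positions relate.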
In order to facilitate the game maker in previewing the length relation of the spline in real time, the method provided by the embodiment of the application may further include:
determining a material color value of each virtual grid body in the virtual grid body combination model based on the current circulation times of each virtual grid body in the virtual grid body combination model and the number of the grid bodies;
And associating each virtual grid body in the virtual grid body combination model with a corresponding material color value, so that each virtual grid body in the virtual grid body combination model has the material effect of the corresponding material color value.
For example, the computer device may declare a material instance for each virtual grid body in the virtual grid body combination model in the virtual engine, so as to add a material effect to each virtual grid body. The result obtained by dividing the current cycle count corresponding to each virtual grid body by the designated number of virtual grid bodies is used as the gray value of that virtual grid body, and the gray value of each virtual grid body is passed into the shader through a programming interface. In this way, the material color of the virtual grid bodies presents a gradient effect, from light to dark or from dark to light, from the starting reference point to the ending reference point of the target reference spline, which facilitates the game maker in previewing the effect of the spline in real time.
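The gray-value arithmetic described above is simply the current cycle count divided by the designated number of grid bodies, which yields a monotone ramp along the spline. A minimal, engine-free Python sketch (the function name is illustrative; in the engine the values would be written into a material parameter and read by the shader):

```python
def gradient_gray_values(mesh_count):
    """Divide each grid body's current cycle count by the total number of
    grid bodies, giving a light-to-dark gray ramp from the starting
    reference point to the ending reference point."""
    return [i / mesh_count for i in range(mesh_count)]

print(gradient_gray_values(4))   # -> [0.0, 0.25, 0.5, 0.75]
```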
Optionally, in order to enable the game maker to perform personalized adjustment on the target reference spline, the game maker may adjust the position and length of the target reference spline in the virtual engine, and may further adjust the motion track of the target reference spline by copying an end point of the target reference spline, for example to implement a turn in the motion track. For example, the game maker may adjust the coordinate positions of the reference points of the target reference spline in the virtual scene, thereby adjusting the position of the target reference spline in the virtual scene. Alternatively, an end point of the target reference spline may be copied to generate a new end point, and interpolation processing may be performed between the original end point and the copied end point to obtain a line segment between the two end points, thereby implementing a turn in the motion track of the target reference spline.
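The turning adjustment amounts to copying an end point, repositioning the copy, and interpolating between the two to form the new segment. A hedged Python sketch of the interpolation step, assuming simple linear interpolation between 3-D points (the patent does not fix the interpolation scheme):

```python
def interpolate_segment(end_point, new_end_point, steps):
    """Linearly interpolate between the original end point and the copied
    (then repositioned) end point, producing the line points of the new
    segment that turns the motion track."""
    return [
        tuple(a + (b - a) * t / steps for a, b in zip(end_point, new_end_point))
        for t in range(steps + 1)
    ]

# Copy the end point (2, 0, 0), move the copy to (2, 2, 0), and interpolate:
# the spline's track now turns 90 degrees at the original end point.
print(interpolate_segment((2.0, 0.0, 0.0), (2.0, 2.0, 0.0), 4))
```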
And 103, adjusting the particle emission parameters of the particle emitter based on the plurality of spatial position information and the specified texture information to obtain the target particle emitter.
In an embodiment, the step of adjusting the particle emission parameters of the particle emitter based on the plurality of spatial location information and the specified texture information to obtain the target particle emitter may include:
adjusting particle emission parameters of the particle emitter based on the plurality of spatial position information to obtain a processed particle emitter;
acquiring specified texture information, and adjusting particle emission parameters of the processed particle emitter based on the specified texture information to obtain the target particle emitter.
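The two-stage adjustment can be pictured as setting the emitter's emission positions first, then its texture. A minimal Python sketch in which the emitter is modeled as a plain dictionary; the field names `emit_positions` and `particle_texture` are invented for illustration and do not correspond to any real engine API:

```python
def adjust_emitter(emitter, spatial_positions, texture_info=None):
    """First bind the spatial positions to the emitter's emission
    parameters, then (optionally) bind the specified texture, mirroring
    the two adjustment steps described above."""
    processed = dict(emitter)  # leave the original emitter untouched
    processed["emit_positions"] = list(spatial_positions)
    if texture_info is not None:
        processed["particle_texture"] = texture_info
    return processed

emitter = {"rate": 100}
target = adjust_emitter(emitter, [(0, 0, 0), (1, 0, 0)], "spark.png")
print(target)
```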
And 104, emitting a plurality of target particles with target textures at each spatial position through the target particle emitter so as to form a target special effect animation according to the plurality of target particles.
In this embodiment of the present application, a game maker may call the target particle emitter obtained in the above steps in the virtual engine, and emit, by using the target particle emitter, a plurality of target particles having a target texture at each spatial position, so as to form a target special effect animation according to the plurality of target particles, where the target special effect animation is a special effect animation with a motion track, and the motion track is the line track of the initial reference spline.
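Conceptually, the emission step produces a set of textured particles, several per spatial position, whose union traces the spline's track. A toy Python sketch of that bookkeeping (particle records as dictionaries; all names are illustrative, not an engine API):

```python
def emit_target_particles(spatial_positions, texture, particles_per_position=3):
    """Emit several textured particles at every spatial position; taken
    together, the particles trace the motion track of the initial
    reference spline and form the target special effect animation."""
    return [
        {"position": pos, "texture": texture, "index": k}
        for pos in spatial_positions
        for k in range(particles_per_position)
    ]

particles = emit_target_particles([(0, 0, 0), (1, 0, 0)], "glow.png", 2)
print(len(particles))   # -> 4
```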
In order to enable a game maker to perform personalized adjustment on target special effect animation and generate special effect animation meeting the requirements of the game maker, the method provided by the embodiment of the application can further comprise the following steps:
determining an adjusted position of the virtual mesh body on the initial reference spline in response to a model adjustment instruction for the virtual mesh body combination model;
based on the adjusted position of the virtual grid body, updating the virtual grid body combination model to obtain an updated virtual grid body combination model;
adjusting the target particle emitter based on the updated virtual grid body combination model to obtain a processed target particle emitter;
and calling the processed target particle emitter to generate a new special effect animation.
The model adjustment instruction is used for adjusting the arrangement positions of the virtual grid bodies on the initial reference spline. Specifically, the arrangement position of each virtual grid body on the initial reference spline may be adjusted according to the position offset parameter carried by the model adjustment instruction for that virtual grid body, so as to obtain a plurality of adjusted virtual grid bodies, and the updated virtual grid body combination model is obtained according to the plurality of adjusted virtual grid bodies and the initial reference spline.
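Applying the instruction's per-body position offset parameters is a component-wise shift of each arrangement position. A minimal Python sketch, assuming offsets are given as 3-D vectors (the patent leaves the offset representation unspecified):

```python
def apply_position_offsets(positions, offsets):
    """Shift each virtual grid body's arrangement position by the offset
    carried in the model adjustment instruction; the shifted positions
    define the updated virtual grid body combination model."""
    return [
        tuple(p + o for p, o in zip(pos, off))
        for pos, off in zip(positions, offsets)
    ]

print(apply_position_offsets([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)],
                             [(0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]))
# -> [(0.0, 1.0, 0.0), (2.0, -1.0, 0.0)]
```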
Further, in the step of "the adjusting the target particle emitter based on the updated virtual grid body combination model to obtain a processed target particle emitter", the method may include:
according to the adjusted spatial position information of each virtual grid body in the updated virtual grid body combination model, adjusting the particle emission parameters of the target particle emitter to obtain a processed target particle emitter;
and calling the processed target particle emitter, and emitting a plurality of particles with target textures at each adjusted spatial position through the processed target particle emitter so as to form a new special effect animation.
In order to further explain the processing method of the special effect animation provided in the embodiment of the present application, an application of the processing method of the special effect animation in a specific implementation scenario will be described below, where the specific application scenario is as follows:
(1) The game designer may create a blueprint of the Actor class in the Unreal Engine, and then add a spline component and static mesh body components to the blueprint of the Actor class to obtain the initial reference spline and a plurality of virtual grid bodies. Adding static mesh body components in the blueprint adds static mesh bodies to the initial reference spline for the purpose of visualizing the initial reference spline; the visualized spline is composed of a plurality of virtual grid bodies.
(2) The virtual engine may determine a target pitch based on the line length of the initial reference spline and the number of meshes of the plurality of virtual grid bodies, wherein the target pitch is the pitch at which each virtual grid body of the plurality of virtual grid bodies is distributed on the initial reference spline. Then, according to the target pitch, the plurality of virtual grid bodies are arranged and combined on the initial reference spline to generate a linear virtual grid body combination model. Specifically, the virtual engine divides the initial reference spline according to the target pitch to obtain a target reference spline with a plurality of line points, wherein each line point is used for indicating the arrangement position of a virtual grid body on the initial reference spline; then, based on the setting positions of the line points on the target reference spline, each virtual grid body of the plurality of virtual grid bodies is arranged on the target reference spline to generate a linear virtual grid body combination model.
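Step (2) boils down to one division and one partition: the target pitch is the spline's line length divided by the number of grid bodies, and the line points sit at successive multiples of that pitch. A minimal Python sketch of this arithmetic (placing the first line point at the spline start is an assumption; the patent does not fix the endpoint convention):

```python
def divide_spline(line_length, mesh_count):
    """Target pitch = spline length / number of grid bodies; dividing the
    spline at this pitch yields the arc lengths of the line points where
    each virtual grid body is arranged."""
    target_pitch = line_length / mesh_count
    line_point_lengths = [i * target_pitch for i in range(mesh_count)]
    return target_pitch, line_point_lengths

print(divide_spline(10.0, 5))   # -> (2.0, [0.0, 2.0, 4.0, 6.0, 8.0])
```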
(3) In order to facilitate previewing the length relation of the spline in real time, the computer device can declare a material instance for each virtual grid body in the virtual grid body combination model in the virtual engine, so as to add a material effect to each virtual grid body. The result obtained by dividing the current cycle count corresponding to each virtual grid body by the designated number of virtual grid bodies is used as the gray value of that virtual grid body, and the gray value of each virtual grid body is passed into the shader through a programming interface, so that the material color of the virtual grid bodies presents a gradient effect, from light to dark or from dark to light, from the starting reference point to the ending reference point of the target reference spline, thereby facilitating the game maker in previewing the motion track of the initial reference spline in real time.
(4) The virtual engine can sequentially perform cyclic processing on each line point of the target reference spline based on the number of grid bodies, the target distance and the current cycle count, so as to obtain the spatial position information of each line point of the target reference spline, which is used as the spatial position information of the virtual grid body corresponding to the line point. Specifically, the virtual engine uses the number of generated grid bodies as the number of cycles. In each cycle, the current cycle count is multiplied by the target distance to obtain the length for that cycle; the obtained length is then used as input, through a programming interface, to obtain the spatial position information of the line point at the corresponding length on the target reference spline, which is taken as the spatial position information of the virtual grid body corresponding to that line point.
(5) The virtual engine may adjust the particle emission parameters of the particle emitter based on the plurality of spatial location information to obtain a processed particle emitter. The virtual engine can acquire the appointed texture information, and adjusts the particle emission parameters of the processed particle emitter based on the appointed texture information to obtain the target particle emitter.
(6) The virtual engine may invoke the target particle emitter, through which a plurality of target particles having a target texture are emitted at respective spatial locations to form a target special effect animation from the plurality of target particles.
(7) The virtual engine may also determine an adjusted position of the virtual mesh body on the initial reference spline in response to a model adjustment instruction for the virtual mesh body combination model; based on the adjusted position of the virtual grid body, updating the virtual grid body combination model to obtain an updated virtual grid body combination model; adjusting the target particle emitter based on the updated virtual grid body combination model to obtain a processed target particle emitter; and calling the processed target particle emitter to generate a new special effect animation. Specifically, according to the adjusted spatial position information of each virtual grid body in the updated virtual grid body combination model, adjusting the particle emission parameters of the target particle emitter to obtain a processed target particle emitter; and calling the processed target particle emitter, and emitting a plurality of particles with target textures at each adjusted spatial position through the target particle emitter so as to form a new special effect animation.
In summary, the embodiment of the application provides a method for processing a special effect animation, which generates a virtual grid body combination model according to an initial reference spline and a plurality of virtual grid bodies; then acquires the spatial position information of each virtual grid body in the virtual grid body combination model to obtain a plurality of pieces of spatial position information; then adjusts the particle emission parameters of the particle emitter based on the plurality of pieces of spatial position information and the specified texture information to obtain a target particle emitter; and finally emits a plurality of target particles with target textures at each spatial position through the target particle emitter, so as to form a target special effect animation according to the plurality of target particles. According to the embodiment of the application, the special effect animation can be produced solely in the virtual engine, which shortens the special effect animation production time. After the virtual grid body combination model is produced using the spline component and the mesh body components, the target special effect animation is formed by the particle emitter according to the virtual grid body combination model, and the virtual grid body combination model can be adjusted in real time, so that the target special effect animation is adjusted in real time. This improves the reuse rate of special effect animations and the production efficiency of special effect animations.
In order to better implement the method, the embodiment of the application can also provide a processing device of the special effect animation, and the processing device of the special effect animation can be integrated in a computer device, for example, a computer device such as a terminal.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a special effect animation processing device according to an embodiment of the present application, where the device includes:
a generating unit 201, configured to generate a virtual grid body combination model according to an initial reference spline line and a plurality of virtual grid bodies;
an obtaining unit 202, configured to obtain spatial position information of each virtual grid body in the virtual grid body combination model, so as to obtain multiple pieces of spatial position information;
an adjusting unit 203, configured to adjust a particle emission parameter of the particle emitter based on the plurality of spatial position information and the specified texture information, so as to obtain a target particle emitter;
a processing unit 204, configured to emit, by the target particle emitter, a plurality of target particles having a target texture at each spatial position, so as to form a target special effect animation according to the plurality of target particles.
In some embodiments, the processing apparatus of the special effects animation includes:
the first generation subunit is configured to perform permutation and combination on each virtual grid body in the plurality of virtual grid bodies on the initial reference spline based on line information of the initial reference spline, and generate a virtual grid body combination model, where the line information characterizes line properties of the initial reference spline.
In some embodiments, the processing apparatus of the special effects animation includes:
a first determining subunit, configured to determine a target pitch based on a line length of the initial reference spline and the number of meshes of the plurality of virtual meshes, where the target pitch is a pitch that is set by distributing each virtual mesh in the plurality of virtual meshes on the initial reference spline;
and the second generation subunit is used for arranging and combining the plurality of virtual grid bodies on the initial reference spline according to the target distance to generate a virtual grid body combination model.
In some embodiments, the processing apparatus of the special effects animation includes:
the dividing subunit is used for dividing the initial reference spline according to the target distance to obtain a target reference spline with a plurality of line points, wherein each line point is used for indicating the arrangement position of a virtual grid body on the initial reference spline;
and a third generation subunit, configured to arrange each virtual grid body in the plurality of virtual grid bodies on the target reference spline line based on the setting position of each line point on the target reference spline line, so as to generate a virtual grid body combination model.
In some embodiments, the processing apparatus of the special effects animation includes:
the first processing subunit is configured to sequentially perform cyclic processing on each line point of the target reference spline based on the number of mesh bodies, the target distance and the current cyclic times, so as to obtain spatial position information of each line point of the target reference spline, and use the spatial position information as spatial position information of a virtual mesh body corresponding to the line point.
In some embodiments, the processing apparatus of the special effects animation includes:
the second processing subunit is used for sequentially performing cyclic processing on each line point on the target reference spline based on the number of the grid bodies, the target distance and the current cyclic times to obtain the spatial position information of each line point on the target reference spline;
and the second processing subunit is further configured to bind the spatial position information of each line point of the target reference spline with the virtual grid body corresponding to the line point, so as to serve as the spatial position information of the virtual grid body corresponding to the line point.
In some embodiments, the processing apparatus of the special effects animation includes:
the second determining subunit is used for determining the material color value of each virtual grid body in the virtual grid body combination model based on the current circulation times of each virtual grid body in the virtual grid body combination model and the number of the grid bodies;
And the third processing subunit is used for carrying out association processing on each virtual grid body in the virtual grid body combination model and the corresponding material color value so as to enable each virtual grid body in the virtual grid body combination model to have the material effect of the corresponding material color value.
In some embodiments, the processing apparatus of the special effects animation includes:
a fourth processing subunit, configured to adjust particle emission parameters of the particle emitter based on the plurality of spatial location information, so as to obtain a processed particle emitter;
and the fourth processing subunit is further used for acquiring specified texture information, and adjusting the particle emission parameters of the processed particle emitter based on the specified texture information to obtain the target particle emitter.
In some embodiments, the processing apparatus of the special effects animation includes:
a third determination subunit configured to determine an adjusted position of the virtual grid body on the initial reference spline in response to a model adjustment instruction for the virtual grid body combination model;
a fifth processing subunit, configured to update the virtual grid body combination model based on the adjusted position of the virtual grid body, to obtain an updated virtual grid body combination model;
The fifth processing subunit is further configured to perform adjustment processing on the target particle emitter based on the updated virtual grid body combination model, so as to obtain a processed target particle emitter;
and the fourth generation subunit is used for calling the processed target particle emitter to generate a new special effect animation.
In some embodiments, the processing apparatus of the special effects animation includes:
a sixth processing subunit, configured to adjust, according to the adjusted spatial position information of each virtual grid in the updated virtual grid combination model, a particle emission parameter of the target particle emitter, so as to obtain a processed target particle emitter;
and the sixth processing subunit is further used for calling the processed target particle emitter, and emitting a plurality of particles with target textures at each adjusted spatial position through the processed target particle emitter so as to form a new special effect animation.
The embodiment of the application discloses a processing device for special effect animation, which generates a virtual grid body combination model according to an initial reference spline and a plurality of virtual grid bodies through a generating unit 201; the obtaining unit 202 obtains the spatial position information of each virtual grid body in the virtual grid body combination model to obtain a plurality of pieces of spatial position information; the adjusting unit 203 adjusts the particle emission parameters of the particle emitter based on the plurality of pieces of spatial position information and the specified texture information to obtain a target particle emitter; and the processing unit 204 emits a plurality of target particles having a target texture at each spatial position through the target particle emitter to form a target special effect animation from the plurality of target particles. According to the embodiment of the application, the special effect animation can be produced in the virtual engine without separate model production application software, which shortens the special effect animation production time. After the virtual grid body combination model is produced using the spline component and the mesh body components, the target special effect animation is formed by the particle emitter according to the virtual grid body combination model, and the virtual grid body combination model can be adjusted in real time, so that the target special effect animation is adjusted in real time. This improves the reuse rate of special effect animations and the production efficiency of special effect animations.
Correspondingly, the embodiment of the application also provides a computer device, which may be a terminal or a server, wherein the terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen device, a game console, a personal computer (PC, Personal Computer), or a personal digital assistant (Personal Digital Assistant, PDA). Fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 4, the computer device 300 includes a processor 301 having one or more processing cores, a memory 302 having one or more computer-readable storage media, and a computer program stored on the memory 302 and executable on the processor. The processor 301 is electrically connected to the memory 302. It will be appreciated by those skilled in the art that the computer device structure shown in the figure does not limit the computer device, and the computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In the embodiment of the present application, the processor 301 in the computer device 300 loads the instructions corresponding to the processes of one or more application programs into the memory 302, and the processor 301 executes the application programs stored in the memory 302 so as to implement the following functions:
generating a virtual grid body combination model according to the initial reference spline line and the plurality of virtual grid bodies;
acquiring the space position information of each virtual grid body in the virtual grid body combination model to obtain a plurality of space position information;
adjusting particle emission parameters of the particle emitter based on the plurality of spatial position information and the specified texture information to obtain a target particle emitter;
and transmitting a plurality of target particles with target textures at each spatial position through the target particle transmitter so as to form a target special effect animation according to the plurality of target particles.
In an embodiment, the generating a virtual mesh body combination model according to the initial reference spline and the plurality of virtual mesh bodies includes:
and based on the line information of the initial reference spline, arranging and combining each virtual grid body in the plurality of virtual grid bodies on the initial reference spline to generate a virtual grid body combination model, wherein the line information characterizes the line attribute of the initial reference spline.
In an embodiment, the line properties of the initial reference spline line include at least a line length;
the generating a virtual grid body combination model based on the line information of the initial reference spline line by arranging and combining each virtual grid body in the plurality of virtual grid bodies on the initial reference spline line comprises the following steps:
determining a target distance based on the line length of the initial reference spline and the grid number of the plurality of virtual grid bodies, wherein the target distance is a distance distributed and set on the initial reference spline by each virtual grid body in the plurality of virtual grid bodies;
and according to the target distance, arranging and combining the plurality of virtual grid bodies on the initial reference spline line to generate a virtual grid body combination model.
In an embodiment, the generating a virtual mesh combination model by arranging and combining the plurality of virtual meshes on the initial reference spline according to the target pitch includes:
dividing the initial reference spline according to the target distance to obtain a target reference spline with a plurality of line points, wherein each line point is used for indicating the arrangement position of a virtual grid body on the initial reference spline;
And based on the setting positions of all line points on the target reference spline, arranging all virtual grid bodies in the plurality of virtual grid bodies on the target reference spline to generate a virtual grid body combination model.
In an embodiment, the obtaining spatial position information of each virtual grid body in the virtual grid body combination model to obtain a plurality of spatial position information includes:
and sequentially performing cyclic processing on each line point of the target reference spline based on the number of the grid bodies, the target distance and the current cyclic times to obtain the spatial position information of each line point of the target reference spline, and taking the spatial position information as the spatial position information of the virtual grid body corresponding to the line point.
In an embodiment, the sequentially performing a cyclic process on each line point of the target reference spline based on the number of mesh bodies, the target distance and the current cycle number to obtain spatial position information of each line point of the target reference spline, and using the spatial position information as spatial position information of a virtual mesh body corresponding to the line point, where the method includes:
sequentially performing cyclic processing on each line point on the target reference spline based on the number of the grid bodies, the target distance and the current cyclic times to obtain the spatial position information of each line point on the target reference spline;
And binding the spatial position information of each line point of the target reference spline with the virtual grid body corresponding to the line point respectively to serve as the spatial position information of the virtual grid body corresponding to the line point.
In an embodiment, further comprising:
determining a material color value of each virtual grid body in the virtual grid body combination model based on the current circulation times of each virtual grid body in the virtual grid body combination model and the number of the grid bodies;
and associating each virtual grid body in the virtual grid body combination model with a corresponding material color value, so that each virtual grid body in the virtual grid body combination model has the material effect of the corresponding material color value.
In an embodiment, the adjusting the particle emission parameter of the particle emitter based on the plurality of spatial location information and the specified texture information to obtain the target particle emitter includes:
adjusting particle emission parameters of the particle emitter based on the plurality of spatial position information to obtain a processed particle emitter;
acquiring specified texture information, and adjusting particle emission parameters of the processed particle emitter based on the specified texture information to obtain the target particle emitter.
In an embodiment, the method further comprises:
determining an adjusted position of the virtual mesh body on the initial reference spline in response to a model adjustment instruction for the virtual mesh body combination model;
based on the adjusted position of the virtual grid body, updating the virtual grid body combination model to obtain an updated virtual grid body combination model;
adjusting the target particle emitter based on the updated virtual grid body combination model to obtain a processed target particle emitter;
and calling the processed target particle emitter to generate a new special effect animation.
In an embodiment, the adjusting the target particle emitter based on the updated virtual grid body combination model to obtain a processed target particle emitter includes:
according to the adjusted spatial position information of each virtual grid body in the updated virtual grid body combination model, adjusting the particle emission parameters of the target particle emitter to obtain a processed target particle emitter;
and calling the processed target particle emitter, and emitting a plurality of particles with target textures at each adjusted spatial position through the processed target particle emitter so as to form a new special effect animation.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Optionally, as shown in fig. 4, the computer device 300 further includes: a touch display 303, a radio frequency circuit 304, an audio circuit 305, an input unit 306, and a power supply 307. The processor 301 is electrically connected to the touch display 303, the radio frequency circuit 304, the audio circuit 305, the input unit 306, and the power supply 307, respectively. Those skilled in the art will appreciate that the computer device structure shown in FIG. 4 is not limiting of the computer device and may include more or fewer components than shown, or may be combined with certain components, or a different arrangement of components.
The touch display 303 may be used to display a graphical user interface and to receive operation instructions generated by a user acting on the graphical user interface. The touch display 303 may include a display panel and a touch panel. The display panel may be used to display information entered by the user or provided to the user, as well as the various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The touch panel may be used to collect touch operations performed by the user on or near it (such as operations performed on or near the touch panel with a finger, a stylus, or any other suitable object or accessory) and to generate corresponding operation instructions, which in turn trigger the corresponding programs. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 301; it can also receive and execute commands sent from the processor 301. The touch panel may overlay the display panel; upon detecting a touch operation on or near it, the touch panel passes the operation to the processor 301 to determine the type of touch event, and the processor 301 then provides a corresponding visual output on the display panel according to the type of touch event.
In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display 303 to implement the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions respectively. That is, the touch display 303 may also implement an input function as part of the input unit 306.
In the embodiment of the present application, the processor 301 executes an application program to generate a graphical interface on the touch display screen 303. The touch display 303 is used for presenting a graphical interface and receiving an operation instruction generated by a user acting on the graphical interface.
The radio frequency circuit 304 may be used to transmit and receive radio frequency signals so as to establish wireless communication with a network device or another computer device.
The audio circuit 305 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. On the one hand, the audio circuit 305 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts collected sound signals into electrical signals, which are received by the audio circuit 305 and converted into audio data. The audio data is then output to the processor 301 for processing and transmitted, for example, to another computer device via the radio frequency circuit 304, or output to the memory 302 for further processing. The audio circuit 305 may also include an earphone jack to provide communication between a peripheral earphone and the computer device.
The input unit 306 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 307 is used to supply power to the various components of the computer device 300. Alternatively, the power supply 307 may be logically connected to the processor 301 through a power management system, so that functions such as charging, discharging, and power consumption management are performed through the power management system. The power supply 307 may further include components such as a direct-current or alternating-current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
Although not shown in fig. 4, the computer device 300 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described herein.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
As can be seen from the above, the computer device provided in this embodiment first generates a virtual grid body combination model from an initial reference spline and a plurality of virtual grid bodies; then acquires the spatial position information of each virtual grid body in the virtual grid body combination model to obtain a plurality of pieces of spatial position information; then adjusts the particle emission parameters of a particle emitter based on the plurality of pieces of spatial position information and specified texture information to obtain a target particle emitter; and finally emits, through the target particle emitter, a plurality of target particles with target textures at each spatial position, so that a target special effect animation is formed from the plurality of target particles. In the embodiment of the present application, the special effect animation can be produced entirely within the virtual engine, which shortens the production time. Moreover, after the virtual grid body combination model is produced with the spline component and the grid body component, the particle emitter forms the target special effect animation according to that model, and the model can be adjusted in real time, so that the target special effect animation is also adjusted in real time. This improves the reuse rate of special effect animations and the efficiency of their production.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods in the above embodiments may be completed by instructions, or by instructions controlling the relevant hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, the embodiments of the present application provide a computer readable storage medium having stored therein a plurality of computer programs that can be loaded by a processor to perform steps in any of the special effects animation processing methods provided by the embodiments of the present application. For example, the computer program may perform the steps of:
generating a virtual grid body combination model according to the initial reference spline line and the plurality of virtual grid bodies;
acquiring the space position information of each virtual grid body in the virtual grid body combination model to obtain a plurality of space position information;
adjusting particle emission parameters of the particle emitter based on the plurality of spatial position information and the specified texture information to obtain a target particle emitter;
and transmitting a plurality of target particles with target textures at each spatial position through the target particle transmitter so as to form a target special effect animation according to the plurality of target particles.
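The four steps listed above can be sketched end to end in Python. This is an illustrative reconstruction only: the class and function names, the even-spacing rule, and the string texture are assumptions for the sketch, not details taken from the embodiment.

```python
from dataclasses import dataclass, field

@dataclass
class ParticleEmitter:
    """Hypothetical emitter whose parameters are set from the mesh positions."""
    positions: list = field(default_factory=list)
    texture: str = ""

    def emit(self):
        # Emit one particle carrying the target texture at each spatial position.
        return [(pos, self.texture) for pos in self.positions]

def build_combination_model(spline_length, mesh_count):
    # Step 1 (assumed even distribution): place the virtual grid bodies
    # along the reference spline at intervals of length / count.
    spacing = spline_length / mesh_count
    return [i * spacing for i in range(mesh_count)]

def make_target_emitter(model_positions, texture):
    # Steps 2-3: read the spatial positions out of the combination model
    # and write them, plus the specified texture, into the emitter.
    emitter = ParticleEmitter()
    emitter.positions = list(model_positions)
    emitter.texture = texture
    return emitter

positions = build_combination_model(spline_length=90.0, mesh_count=3)
emitter = make_target_emitter(positions, texture="glow.png")
animation = emitter.emit()  # step 4: the particles that form the effect
```

Because the emitter only reads positions from the combination model, moving a grid body and rebuilding the model is enough to regenerate the effect, which is the real-time adjustment the application describes.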
In an embodiment, the generating a virtual mesh body combination model according to the initial reference spline and the plurality of virtual mesh bodies includes:
and based on the line information of the initial reference spline, arranging and combining each virtual grid body in the plurality of virtual grid bodies on the initial reference spline to generate a virtual grid body combination model, wherein the line information characterizes the line attribute of the initial reference spline.
In an embodiment, the line properties of the initial reference spline line include at least a line length;
the generating a virtual grid body combination model based on the line information of the initial reference spline line by arranging and combining each virtual grid body in the plurality of virtual grid bodies on the initial reference spline line comprises the following steps:
determining a target distance based on the line length of the initial reference spline and the grid number of the plurality of virtual grid bodies, wherein the target distance is a distance distributed and set on the initial reference spline by each virtual grid body in the plurality of virtual grid bodies;
and according to the target distance, arranging and combining the plurality of virtual grid bodies on the initial reference spline line to generate a virtual grid body combination model.
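The spacing computation described above admits a one-line sketch. The assumption that the target distance is simply the line length divided by the grid body count is one plausible reading of the embodiment, not stated verbatim in it.

```python
def target_distance(line_length, mesh_count):
    # Assumed rule: the grid bodies tile the spline evenly, so the spacing
    # is the spline's total length divided by the number of grid bodies.
    if mesh_count <= 0:
        raise ValueError("mesh_count must be positive")
    return line_length / mesh_count
```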
In an embodiment, the generating a virtual mesh combination model by arranging and combining the plurality of virtual meshes on the initial reference spline according to the target pitch includes:
dividing the initial reference spline according to the target distance to obtain a target reference spline with a plurality of line points, wherein each line point is used for indicating the arrangement position of a virtual grid body on the initial reference spline;
and based on the setting positions of all line points on the target reference spline, arranging all virtual grid bodies in the plurality of virtual grid bodies on the target reference spline to generate a virtual grid body combination model.
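Dividing the spline at the target distance to obtain line points might look like the following Python sketch; the convention that points start at distance 0 and stop before the spline's end is an assumption of the sketch.

```python
def line_points(spline_length, spacing):
    # Walk the spline in steps of `spacing`; each stop is a line point,
    # i.e. the arrangement position of one virtual grid body.
    points = []
    distance = 0.0
    while distance < spline_length:
        points.append(distance)
        distance += spacing
    return points
```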
In an embodiment, the obtaining spatial position information of each virtual grid body in the virtual grid body combination model to obtain a plurality of spatial position information includes:
and sequentially performing cyclic processing on each line point of the target reference spline based on the number of the grid bodies, the target distance and the current cyclic times to obtain the spatial position information of each line point of the target reference spline, and taking the spatial position information as the spatial position information of the virtual grid body corresponding to the line point.
In an embodiment, the sequentially performing a cyclic process on each line point of the target reference spline based on the number of mesh bodies, the target distance and the current cycle number to obtain spatial position information of each line point of the target reference spline, and using the spatial position information as spatial position information of a virtual mesh body corresponding to the line point, where the method includes:
sequentially performing cyclic processing on each line point on the target reference spline based on the number of the grid bodies, the target distance and the current cyclic times to obtain the spatial position information of each line point on the target reference spline;
and binding the spatial position information of each line point of the target reference spline with the virtual grid body corresponding to the line point respectively to serve as the spatial position information of the virtual grid body corresponding to the line point.
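The cyclic processing and binding described above can be sketched as a loop over cycle counts. Here `sample_at` is a hypothetical stand-in for an engine call that returns the spline position at a given distance, and the distance formula `cycle * spacing` is an assumption.

```python
def bind_positions(sample_at, mesh_count, spacing):
    # One cycle per grid body: on cycle i, sample the target reference
    # spline at distance i * spacing and bind that spatial position to
    # the virtual grid body corresponding to that line point.
    bound = {}
    for cycle in range(mesh_count):   # `cycle` is the current cycle count
        bound[cycle] = sample_at(cycle * spacing)
    return bound

# Stand-in sampler: a straight spline running along the x-axis.
def straight(distance):
    return (distance, 0.0, 0.0)
```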
In an embodiment, further comprising:
determining a material color value of each virtual grid body in the virtual grid body combination model based on the current circulation times of each virtual grid body in the virtual grid body combination model and the number of the grid bodies;
and associating each virtual grid body in the virtual grid body combination model with a corresponding material color value, so that each virtual grid body in the virtual grid body combination model has the material effect of the corresponding material color value.
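One way to derive a per-body material colour from the current cycle count and the grid body count is to normalise the index into [0, 1); both the normalisation and the greyscale mapping below are illustrative assumptions rather than the scheme fixed by the embodiment.

```python
def material_color_value(cycle_index, mesh_count):
    # Normalise the cycle count by the number of grid bodies, then use the
    # scalar as a greyscale RGB value so each body along the spline gets
    # a distinct shade (a gradient lookup would work the same way).
    t = cycle_index / mesh_count
    return (t, t, t)
```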
In an embodiment, the adjusting the particle emission parameter of the particle emitter based on the plurality of spatial location information and the specified texture information to obtain the target particle emitter includes:
adjusting particle emission parameters of the particle emitter based on the plurality of spatial position information to obtain a processed particle emitter;
acquiring specified texture information, and adjusting particle emission parameters of the processed particle emitter based on the specified texture information to obtain the target particle emitter.
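The two-stage adjustment above — first the spatial positions, then the specified texture — can be sketched over a plain parameter dictionary; the key names are hypothetical.

```python
def adjust_emitter(emitter_params, positions, texture_info):
    params = dict(emitter_params)            # leave the original untouched
    # Stage 1: write the spatial positions into the emission parameters,
    # yielding the "processed" emitter.
    params["spawn_positions"] = list(positions)
    # Stage 2: apply the specified texture, yielding the target emitter.
    params["particle_texture"] = texture_info
    return params
```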
In an embodiment, the method further comprises:
determining an adjusted position of the virtual mesh body on the initial reference spline in response to a model adjustment instruction for the virtual mesh body combination model;
based on the adjusted position of the virtual grid body, updating the virtual grid body combination model to obtain an updated virtual grid body combination model;
adjusting the target particle emitter based on the updated virtual grid body combination model to obtain a processed target particle emitter;
and calling the processed target particle emitter to generate a new special effect animation.
In an embodiment, the adjusting the target particle emitter based on the updated virtual grid body combination model to obtain a processed target particle emitter includes:
according to the adjusted spatial position information of each virtual grid body in the updated virtual grid body combination model, adjusting the particle emission parameters of the target particle emitter to obtain a processed target particle emitter;
and calling the processed target particle emitter, and emitting a plurality of particles with target textures at each adjusted spatial position through the processed target particle emitter so as to form a new special effect animation.
For the specific implementation of each operation above, reference may be made to the foregoing embodiments; details are not repeated here.
The storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Since the storage medium stores a computer program capable of executing the steps in any of the special effect animation processing methods provided by the embodiments of the present application, it can: generate a virtual grid body combination model from an initial reference spline and a plurality of virtual grid bodies; acquire the spatial position information of each virtual grid body in the virtual grid body combination model to obtain a plurality of pieces of spatial position information; adjust the particle emission parameters of a particle emitter based on the plurality of pieces of spatial position information and specified texture information to obtain a target particle emitter; and finally emit, through the target particle emitter, a plurality of target particles with target textures at each spatial position, so that a target special effect animation is formed from the plurality of target particles. In the embodiment of the present application, the special effect animation can be produced entirely within the virtual engine, which shortens the production time. Moreover, after the virtual grid body combination model is produced with the spline component and the grid body component, the particle emitter forms the target special effect animation according to that model, and the model can be adjusted in real time, so that the target special effect animation is also adjusted in real time. This improves the reuse rate of special effect animations and the efficiency of their production.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
The processing method, apparatus, computer device, and storage medium for special effect animation provided in the embodiments of the present application have been described in detail above, and specific examples have been used to illustrate the principles and implementations of the present application; the above description of the embodiments is intended only to help understand the technical solutions and core idea of the present application. Those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
Claims (13)
1. A method for processing a special effect animation, the method comprising:
generating a virtual grid body combination model according to the initial reference spline line and the plurality of virtual grid bodies;
acquiring the space position information of each virtual grid body in the virtual grid body combination model to obtain a plurality of space position information;
adjusting particle emission parameters of the particle emitter based on the plurality of spatial position information and the specified texture information to obtain a target particle emitter;
and transmitting a plurality of target particles with target textures at each spatial position through the target particle transmitter so as to form a target special effect animation according to the plurality of target particles.
2. The method for processing the special effects animation according to claim 1, wherein generating a virtual mesh body combination model according to the initial reference spline and the plurality of virtual mesh bodies comprises:
and based on the line information of the initial reference spline, arranging and combining each virtual grid body in the plurality of virtual grid bodies on the initial reference spline to generate a virtual grid body combination model, wherein the line information characterizes the line attribute of the initial reference spline.
3. The method of claim 2, wherein the line properties of the initial reference spline include at least a line length;
the generating a virtual grid body combination model based on the line information of the initial reference spline line by arranging and combining each virtual grid body in the plurality of virtual grid bodies on the initial reference spline line comprises the following steps:
determining a target distance based on the line length of the initial reference spline and the grid number of the plurality of virtual grid bodies, wherein the target distance is a distance distributed and set on the initial reference spline by each virtual grid body in the plurality of virtual grid bodies;
and according to the target distance, arranging and combining the plurality of virtual grid bodies on the initial reference spline line to generate a virtual grid body combination model.
4. The method for processing a special effect animation according to claim 3, wherein the step of generating a virtual mesh combination model by arranging and combining the plurality of virtual meshes on the initial reference spline according to the target pitch comprises the steps of:
dividing the initial reference spline according to the target distance to obtain a target reference spline with a plurality of line points, wherein each line point is used for indicating the arrangement position of a virtual grid body on the initial reference spline;
and based on the setting positions of all line points on the target reference spline, arranging all virtual grid bodies in the plurality of virtual grid bodies on the target reference spline to generate a virtual grid body combination model.
5. The method for processing a special effect animation according to claim 3, wherein the obtaining spatial position information of each virtual grid body in the virtual grid body combination model to obtain a plurality of spatial position information includes:
and sequentially performing cyclic processing on each line point of the target reference spline based on the number of the grid bodies, the target distance and the current cyclic times to obtain the spatial position information of each line point of the target reference spline, and taking the spatial position information as the spatial position information of the virtual grid body corresponding to the line point.
6. The method for processing special effect animation according to claim 5, wherein the sequentially performing cyclic processing on each line point of the target reference spline based on the number of meshes, the target distance and the current cycle number to obtain spatial position information of each line point of the target reference spline, and using the spatial position information as the spatial position information of the virtual mesh corresponding to the line point comprises:
sequentially performing cyclic processing on each line point on the target reference spline based on the number of the grid bodies, the target distance and the current cyclic times to obtain the spatial position information of each line point on the target reference spline;
and binding the spatial position information of each line point of the target reference spline with the virtual grid body corresponding to the line point respectively to serve as the spatial position information of the virtual grid body corresponding to the line point.
7. The method for processing a special effect animation according to claim 5, further comprising:
determining a material color value of each virtual grid body in the virtual grid body combination model based on the current circulation times of each virtual grid body in the virtual grid body combination model and the number of the grid bodies;
and associating each virtual grid body in the virtual grid body combination model with a corresponding material color value, so that each virtual grid body in the virtual grid body combination model has the material effect of the corresponding material color value.
8. The method for processing a special effect animation according to claim 1, wherein adjusting the particle emission parameters of the particle emitter based on the plurality of spatial location information and the specified texture information to obtain the target particle emitter comprises:
adjusting particle emission parameters of the particle emitter based on the plurality of spatial position information to obtain a processed particle emitter;
acquiring specified texture information, and adjusting particle emission parameters of the processed particle emitter based on the specified texture information to obtain the target particle emitter.
9. The method for processing a special effect animation according to claim 1, wherein the method further comprises:
determining an adjusted position of the virtual mesh body on the initial reference spline in response to a model adjustment instruction for the virtual mesh body combination model;
based on the adjusted position of the virtual grid body, updating the virtual grid body combination model to obtain an updated virtual grid body combination model;
adjusting the target particle emitter based on the updated virtual grid body combination model to obtain a processed target particle emitter;
and calling the processed target particle emitter to generate a new special effect animation.
10. The method for processing the special effect animation according to claim 9, wherein the adjusting the target particle emitter based on the updated virtual grid body combination model to obtain a processed target particle emitter comprises:
according to the adjusted spatial position information of each virtual grid body in the updated virtual grid body combination model, adjusting the particle emission parameters of the target particle emitter to obtain a processed target particle emitter;
and calling the processed target particle emitter, and emitting a plurality of particles with target textures at each adjusted spatial position through the processed target particle emitter so as to form a new special effect animation.
11. A processing apparatus for special effects animation, the apparatus comprising:
the generating unit is used for generating a virtual grid body combination model according to the initial reference spline line and the plurality of virtual grid bodies;
the acquisition unit is used for acquiring the space position information of each virtual grid body in the virtual grid body combination model to obtain a plurality of space position information;
the adjusting unit is used for adjusting particle emission parameters of the particle emitter based on the plurality of spatial position information and the specified texture information so as to obtain a target particle emitter;
and the processing unit is used for transmitting a plurality of target particles with target textures at each space position through the target particle transmitter so as to form target special effect animation according to the plurality of target particles.
12. A computer device comprising a processor, a memory and a computer program stored on the memory and capable of running on the processor, which when executed by the processor implements the method of processing a special effect animation as claimed in any one of claims 1 to 10.
13. A computer-readable storage medium, on which a computer program is stored, which when being executed by a processor implements the method of processing a special effect animation according to any of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310004908.9A CN116385609A (en) | 2023-01-03 | 2023-01-03 | Method and device for processing special effect animation, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116385609A true CN116385609A (en) | 2023-07-04 |
Family
ID=86968186
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310004908.9A Pending CN116385609A (en) | 2023-01-03 | 2023-01-03 | Method and device for processing special effect animation, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116385609A (en) |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |