CN115082600A - Animation production method, animation production device, computer equipment and computer readable storage medium - Google Patents

Animation production method, animation production device, computer equipment and computer readable storage medium

Info

Publication number
CN115082600A
CN115082600A
Authority
CN
China
Prior art keywords
group
combined
combined group
layer
new
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210626173.9A
Other languages
Chinese (zh)
Inventor
陈熙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210626173.9A
Publication of CN115082600A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/50 - Lighting effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application disclose an animation production method, an animation production device, a computer device, and a computer-readable storage medium. A plurality of combined groups is created in special-effect production software, and designated combined groups are rendered with particle plug-ins, so that a fluid effect can be produced within a single piece of special-effect production software, which shortens the production time of the fluid effect and improves production efficiency. Moreover, the produced fluid effect can be adapted to other material pictures: a fluid effect corresponding to a new material picture is generated simply by replacing the material picture in the existing fluid effect with the new one, which saves production time and improves the reuse rate of the fluid special effect.

Description

Animation production method, animation production device, computer equipment and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular to an animation production method, an animation production device, a computer device, and a computer-readable storage medium.
Background
With the continuous development of computer communication technology, terminals such as smart phones, tablet computers, and notebook computers have become widely popularized and applied. These terminals are developing in diversified and personalized directions and are increasingly indispensable in people's life and work. To meet people's pursuit of a richer spiritual life, entertainment applications that can run on these terminals have emerged, such as game applications; and with the vigorous development of online games, people's requirements for the realism of the virtual scenes displayed by computer devices are becoming higher and higher.
In animation production, in order to achieve a fluid effect for an object, a particle matrix is usually created in 3D modeling and rendering software (such as 3D Studio Max) and the relevant values of the particle system are adjusted. After a preliminary particle form is achieved, the desired fluid form is obtained using the fluid rendering effect of the 3D modeling and rendering software, and the result is finally imported into nonlinear special-effect production software (Adobe After Effects, AE) for adjustment to obtain the target fluid effect of the object. In the prior art, the fluid effect therefore has to be repeatedly adjusted and rendered across two pieces of software, which makes the production process cumbersome; moreover, switching between and adjusting in the two pieces of software requires lengthy solving and rendering, so producing the fluid effect is time-consuming and the production efficiency is low.
Disclosure of Invention
The embodiments of the present application provide an animation production method, an animation production device, a computer device, and a computer-readable storage medium. A plurality of combined groups can be created in special-effect production software, and designated combined groups are rendered with particle plug-ins, so that a fluid effect can be produced within a single piece of special-effect production software, which shortens the production time of the fluid effect and improves production efficiency.
The embodiment of the application provides an animation production method, which comprises the following steps:
obtaining a shape mask layer, and performing mask processing on a specified material picture based on the shape mask layer to generate a first combined group, wherein the shape mask layer has a plurality of frames, and the mask area of each frame of the shape mask layer is different;
copying the first combined group, and performing reverse mask processing on the copied first combined group and the first combined group to obtain a second combined group;
performing a copying process on the shape mask layer, and performing an inversion mask process on the copied shape mask layer and the second combined group to obtain a third combined group;
setting a particle effect parameter required for realizing a target dynamic effect for the third combined group to obtain a dynamic effect combined group, wherein the dynamic effect combined group is the third combined group with the target dynamic effect;
and generating a target animation corresponding to the specified material picture based on the first combination group, the second combination group and the dynamic effect combination group, wherein the display level of the first combination group is lower than that of the second combination group, and the display level of the second combination group is lower than that of the dynamic effect combination group.
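The mask relationships among the steps above can be illustrated with a minimal, self-contained Python sketch in which a layer is modeled simply as the set of pixel coordinates it leaves visible. All names, the 6x6 picture, and the frame sizes are hypothetical illustrations, not the patent's implementation or any After Effects API.

```python
def apply_mask(picture, mask_area):
    """Masking (as in the first step): keep only the pixels inside the mask area."""
    return picture & mask_area

def apply_inverted_mask(picture, mask_area):
    """Inverted masking (second and third steps): keep only the pixels outside it."""
    return picture - mask_area

# A 6x6 "material picture" and the shape mask frame shown at the current moment.
picture = {(x, y) for x in range(6) for y in range(6)}
frame_now = {(x, y) for x in range(1, 5) for y in range(1, 5)}

first_group = apply_mask(picture, frame_now)

# The copied first combined group starts earlier (frame staggering), so at the
# same moment it already shows a later, larger mask frame.
frame_later = {(x, y) for x in range(6) for y in range(6)}
staggered_copy = apply_mask(picture, frame_later)

# Inverted masking leaves only the ring between the two frames, which is the
# "diffusion edge" of the liquid effect.
second_group = apply_inverted_mask(staggered_copy, first_group)

print(len(first_group), len(second_group))  # 16 20
```

The display-level ordering in the last step then stacks the full picture, the edge ring, and the particle-rendered group from bottom to top.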
Correspondingly, the embodiment of the present application further provides an animation device, which includes:
an acquisition unit configured to acquire a shape mask layer having a plurality of frames, each frame having a different mask region, and perform mask processing on a specified material picture based on the shape mask layer to generate a first combined group;
a first processing unit, configured to perform copy processing on the first combined group, and perform inversion mask processing on the first combined group obtained through copy processing and the first combined group to obtain a second combined group;
a second processing unit configured to perform a copy process on the shape mask layer, and perform an inversion mask process on the shape mask layer obtained by the copy process and the second combined group to obtain a third combined group;
a setting unit, configured to set a particle effect parameter required for achieving a target dynamic effect for the third combined group, so as to obtain a dynamic effect combined group, where the dynamic effect combined group is the third combined group with the target dynamic effect;
a generating unit configured to generate a target animation corresponding to the specified material picture based on the first combination group, the second combination group, and the dynamic effect combination group, wherein a display hierarchy of the first combination group is lower than a display hierarchy of the second combination group, and the display hierarchy of the second combination group is lower than a display hierarchy of the dynamic effect combination group.
In some embodiments, the apparatus further comprises:
a response unit for forming the designated material pictures into a material composition group in response to the composition group creation instruction;
in some embodiments, the apparatus further comprises:
and the calling unit is used for calling the specified material picture from the material combination group according to the mask processing instruction when the mask processing instruction is detected to be received, and carrying out mask processing on the specified material picture based on the shape mask layer to generate a first combination group.
In some embodiments, the apparatus further comprises:
a replacement unit configured to replace the material combination group with a replacement material combination group when it is detected that a material replacement instruction is received, wherein the replacement material combination group is a combination group formed by a new specified material picture.
In some embodiments, the apparatus further comprises:
and the updating unit is used for respectively carrying out material updating processing on the first combination group, the second combination group and the third combination group based on the replacement material combination group to obtain a new first combination group, a new second combination group and a new third combination group.
In some embodiments, the apparatus further comprises:
a first setting subunit, configured to set, for the new third combined group, a particle effect parameter required to achieve a target dynamic effect, so as to obtain a new dynamic effect combined group, where the new dynamic effect combined group is a new third combined group with the target dynamic effect;
a first generation subunit, configured to generate a target animation corresponding to the new specified material picture based on the new first combination group, the new second combination group, and the new dynamic effect combination group, where a display level of the new first combination group is lower than a display level of the new second combination group, and the display level of the new second combination group is lower than the display level of the new dynamic effect combination group.
In some embodiments, the apparatus further comprises:
the creating unit is used for responding to the layer creating instruction and creating a shape layer with a specified shape according to the layer creating instruction;
and the second setting subunit is configured to, in response to a key frame setting instruction for the shape layer, set the size of the specified shape at multiple key frames of the shape layer to obtain a mask combination group, and use the mask combination group as the shape mask layer, where the specified shape of the shape layer changes size as the display time changes.
In some embodiments, the apparatus further comprises:
a first processing subunit, configured to perform frame staggering processing on the copied first combined group based on a display start time point of the first combined group to obtain a processed copied first combined group, where a display start time point of the processed copied first combined group is earlier than a display start time point of the first combined group;
in some embodiments, the apparatus further comprises:
and the second processing subunit is used for performing reverse mask processing on the first combined group obtained by copying after the processing and the first combined group to obtain a second combined group.
In some embodiments, the apparatus further comprises:
a third processing subunit, configured to perform frame-staggering processing on the copied shape mask layer based on a display start time point of the second combined group to obtain a processed copied shape mask layer, where a display start time point of the processed copied shape mask layer is later than the display start time point of the second combined group;
in some embodiments, the apparatus further comprises:
a fourth processing subunit, configured to perform an inversion masking process on the processed copied shape mask layer and the second combined group to obtain a third combined group.
In some embodiments, the apparatus further comprises:
a third setting subunit, configured to set, in response to a first effect addition instruction, a target dynamic particle parameter for the third combined group to obtain a first particle layer;
a fourth setting subunit, configured to set, in response to a second effect addition instruction, a target diffusion particle parameter for the third combined group to obtain a second particle layer;
a fifth setting subunit, configured to set, in response to a third effect addition instruction, a target light-sensation particle parameter for the third combined group to obtain a third particle layer;
and a second generation subunit, configured to form a dynamic effect combination group based on the first particle map layer, the second particle map layer, and the third particle map layer, where a display level of the first particle map layer is lower than a display level of the second particle map layer, and a display level of the second particle map layer is lower than a display level of the third particle map layer.
Accordingly, embodiments of the present application further provide a computer device, including a processor, a memory, and a computer program stored on the memory and capable of running on the processor, where the computer program, when executed by the processor, implements the steps of any of the animation methods described above.
Furthermore, an embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the animation methods described above.
The embodiments of the present application provide an animation production method, an animation production device, a computer device, and a computer-readable storage medium. A plurality of combined groups is created in special-effect production software, and designated combined groups are rendered with particle plug-ins, so that a fluid effect can be produced within a single piece of special-effect production software, which shortens the production time of the fluid effect and improves production efficiency. Moreover, the produced fluid effect can be adapted to other material pictures: a fluid effect corresponding to a new material picture is generated simply by replacing the material picture in the existing fluid effect with the new one, which saves production time and improves the reuse rate of the fluid special effect.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a system diagram of an animation device according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of an animation method according to an embodiment of the present application.
FIG. 3 is a schematic diagram of a shape masking layer according to an embodiment of the present disclosure.
Fig. 4 is a scene schematic diagram of an animation method according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an animation apparatus according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiments of the present application provide an animation production method, an animation production device, a computer device, and a computer-readable storage medium. Specifically, the animation production method of the embodiments of the present application may be executed by a computer device, where the computer device may be a terminal. The terminal may be a device such as a smart phone, tablet computer, notebook computer, touch-screen device, game console, Personal Computer (PC), or Personal Digital Assistant (PDA). The terminal may also run a client, such as a video application client, a music application client, a game application client, a browser client carrying a game program, or an instant messaging client.
The animation production method provided by the embodiments of the present application can be implemented in nonlinear special-effect production software (AE). This graphics and video processing software supports graphic design and the production of video special effects, and belongs to the category of layer-based post-production software.
Referring to fig. 1, fig. 1 is a schematic view of a scenario of an animation system according to an embodiment of the present application, including a computer device, where the computer device may be connected to a network, and the network includes network entities such as a router and a gateway.
The computer equipment can obtain a shape mask layer, and mask a specified material picture based on the shape mask layer to generate a first combined group, wherein the shape mask layer has a plurality of frames, and the mask area of each frame of the shape mask layer is different; copying the first combined group, and performing reverse mask processing on the copied first combined group and the first combined group to obtain a second combined group; performing a copying process on the shape mask layer, and performing an inversion mask process on the copied shape mask layer and the second combined group to obtain a third combined group; setting particle effect parameters required for realizing a target dynamic effect for the third combined group through a particle plug-in to obtain a dynamic effect combined group, wherein the dynamic effect combined group is the third combined group with the target dynamic effect; and generating a target animation corresponding to the specified material picture based on the first combination group, the second combination group and the dynamic effect combination group, wherein the display level of the first combination group is lower than that of the second combination group, and the display level of the second combination group is lower than that of the dynamic effect combination group.
It should be noted that the scene schematic diagram of the animation system shown in fig. 1 is merely an example, and the animation system and the scene described in the embodiment of the present application are for more clearly illustrating the technical solution of the embodiment of the present application, and do not form a limitation on the technical solution provided in the embodiment of the present application, and it is known by a person skilled in the art that as the animation system evolves and a new service scene appears, the technical solution provided in the embodiment of the present application is also applicable to similar technical problems.
Embodiments of the present application provide an animation production method, an animation production device, a computer device, and a computer-readable storage medium. The animation production method may be executed on a terminal such as a smart phone, tablet computer, notebook computer, or personal computer, and may be applied in animation application software such as Adobe After Effects (AE). The animation production method, device, terminal, and storage medium will be described in detail below. It should be noted that the order in which the embodiments are described is not intended to limit the preferred order of the embodiments.
Referring to fig. 2, fig. 2 is a schematic flow chart of an animation production method according to an embodiment of the present application, and a specific flow may include the following steps 101 to 105:
and 101, obtaining a shape mask layer, and performing mask processing on the specified material picture based on the shape mask layer to generate a first combined group, wherein the shape mask layer has a plurality of frames, and the mask area of each frame of the shape mask layer is different.
A mask protects a selected area so that editing operations apply only to the unmasked portion of the layer. The region covered by the mask is unaffected when the user edits the layer, and that region remains displayed on the layer as-is. In a mask, the fully protected area is completely transparent, the selected area is opaque, and gray values fall in between. Referring to fig. 3, fig. 3 is a schematic diagram of a shape mask layer, which includes a display area with a specified shape for partially displaying a material picture, and a mask area for hiding the parts of the material picture beyond the specified shape.
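The transparency semantics just described can be sketched numerically: a mask value of 0.0 is the fully protected (transparent) area, 1.0 is the selected (opaque) area, and values in between are gray. The helper name is a hypothetical illustration, not an AE function.

```python
def masked_alpha(pixel_alpha, mask_value):
    """Opacity of a pixel after masking: the layer's own alpha scaled by the
    mask value (0.0 = protected/transparent, 1.0 = selected/opaque)."""
    return pixel_alpha * mask_value

assert masked_alpha(1.0, 0.0) == 0.0   # protected area: pixel fully hidden
assert masked_alpha(1.0, 1.0) == 1.0   # selected area: pixel fully shown
assert masked_alpha(1.0, 0.5) == 0.5   # gray value: pixel partially shown
```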
In order to enable a quick replacement operation on the material picture, before the step of "masking the specified material picture based on the shape mask layer, generating the first combined group", the method may include:
forming a material composition group by the appointed material pictures in response to a composition group creation instruction;
the method for masking the designated material picture based on the shape mask layer can comprise the following steps:
when a mask processing instruction is detected to be received, the specified material picture is called from the material combination group according to the mask processing instruction, and the mask processing is carried out on the specified material picture based on the shape mask layer to generate a first combination group.
In an embodiment, after the step of "generating the target animation corresponding to the specified material picture based on the first combination group, the second combination group, and the dynamic effect combination group", the method may include:
and when detecting that a material replacing instruction is received, replacing the material combination group with a replacing material combination group, wherein the replacing material combination group is a combination group formed by a new specified material picture.
Optionally, after the step of replacing the material composition group with a replacement material composition group, the method may comprise:
and respectively carrying out material updating processing on the first combination group, the second combination group and the third combination group based on the replacement material combination group to obtain a new first combination group, a new second combination group and a new third combination group.
Specifically, after the step "obtaining a new first combined group, a new second combined group, and a new third combined group", the method may include:
setting particle effect parameters required for realizing a target dynamic effect for the new third combined group to obtain a new dynamic effect combined group, wherein the new dynamic effect combined group is a new third combined group with the target dynamic effect;
and generating a target animation corresponding to the new specified material picture based on the new first combined group, the new second combined group and the new dynamic effect combined group, wherein the display level of the new first combined group is lower than that of the new second combined group, and the display level of the new second combined group is lower than that of the new dynamic effect combined group.
And setting the particle effect parameters required for realizing the target dynamic effect for the new third combined group through the particle plug-in unit to obtain a new dynamic effect combined group.
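The replacement flow above can be sketched in Python under the assumption that each combined group references the material composition group rather than embedding its pixels, so swapping the material and rebuilding the chain reuses all mask and particle setup. Every name and file name here is hypothetical.

```python
def make_project(material_picture):
    """Rebuild the chain of combined groups from one material picture; the mask
    and particle setup is structural, so only the material reference changes."""
    first = {"kind": "mask", "material": material_picture}
    second = {"kind": "inverted_mask", "input": first}
    third = {"kind": "inverted_mask", "input": second}
    return {"material_group": material_picture, "groups": [first, second, third]}

project = make_project("cloth_material.png")

# A material replacement instruction swaps in a new material composition group
# and regenerates the first, second, and third combined groups from it.
replaced = make_project("new_material.png")

print(replaced["groups"][0]["material"])  # new_material.png
```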
For the user to set the shape of the shape mask layer autonomously, before the step "obtain shape mask layer", the method may comprise:
responding to a layer creating instruction, and creating a shape layer with a specified shape according to the layer creating instruction;
responding to a key frame setting instruction for the shape layer, setting the size of the specified shape at a plurality of key frames of the shape layer to obtain a mask combination group, and using the mask combination group as the shape mask layer, wherein the specified shape of the shape layer changes size as the display time changes.
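The key frame setup above makes the mask area grow with display time. A minimal sketch of that behavior, assuming linear interpolation between key frames (AE also supports other easings) and using purely hypothetical names:

```python
def mask_size_at(time, keyframes):
    """Linearly interpolate the specified shape's size between (time, size)
    keyframes, clamping outside the keyframed range."""
    keyframes = sorted(keyframes)
    if time <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, s0), (t1, s1) in zip(keyframes, keyframes[1:]):
        if t0 <= time <= t1:
            return s0 + (s1 - s0) * (time - t0) / (t1 - t0)
    return keyframes[-1][1]

keys = [(0.0, 10.0), (2.0, 100.0)]   # small at 0 s, large at 2 s
assert mask_size_at(0.0, keys) == 10.0
assert mask_size_at(1.0, keys) == 55.0
assert mask_size_at(2.0, keys) == 100.0
```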
And 102, copying the first combined group, and performing inversion mask processing on the copied first combined group and the first combined group to obtain a second combined group.
In order to obtain the effect of diffusion edge appearing in liquid state, before the step of "performing a copy process on the first combined group, and performing an inversion masking process on the copied first combined group and the first combined group to obtain a second combined group", the method may include:
performing frame-staggering processing on the copied first combined group based on a display start time point of the first combined group to obtain a processed copied first combined group, wherein the display start time point of the processed copied first combined group is earlier than that of the first combined group;
performing inversion masking processing on the first combined group obtained by copying and the first combined group to obtain a second combined group, including:
and performing inversion mask processing on the first combined group obtained by copying after the processing and the first combined group to obtain a second combined group.
For example, referring to fig. 4, fig. 4 is a scene schematic diagram of an animation production method. In an embodiment of the present application, inverted mask processing may be performed on the first combined group and the processed copied first combined group to obtain the second combined group. The first combined group includes a material picture and a shape mask layer; the shape mask layer includes a display area of a specified shape, which partially displays the material picture, and a mask area, which hides the parts of the material picture beyond the specified shape. The first combined group includes three frames: the first frame is displayed at second 0, the second frame at second 1, and the third frame at second 2, and the display area of the specified shape grows from frame to frame as the display time increases. The processed copied first combined group also has three frames; because its display start time is one second earlier, its first frame is displayed at second -1, its second frame at second 0, and its third frame at second 1, with the display area likewise growing over time. The second combined group is obtained by performing inverted mask processing on the first combined group and the processed copied first combined group; it includes two frames, the first displayed at second 0 and the second at second 1, and it displays the edge of the material picture, producing the diffusion-edge appearance of a liquid convergence effect.
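The staggered timing in this example can be sketched with a small helper: shifting a group's display start earlier means that, at the same clock time, it already shows a later frame. The helper name and the one-frame-per-second timing are illustrative assumptions drawn from the example.

```python
def frame_index(clock_time, start_time):
    """Which frame a one-frame-per-second group shows at a given clock time."""
    return max(0, int(clock_time - start_time))

t = 1.0
original_frame = frame_index(t, start_time=0.0)   # group starting at second 0
copied_frame = frame_index(t, start_time=-1.0)    # copy staggered 1 s earlier

# The staggered copy is always one frame ahead, so inverted masking between the
# two groups exposes the ring between consecutive mask frames.
assert copied_frame == original_frame + 1
```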
103, performing a copy process on the shape mask layer, and performing an inversion mask process on the copied shape mask layer and the second combined group to obtain a third combined group.
Further, before the step of "performing a copy process on the shape mask layer, and performing an inversion mask process on the copied shape mask layer and the second combined group to obtain a third combined group", the method may include:
performing frame-staggering processing on the copied shape mask layer based on the display start time point of the second combined group to obtain a processed copied shape mask layer, wherein the display start time point of the processed copied shape mask layer is later than the display start time point of the second combined group;
performing an inversion masking process on the copied shape mask layer and the second combined set to obtain a third combined set, including:
and performing reverse mask processing based on the processed copied shape mask layer and the second combined group to obtain a third combined group.
And 104, setting a particle effect parameter required for realizing a target dynamic effect for the third combined group to obtain a dynamic effect combined group, wherein the dynamic effect combined group is the third combined group with the target dynamic effect.
In order to optimize the liquid fluid effect of the material picture, the particle plug-ins may include a dynamic effect particle plug-in, a diffusion effect particle plug-in, and a light sensation effect particle plug-in. Specifically, the step of setting, through the particle plug-ins, the particle effect parameters required for realizing the target dynamic effect for the third combined group to obtain the dynamic effect combined group may include:
responding to a first effect adding instruction, and setting a target dynamic particle parameter for the third combined group to obtain a first particle layer;
responding to a second effect adding instruction, and setting a target diffusion particle parameter for the third combined group to obtain a second particle layer;
responding to a third effect adding instruction, and setting a target light sensation particle parameter for the third combined group to obtain a third particle layer;
and forming a dynamic effect combination group based on the first particle layer, the second particle layer and the third particle layer, wherein the display level of the first particle layer is lower than that of the second particle layer, and the display level of the second particle layer is lower than that of the third particle layer.
In a specific embodiment, the particle plug-ins may include a dynamic effect particle plug-in, a diffusion effect particle plug-in, and a light sensation effect particle plug-in, where target dynamic particle parameters may be set for the third combined group through the dynamic effect particle plug-in to obtain a first particle layer; target diffusion particle parameters may be set for the third combined group through the diffusion effect particle plug-in to obtain a second particle layer; and target light sensation particle parameters may be set for the third combined group through the light sensation effect particle plug-in to obtain a third particle layer.
Step 105: generating a target animation corresponding to the specified material picture based on the first combination group, the second combination group and the dynamic effect combination group, wherein the display level of the first combination group is lower than that of the second combination group, and the display level of the second combination group is lower than that of the dynamic effect combination group.
To further explain the animation production method provided in the embodiment of the present application, its application in a specific implementation scenario is described below as an example, where the specific application scenario is as follows:
(1) The computer device, in response to a user operation in the AE software, creates a shape layer in AE, sets keyframes that scale the shape layer from small to large according to the target fluid effect, and composites it into a new composition group named "shape mask layer".
(2) Through the computer device, the user imports the underlying material pictures to be used into the AE software, composites them into a new composition group named "material composition", applies an alpha matte between this composition group and the shape mask layer, and composites the matted layers into a layer named "Appearance 1", which serves as the initial state of the fluid effect.
(3) Duplicate the "Appearance 1" layer and drag the copy a few frames earlier so that it displays at a frame offset from "Appearance 1"; apply an alpha inverted matte between "Appearance 1" and the copy, and composite the result into a new composition group named "Appearance 2". Then duplicate the shape mask layer, pull the copy forward a few frames, apply an alpha inverted matte based on "Appearance 2" and the copied shape mask layer, and composite the result into a new composition group named "Appearance 3", producing the spreading-edge effect of liquid as it appears; this serves as the particle emission template for the particle layer.
(4) Create a new solid layer named "particles" and add the particle plug-in to it; change the emitter type to layer mode, set the emission layer to "Appearance 3" in the layer emitter options, and adjust the layer RGB sampling and the number of particles emitted per second.
(5) Adjust the life of each particle, the particle size and its random value, and tune the curves of particle size and opacity over time, so that each particle's display naturally changes from 0 to 1 and back to 0.
(6) Turn on the shader's shadow reflection so that the liquid formed by the particles has a shadow effect, adding texture to the overall fluid effect.
(7) Add a random flow effect to the particles by adjusting the turbulence field values in the physics options and the related values in the auxiliary system.
(8) Add the CC Vector Blur plug-in and a sharpen effect to give the fluid a slightly viscous look.
(9) Create another solid layer named "dust particles", add the particle plug-in, change the emitter type to layer mode, and set the emission layer to "Appearance 3" in the layer emitter options; then adjust the layer RGB sampling, the number of particles per second, the particle life and size, and the turbulence field values, so that the particle flow is more random and pleasing.
(10) Create a new adjustment layer: add a Curves effect; add the S_EmbossDistort plug-in to reinforce the embossed, distorted sense of volume; add a Vector Blur plug-in to slightly blur the edges and simulate the liquid boundary; add the S_MathOps plug-in to raise the brightness of the effect; and add S_Glow to strengthen the light sensation. Composite these into a new composition group named "particles".
(11) Duplicate the "particles" layer three times: use the first copy as a highlight layer to further enhance the light sensation and the second as a shadow layer to add a radial shadow effect; at the same time pull down a copy below it as shadow layer 1, invert its alpha matte, and composite them into a new composition group named "shadow". Pull down one more "Appearance 1" layer on the outer level.
(12) In the "Appearance 1" layer, add a displacement map and apply turbulent displacement with the two plug-ins; finally, keyframe the appearance and disappearance of the particles and composite everything into a new layer named "particle effect total".
(13) Pull down an "Appearance 3" layer below the "particle effect total" layer and change its blend mode.
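The 0-to-1-to-0 change described in step (5) can be sketched as follows (a simple triangular ramp is assumed here purely for illustration; the actual size and opacity curves are user-tuned in the plug-in):

```python
# Toy opacity/size factor over a particle's lifetime: ramps 0 -> 1 -> 0.
# Assumption: a symmetric triangular ramp stands in for the tuned curves.

def life_curve(t, life):
    """Return the display factor for a particle of age t (0 outside
    [0, life], peaking at 1 at mid-life)."""
    if t <= 0 or t >= life:
        return 0.0
    half = life / 2.0
    return t / half if t <= half else (life - t) / half

print(life_curve(1.0, 2.0))  # → 1.0 (peak at mid-life)
print(life_curve(1.5, 2.0))  # → 0.5 (fading out)
```

Any curve with this envelope gives the natural appear-and-vanish behavior the step describes.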
In summary, an embodiment of the present application provides an animation method, which includes obtaining a shape mask layer, and performing mask processing on a specified material picture based on the shape mask layer to generate a first combined group, where the shape mask layer has multiple frames, and a mask area of each frame of the shape mask layer is different; copying the first combined group, and performing reverse mask processing on the copied first combined group and the first combined group to obtain a second combined group; performing a copying process on the shape mask layer, and performing an inversion mask process on the copied shape mask layer and the second combined group to obtain a third combined group; setting particle effect parameters required for realizing a target dynamic effect for the third combined group through a particle plug-in to obtain a dynamic effect combined group, wherein the dynamic effect combined group is the third combined group with the target dynamic effect; and generating a target animation corresponding to the specified material picture based on the first combination group, the second combination group and the dynamic effect combination group, wherein the display level of the first combination group is lower than that of the second combination group, and the display level of the second combination group is lower than that of the dynamic effect combination group. 
According to the embodiment of the application, a plurality of combined groups can be created in the special effect production software, and the specified combined groups are rendered with the particle plug-in, so that the fluid effect can be produced within a single piece of special effect production software, shortening the production time of the fluid effect and improving its production efficiency; moreover, the produced fluid effect can be adapted to other material pictures: simply replacing the material picture in the fluid effect with a new material picture generates the fluid effect corresponding to the new picture, which saves production time and improves the reuse rate of the fluid special effect.
In order to better implement the above method, an animation apparatus may be further provided in an embodiment of the present application, and the animation apparatus may be specifically integrated in a computer device, for example, a computer device such as a terminal.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an animation device according to an embodiment of the present application, the animation device including:
an obtaining unit 201 configured to obtain a shape mask layer, which has a plurality of frames, each of which has a different mask region, and mask a specified material picture based on the shape mask layer to generate a first combined group;
a first processing unit 202, configured to perform copy processing on the first combined group, and perform inversion mask processing on the first combined group obtained through copy processing and the first combined group to obtain a second combined group;
a second processing unit 203, configured to perform a copy process on the shape mask layer, and perform an inversion mask process on the shape mask layer obtained by the copy process and the second combined group to obtain a third combined group;
a setting unit 204, configured to set a particle effect parameter required for implementing a target dynamic effect for the third combined group, so as to obtain a dynamic effect combined group, where the dynamic effect combined group is the third combined group with the target dynamic effect;
a generating unit 205 configured to generate a target animation corresponding to the specified material picture based on the first combination group, the second combination group, and the dynamic effect combination group, wherein a display hierarchy of the first combination group is lower than a display hierarchy of the second combination group, and the display hierarchy of the second combination group is lower than the display hierarchy of the dynamic effect combination group.
In some embodiments, the apparatus further comprises:
a response unit for forming the designated material pictures into a material composition group in response to the composition group creation instruction;
in some embodiments, the apparatus further comprises:
and the calling unit is used for calling the specified material picture from the material combination group according to the mask processing instruction when the mask processing instruction is detected to be received, and carrying out mask processing on the specified material picture based on the shape mask layer to generate a first combination group.
In some embodiments, the apparatus further comprises:
a replacement unit configured to replace the material combination group with a replacement material combination group when it is detected that a material replacement instruction is received, wherein the replacement material combination group is a combination group formed by a new specified material picture.
In some embodiments, the apparatus further comprises:
and the updating unit is used for respectively carrying out material updating processing on the first combination group, the second combination group and the third combination group based on the replacement material combination group to obtain a new first combination group, a new second combination group and a new third combination group.
In some embodiments, the apparatus further comprises:
a first setting subunit, configured to set, for the new third combined group, a particle effect parameter required to achieve a target dynamic effect, so as to obtain a new dynamic effect combined group, where the new dynamic effect combined group is a new third combined group with the target dynamic effect;
a first generation subunit, configured to generate a target animation corresponding to the new specified material picture based on the new first combination group, the new second combination group, and the new dynamic effect combination group, where a display level of the new first combination group is lower than a display level of the new second combination group, and the display level of the new second combination group is lower than the display level of the new dynamic effect combination group.
In some embodiments, the apparatus further comprises:
the creating unit is used for responding to the layer creating instruction and creating a shape layer with a specified shape according to the layer creating instruction;
and the second setting subunit is configured to, in response to a key frame setting instruction for the shape layer, perform size setting on the specified shapes corresponding to a plurality of key frames of the shape layer to obtain a mask combination group, and use the mask combination group as the shape mask layer, where the specified shape of the shape layer changes in size as the display time changes.
In some embodiments, the apparatus further comprises:
a first processing subunit, configured to perform frame-offset processing on the copied first combined group based on a display start time point of the first combined group to obtain a processed copied first combined group, where a display start time point of the processed copied first combined group is earlier than the display start time point of the first combined group;
in some embodiments, the apparatus further comprises:
and the second processing subunit is configured to perform inversion mask processing on the processed copied first combined group and the first combined group to obtain a second combined group.
In some embodiments, the apparatus further comprises:
a third processing subunit, configured to perform frame-offset processing on the copied shape mask layer based on a display start time point of the second combined group to obtain a processed copied shape mask layer, where a display start time point of the processed copied shape mask layer is later than the display start time point of the second combined group;
in some embodiments, the apparatus further comprises:
a fourth processing subunit, configured to perform an inversion masking process on the processed copied shape mask layer and the second combined group to obtain a third combined group.
In some embodiments, the apparatus further comprises:
a third setting subunit, configured to set target dynamic particle parameters for the third combined group in response to the first effect adding instruction, to obtain a first particle layer;
a fourth setting subunit, configured to set target diffusion particle parameters for the third combined group in response to a second effect adding instruction, to obtain a second particle layer;
a fifth setting subunit, configured to set target light sensation particle parameters for the third combined group in response to a third effect adding instruction, to obtain a third particle layer;
and a second generation subunit, configured to form a dynamic effect combination group based on the first particle map layer, the second particle map layer, and the third particle map layer, where a display level of the first particle map layer is lower than a display level of the second particle map layer, and a display level of the second particle map layer is lower than a display level of the third particle map layer.
The embodiment of the application discloses an animation production device, in which the obtaining unit 201 obtains a shape mask layer and performs mask processing on a specified material picture based on the shape mask layer to generate a first combined group; the first processing unit 202 performs copy processing on the first combined group, and performs inversion mask processing on the copied first combined group and the first combined group to obtain a second combined group; the second processing unit 203 performs copy processing on the shape mask layer, and performs inversion mask processing based on the copied shape mask layer and the second combined group to obtain a third combined group; the setting unit 204 sets the particle effect parameters required for realizing a target dynamic effect for the third combined group to obtain a dynamic effect combined group, where the dynamic effect combined group is the third combined group with the target dynamic effect; and the generation unit 205 generates a target animation corresponding to the specified material picture based on the first combined group, the second combined group and the dynamic effect combined group, where the display level of the first combined group is lower than that of the second combined group, and the display level of the second combined group is lower than that of the dynamic effect combined group.
According to the embodiment of the application, a plurality of combined groups are created in the special effect production software, and the specified combined groups are rendered with the particle plug-in, so that the fluid effect can be produced within a single piece of special effect production software, shortening the production time of the fluid effect and improving its production efficiency; moreover, the produced fluid effect can be adapted to other material pictures: simply replacing the material picture in the fluid effect with a new material picture generates the fluid effect corresponding to the new picture, which saves production time and improves the reuse rate of the fluid special effect.
Correspondingly, the embodiment of the present application further provides a computer device, where the computer device may be a terminal or a server, and the terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen device, a game console, a Personal Computer (PC), or a Personal Digital Assistant (PDA). As shown in fig. 6, fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 300 includes a processor 301 having one or more processing cores, a memory 302 having one or more computer-readable storage media, and a computer program stored on the memory 302 and executable on the processor. The processor 301 is electrically connected to the memory 302. Those skilled in the art will appreciate that the computer device configurations illustrated in the figures are not meant to be limiting of computer devices and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components may be used.
The processor 301 is a control center of the computer apparatus 300, connects various parts of the entire computer apparatus 300 by various interfaces and lines, performs various functions of the computer apparatus 300 and processes data by running or loading software programs and/or modules stored in the memory 302, and calling data stored in the memory 302, thereby monitoring the computer apparatus 300 as a whole.
In the embodiment of the present application, the processor 301 in the computer device 300 loads instructions corresponding to processes of one or more application programs into the memory 302, and the processor 301 executes the application programs stored in the memory 302 according to the following steps, so as to implement various functions:
obtaining a shape mask layer, and performing mask processing on a specified material picture based on the shape mask layer to generate a first combined group, wherein the shape mask layer has a plurality of frames, and the mask area of each frame of the shape mask layer is different;
copying the first combined group, and performing reverse mask processing on the copied first combined group and the first combined group to obtain a second combined group;
performing a copying process on the shape mask layer, and performing an inversion mask process on the copied shape mask layer and the second combined group to obtain a third combined group;
setting a particle effect parameter required for realizing a target dynamic effect for the third combined group to obtain a dynamic effect combined group, wherein the dynamic effect combined group is the third combined group with the target dynamic effect;
and generating a target animation corresponding to the specified material picture based on the first combination group, the second combination group and the dynamic effect combination group, wherein the display level of the first combination group is lower than that of the second combination group, and the display level of the second combination group is lower than that of the dynamic effect combination group.
In one embodiment, before performing the masking process on the designated material picture based on the shape mask layer to generate the first composition group, the method further includes:
forming a material composition group by the appointed material pictures in response to a composition group creation instruction;
the masking processing of the specified material picture based on the shape mask layer includes:
when a masking processing instruction is received, the specified material picture is called from the material combination group according to the masking processing instruction, and the specified material picture is masked based on the shape masking layer to generate a first combination group.
In an embodiment, after generating the target animation corresponding to the specified material picture based on the first combination group, the second combination group, and the dynamic effect combination group, the method further includes:
and when detecting that a material replacing instruction is received, replacing the material combination group with a replacing material combination group, wherein the replacing material combination group is a combination group formed by a new specified material picture.
In one embodiment, after replacing the material combination group with the replacement material combination group, the method further comprises:
and respectively carrying out material updating processing on the first combination group, the second combination group and the third combination group based on the replacement material combination group to obtain a new first combination group, a new second combination group and a new third combination group.
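For illustration only (the names below are hypothetical, not part of the claimed method), a toy model shows why a single material replacement propagates to every combined group: each group stores a reference to its source group rather than the rendered pixels, so resolving any group after the swap picks up the new picture.

```python
# Toy model of combined groups referencing their source by name.
# Hypothetical names: "first"/"second"/"third" stand for the combined
# groups, "material" for the material combination group.

groups = {
    "first":  {"source": "material", "op": "matte"},
    "second": {"source": "first",    "op": "inverted matte"},
    "third":  {"source": "second",   "op": "inverted matte"},
}

def render(compositions, groups, name):
    """Resolve a group chain down to the material picture it references."""
    node = groups.get(name)
    if node is None:
        return compositions[name]          # reached the material picture
    return (node["op"], render(compositions, groups, node["source"]))

comps = {"material": "picture_A.png"}
print(render(comps, groups, "third"))      # chain bottoms out at picture_A.png
comps["material"] = "picture_B.png"        # one swap retargets every group
print(render(comps, groups, "third"))      # chain now bottoms out at picture_B.png
```

This is the mechanism behind the claimed reuse: the effect template stays intact and only the leaf material changes.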
In an embodiment, after obtaining the new first combined group, the new second combined group, and the new third combined group, the method further includes:
setting particle effect parameters required for realizing a target dynamic effect for the new third combined group to obtain a new dynamic effect combined group, wherein the new dynamic effect combined group is a new third combined group with the target dynamic effect;
and generating a target animation corresponding to the new specified material picture based on the new first combined group, the new second combined group and the new dynamic effect combined group, wherein the display level of the new first combined group is lower than that of the new second combined group, and the display level of the new second combined group is lower than that of the new dynamic effect combined group.
In one embodiment, before obtaining the shape mask layer, the method further includes:
responding to a layer creating instruction, and creating a shape layer with a specified shape according to the layer creating instruction;
responding to a key frame setting instruction aiming at the shape layer, carrying out size setting on specified shapes corresponding to a plurality of key frames of the shape layer to obtain a mask combination group, and combining the masks into the group to be used as a shape mask layer, wherein the specified shapes of the shape layer are subjected to size change along with the change of display time.
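As an illustrative sketch (linear interpolation is an assumption; the embodiment only requires that the specified shape change size with display time), the shape's scale between two keyframes can be obtained by interpolating the keyframed values:

```python
# Toy keyframe interpolation: the mask shape's scale as a function of
# the display frame, given (frame, scale) keyframes.

def scale_at(frame, keyframes):
    """Linearly interpolate scale between sorted (frame, scale) pairs,
    clamping outside the keyframed range."""
    keyframes = sorted(keyframes)
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, s0), (f1, s1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return s0 + t * (s1 - s0)

# small-to-large scaling, e.g. 0% at frame 0 up to 100% at frame 24
print(scale_at(12, [(0, 0.0), (24, 1.0)]))  # → 0.5
```

Each frame of the resulting mask combination group thus has a different mask area, as the claim requires.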
In an embodiment, after the copying of the first combined group and before performing inversion mask processing on the copied first combined group and the first combined group to obtain a second combined group, the method further includes:
performing frame-offset processing on the copied first combined group based on the display start time point of the first combined group to obtain a processed copied first combined group, wherein the display start time point of the processed copied first combined group is earlier than that of the first combined group;
the performing inversion mask processing on the copied first combined group and the first combined group to obtain a second combined group includes:
performing inversion mask processing on the processed copied first combined group and the first combined group to obtain a second combined group.
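A toy 1-D model (assuming, purely for illustration, that the mask is a centered segment growing by one pixel per frame) shows why the frame offset followed by an inverted matte leaves just a moving edge band rather than the whole masked region:

```python
# 1-D sketch: the copied layer starts `offset` frames earlier, so at any
# display time its mask is `offset` frames further along; the inverted
# matte keeps only the band between the two mask edges.

def mask_pixels(radius):
    """'Pixels' covered by a centered 1-D mask of the given radius."""
    return set(range(-radius, radius + 1))

def edge_ring(frame, offset):
    ahead = mask_pixels(frame + offset)   # copy shown `offset` frames early
    behind = mask_pixels(frame)           # original group at this frame
    return sorted(ahead - behind)         # the spreading liquid edge

print(edge_ring(2, 1))  # → [-3, 3]
```

As the frame counter advances, this band sweeps outward, which is the spreading-edge behavior the second combined group is built to exhibit.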
In an embodiment, after the copying of the shape mask layer and before performing inversion mask processing based on the copied shape mask layer and the second combined group to obtain a third combined group, the method further includes:
performing frame-offset processing on the copied shape mask layer based on the display start time point of the second combined group to obtain a processed copied shape mask layer, wherein the display start time point of the processed copied shape mask layer is later than the display start time point of the second combined group;
the performing inversion mask processing on the copied shape mask layer and the second combined group to obtain a third combined group includes:
performing inversion mask processing based on the processed copied shape mask layer and the second combined group to obtain a third combined group.
In an embodiment, the setting, for the third combined group, a particle effect parameter required to achieve a target dynamic effect to obtain a dynamic effect combined group includes:
responding to a first effect adding instruction, and setting a target dynamic particle parameter for the third combined group to obtain a first particle layer;
responding to a second effect adding instruction, and setting a target diffusion particle parameter for the third combined group to obtain a second particle layer;
responding to a third effect adding instruction, and setting a target light sensation particle parameter for the third combined group to obtain a third particle layer;
and forming a dynamic effect combination group based on the first particle layer, the second particle layer and the third particle layer, wherein the display level of the first particle layer is lower than that of the second particle layer, and the display level of the second particle layer is lower than that of the third particle layer.
Optionally, as shown in fig. 6, the computer device 300 further includes: a touch display 303, a radio frequency circuit 304, an audio circuit 305, an input unit 306, and a power source 307. The processor 301 is electrically connected to the touch display 303, the radio frequency circuit 304, the audio circuit 305, the input unit 306, and the power source 307. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 6 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
The touch display screen 303 may be used for displaying a graphical user interface and receiving operation instructions generated by a user acting on the graphical user interface. The touch display screen 303 may include a display panel and a touch panel. The display panel may be used, among other things, to display information entered by or provided to a user and various graphical user interfaces of the computer device, which may be made up of graphics, text, icons, video, and any combination thereof. Alternatively, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The touch panel may be used to collect touch operations of a user on or near the touch panel (for example, operations of the user on or near the touch panel using any suitable object or accessory such as a finger, a stylus pen, and the like), and generate corresponding operation instructions, and the operation instructions execute corresponding programs. Alternatively, the touch panel may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 301, and can receive and execute commands sent by the processor 301. The touch panel may overlay the display panel, and when the touch panel detects a touch operation thereon or nearby, the touch panel transmits the touch operation to the processor 301 to determine the type of the touch event, and then the processor 301 provides a corresponding visual output on the display panel according to the type of the touch event. 
In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 303 to realize input and output functions. However, in some embodiments, the touch panel and the display panel can be implemented as two separate components to perform the input and output functions. That is, the touch display screen 303 may also be used as a part of the input unit 306 to implement an input function.
In this embodiment, the processor 301 executes an application program to generate a graphical interface on the touch display screen 303. The touch display screen 303 is used for presenting a graphical interface and receiving an operation instruction generated by a user acting on the graphical interface.
The radio frequency circuit 304 may be used to transmit and receive radio frequency signals so as to establish wireless communication with a network device or another computer device, and to exchange signals with that device.
The audio circuit 305 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. On one hand, the audio circuit 305 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 305 receives and converts into audio data; the audio data is then output to the processor 301 for processing and afterwards transmitted, for example, to another computer device via the radio frequency circuit 304, or output to the memory 302 for further processing. The audio circuit 305 may also include an earbud jack to provide communication between peripheral headphones and the computer device.
The input unit 306 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 307 is used to power the various components of the computer device 300. Optionally, the power supply 307 may be logically connected to the processor 301 through a power management system, so as to implement functions such as charging, discharging, and power consumption management through the power management system. The power supply 307 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other such components.
Although not shown in fig. 6, the computer device 300 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described in detail herein.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
As can be seen from the above, the computer device provided in this embodiment creates a plurality of combined groups in the special effect production software and renders the specified combined group with the particle plug-in, so that a fluid effect can be produced within a single piece of special effect production software, shortening production time and improving production efficiency. Moreover, the produced fluid effect can be adapted to other material pictures: a fluid effect corresponding to a new material picture can be generated simply by replacing the material picture in the existing effect with the new one, which saves production time and improves the reuse rate of the fluid special effect.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer-readable storage medium in which a plurality of computer programs are stored; the computer programs can be loaded by a processor to execute the steps of any animation production method provided by the embodiments of the present application. For example, a computer program may perform the following steps:
obtaining a shape mask layer, and performing mask processing on a specified material picture based on the shape mask layer to generate a first combined group, wherein the shape mask layer has a plurality of frames, and the mask area of each frame of the shape mask layer is different;
copying the first combined group, and performing inversion mask processing on the copied first combined group and the first combined group to obtain a second combined group;
performing a copying process on the shape mask layer, and performing an inversion mask process on the copied shape mask layer and the second combined group to obtain a third combined group;
setting a particle effect parameter required for realizing a target dynamic effect for the third combined group to obtain a dynamic effect combined group, wherein the dynamic effect combined group is the third combined group with the target dynamic effect;
and generating a target animation corresponding to the specified material picture based on the first combination group, the second combination group and the dynamic effect combination group, wherein the display level of the first combination group is lower than that of the second combination group, and the display level of the second combination group is lower than that of the dynamic effect combination group.
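The five steps above can be illustrated with a minimal per-frame sketch. This is a toy model, not the patented implementation (which runs inside special effect production software with particle plug-ins): layers and masks are arrays, and `mask`/`inverse_mask` are hypothetical helpers standing in for the software's mask and inversion-mask operations.

```python
import numpy as np

def mask(layer, m):
    # Keep the pixels of `layer` covered by the mask (m in [0, 1]).
    return layer * m

def inverse_mask(layer, m):
    # Keep the pixels of `layer` NOT covered by the mask.
    return layer * (1.0 - m)

# One frame of a 4x4 material picture; the shape mask covers the top half.
material = np.ones((4, 4))
m = np.zeros((4, 4))
m[:2, :] = 1.0

first = mask(material, m)          # first combined group: masked material
second = inverse_mask(first, m)    # copy of first, inversion-masked against it
third = inverse_mask(material, m)  # region outside the shape mask

# Display levels: first < second < third, so composite from the bottom up.
composite = first + second + third  # toy additive stack for illustration
```

Note that in this single-frame toy the copy and the original share the same mask, so `second` comes out empty; it is the frame-offset (time-staggering) step described later in the document that makes the copied group leave a visible rim.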
In one embodiment, before performing the mask processing on the specified material picture based on the shape mask layer to generate the first combined group, the method further includes:
in response to a combination group creation instruction, forming the specified material picture into a material combination group;
the mask processing of the specified material picture based on the shape mask layer includes:
when a mask processing instruction is detected, retrieving the specified material picture from the material combination group according to the mask processing instruction, and performing the mask processing on the specified material picture based on the shape mask layer to generate the first combined group.
In an embodiment, after generating the target animation corresponding to the specified material picture based on the first combination group, the second combination group, and the dynamic effect combination group, the method further includes:
when a material replacement instruction is detected, replacing the material combination group with a replacement material combination group, wherein the replacement material combination group is a combination group formed from a new specified material picture.
In one embodiment, after replacing the material combination group with the replacement material combination group, the method further includes:
performing material update processing on the first combined group, the second combined group, and the third combined group respectively based on the replacement material combination group to obtain a new first combined group, a new second combined group, and a new third combined group.
In an embodiment, after obtaining the new first combined group, the new second combined group, and the new third combined group, the method further includes:
setting particle effect parameters required for realizing a target dynamic effect for the new third combined group to obtain a new dynamic effect combined group, wherein the new dynamic effect combined group is a new third combined group with the target dynamic effect;
and generating a target animation corresponding to the new specified material picture based on the new first combined group, the new second combined group and the new dynamic effect combined group, wherein the display level of the new first combined group is lower than that of the new second combined group, and the display level of the new second combined group is lower than that of the new dynamic effect combined group.
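The replacement flow above can be sketched as a reference swap. The data model below is hypothetical (the names `material_group`, `first_group`, and so on are illustrative, not the software's identifiers); it shows why replacing the material combination group in one place is enough to regenerate every downstream group.

```python
# Toy model: each combined group references the material combination
# group by name rather than holding its own copy, so swapping the
# material in one place propagates to all groups.
project = {
    "material_group": {"picture": "water.png"},
    "first_group":  {"source": "material_group"},
    "second_group": {"source": "material_group"},
    "third_group":  {"source": "material_group"},
}

def replace_material(project, new_picture):
    # Single swap point: every group that reads "material_group" now
    # resolves to the new picture, yielding the new first/second/third
    # combined groups when the composition is re-rendered.
    project["material_group"]["picture"] = new_picture

replace_material(project, "lava.png")
resolved = project[project["first_group"]["source"]]["picture"]
```

This indirection is what lets the patent claim a reuse workflow: only the material combination group changes, while the mask layers and particle parameters are kept as-is.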
In one embodiment, before obtaining the shape mask layer, the method further includes:
responding to a layer creating instruction, and creating a shape layer with a specified shape according to the layer creating instruction;
in response to a key frame setting instruction for the shape layer, setting the sizes of the specified shape corresponding to a plurality of key frames of the shape layer to obtain a mask combination group, and using the mask combination group as the shape mask layer, wherein the size of the specified shape of the shape layer changes as the display time changes.
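The key-frame size setting can be sketched as simple interpolation between keyed values. Linear easing and the particular key-frame numbers below are assumptions for illustration; real special effect software offers other interpolation curves.

```python
# Hypothetical key frames: frame number -> size of the specified shape.
KEYFRAMES = {0: 10.0, 30: 40.0, 60: 25.0}

def size_at(frame, keyframes=KEYFRAMES):
    # Linearly interpolate the shape size between the two nearest keys;
    # clamp outside the keyed range.
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    for lo, hi in zip(frames, frames[1:]):
        if lo <= frame <= hi:
            t = (frame - lo) / (hi - lo)
            return keyframes[lo] + t * (keyframes[hi] - keyframes[lo])

# The mask combination group is then the per-frame sequence of sizes,
# i.e. a mask whose area differs on every frame.
mask_sizes = [size_at(f) for f in range(61)]
```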
In an embodiment, after copying the first combined group and before performing inversion mask processing on the copied first combined group and the first combined group to obtain the second combined group, the method further includes:
performing frame-offset processing on the copied first combined group based on the display start time point of the first combined group to obtain a processed copied first combined group, wherein the display start time point of the processed copied first combined group is earlier than that of the first combined group;
performing inversion mask processing on the copied first combined group and the first combined group to obtain the second combined group includes:
performing inversion mask processing on the processed copied first combined group and the first combined group to obtain the second combined group.
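A 1-D toy model shows why the frame offset matters: without it, the copied group and the original share the same mask and the inversion mask leaves nothing. All array shapes and the `offset` value below are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

# The shape mask covers the first t cells at frame t, so the masked
# material "grows" over time.
n_frames, width, offset = 6, 8, 2
masks = [np.concatenate([np.ones(t), np.zeros(width - t)])
         for t in range(n_frames)]
material = np.ones(width)
first = [material * m for m in masks]            # first combined group

# Frame-offset processing: the copy starts `offset` frames earlier, so
# at any display frame it is further along than the original (the last
# frame is held once the copy runs out).
shifted = [first[min(t + offset, n_frames - 1)] for t in range(n_frames)]

# Inversion mask of the shifted copy against the current mask keeps only
# the leading rim the copy covers but the original mask does not yet.
second = [s * (1.0 - m) for s, m in zip(shifted, masks)]
rim_width = second[2].sum()
```

The surviving rim is what the second combined group contributes visually: a moving edge ahead of the first combined group's fill.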
In an embodiment, after copying the shape mask layer and before performing inversion mask processing based on the copied shape mask layer and the second combined group to obtain the third combined group, the method further includes:
performing frame-offset processing on the copied shape mask layer based on the display start time point of the second combined group to obtain a processed copied shape mask layer, wherein the display start time point of the processed copied shape mask layer is later than that of the second combined group;
performing inversion mask processing on the copied shape mask layer and the second combined group to obtain the third combined group includes:
performing inversion mask processing based on the processed copied shape mask layer and the second combined group to obtain the third combined group.
In one embodiment, the particle plug-ins include a dynamic effect particle plug-in, a diffusion effect particle plug-in, and a light sensation effect particle plug-in;
setting the particle effect parameters required to achieve the target dynamic effect for the third combined group to obtain the dynamic effect combined group includes:
in response to a first effect adding instruction, setting a target dynamic particle parameter for the third combined group to obtain a first particle layer;
in response to a second effect adding instruction, setting a target diffusion particle parameter for the third combined group to obtain a second particle layer;
in response to a third effect adding instruction, setting a target light sensation particle parameter for the third combined group to obtain a third particle layer;
and forming a dynamic effect combination group based on the first particle layer, the second particle layer and the third particle layer, wherein the display level of the first particle layer is lower than that of the second particle layer, and the display level of the second particle layer is lower than that of the third particle layer.
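The display-level ordering of the three particle layers can be sketched as a sort before compositing. The layer records below are hypothetical stand-ins; only the bottom-to-top ordering (dynamic below diffusion below light sensation) comes from the text.

```python
# Composite lowest display level first so higher levels render on top.
layers = [
    {"name": "light_sensation", "level": 3},
    {"name": "dynamic",         "level": 1},
    {"name": "diffusion",       "level": 2},
]
render_order = [l["name"] for l in sorted(layers, key=lambda l: l["level"])]
# render_order lists layers bottom-to-top for back-to-front compositing.
```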
The storage medium may include read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disc, and the like.
The computer program stored in the storage medium can execute the steps of any animation production method provided by the embodiments of the present application. The embodiments create a plurality of combined groups in the special effect production software and render the specified combined group with the particle plug-in, so that a fluid effect can be produced within a single piece of special effect production software, shortening production time and improving production efficiency. Moreover, the produced fluid effect can be adapted to other material pictures: a fluid effect corresponding to a new material picture can be generated simply by replacing the material picture in the existing effect with the new one, which saves production time and improves the reuse rate of the fluid special effect.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The animation production method, animation production apparatus, computer device, and computer-readable storage medium provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementation of the present application, and the description of the embodiments is only intended to help in understanding the technical solution and core idea of the present application. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the present disclosure as defined by the appended claims.

Claims (12)

1. An animation production method, comprising:
obtaining a shape mask layer, and performing mask processing on a specified material picture based on the shape mask layer to generate a first combined group, wherein the shape mask layer has a plurality of frames, and the mask area of each frame of the shape mask layer is different;
copying the first combined group, and performing inversion mask processing on the copied first combined group and the first combined group to obtain a second combined group;
performing a copying process on the shape mask layer, and performing an inversion mask process on the copied shape mask layer and the second combined group to obtain a third combined group;
setting a particle effect parameter required for realizing a target dynamic effect for the third combined group to obtain a dynamic effect combined group, wherein the dynamic effect combined group is the third combined group with the target dynamic effect;
and generating a target animation corresponding to the specified material picture based on the first combination group, the second combination group and the dynamic effect combination group, wherein the display level of the first combination group is lower than that of the second combination group, and the display level of the second combination group is lower than that of the dynamic effect combination group.
2. The animation production method according to claim 1, wherein before performing the mask processing on the specified material picture based on the shape mask layer to generate the first combined group, the method further comprises:
in response to a combination group creation instruction, forming the specified material picture into a material combination group;
the mask processing of the specified material picture based on the shape mask layer comprises:
when a mask processing instruction is detected, retrieving the specified material picture from the material combination group according to the mask processing instruction, and performing the mask processing on the specified material picture based on the shape mask layer to generate the first combined group.
3. The animation production method according to claim 2, further comprising, after generating the target animation corresponding to the specified material picture based on the first combination group, the second combination group, and the dynamic effect combination group:
when a material replacement instruction is detected, replacing the material combination group with a replacement material combination group, wherein the replacement material combination group is a combination group formed from a new specified material picture.
4. The animation production method according to claim 3, further comprising, after replacing the material combination group with the replacement material combination group:
performing material update processing on the first combined group, the second combined group, and the third combined group respectively based on the replacement material combination group to obtain a new first combined group, a new second combined group, and a new third combined group.
5. The animation production method according to claim 4, further comprising, after obtaining the new first combined group, the new second combined group, and the new third combined group:
setting particle effect parameters required for realizing a target dynamic effect for the new third combined group to obtain a new dynamic effect combined group, wherein the new dynamic effect combined group is a new third combined group with the target dynamic effect;
and generating a target animation corresponding to the new specified material picture based on the new first combined group, the new second combined group and the new dynamic effect combined group, wherein the display level of the new first combined group is lower than that of the new second combined group, and the display level of the new second combined group is lower than that of the new dynamic effect combined group.
6. The animation production method according to claim 1, further comprising, before obtaining the shape mask layer:
responding to a layer creating instruction, and creating a shape layer with a specified shape according to the layer creating instruction;
in response to a key frame setting instruction for the shape layer, setting the sizes of the specified shape corresponding to a plurality of key frames of the shape layer to obtain a mask combination group, and using the mask combination group as the shape mask layer, wherein the size of the specified shape of the shape layer changes as the display time changes.
7. The animation production method according to claim 1, wherein after copying the first combined group and before performing inversion mask processing based on the copied first combined group and the first combined group to obtain the second combined group, the method further comprises:
performing frame-offset processing on the copied first combined group based on the display start time point of the first combined group to obtain a processed copied first combined group, wherein the display start time point of the processed copied first combined group is earlier than that of the first combined group;
performing inversion mask processing on the copied first combined group and the first combined group to obtain the second combined group comprises:
performing inversion mask processing on the processed copied first combined group and the first combined group to obtain the second combined group.
8. The animation production method according to claim 1, wherein after copying the shape mask layer and before performing inversion mask processing based on the copied shape mask layer and the second combined group to obtain the third combined group, the method further comprises:
performing frame-offset processing on the copied shape mask layer based on the display start time point of the second combined group to obtain a processed copied shape mask layer, wherein the display start time point of the processed copied shape mask layer is later than that of the second combined group;
performing inversion mask processing on the copied shape mask layer and the second combined group to obtain the third combined group comprises:
performing inversion mask processing based on the processed copied shape mask layer and the second combined group to obtain the third combined group.
9. The animation production method according to claim 1, wherein setting the particle effect parameters required to achieve the target dynamic effect for the third combined group to obtain the dynamic effect combined group comprises:
in response to a first effect adding instruction, setting a target dynamic particle parameter for the third combined group to obtain a first particle layer;
in response to a second effect adding instruction, setting a target diffusion particle parameter for the third combined group to obtain a second particle layer;
in response to a third effect adding instruction, setting a target light sensation particle parameter for the third combined group to obtain a third particle layer;
and forming a dynamic effect combination group based on the first particle layer, the second particle layer and the third particle layer, wherein the display level of the first particle layer is lower than that of the second particle layer, and the display level of the second particle layer is lower than that of the third particle layer.
10. An animation device, characterized in that the device comprises:
an acquisition unit configured to acquire a shape mask layer having a plurality of frames, each frame having a different mask region, and perform mask processing on a specified material picture based on the shape mask layer to generate a first combined group;
a first processing unit, configured to perform copy processing on the first combined group, and perform inversion mask processing on the first combined group obtained through copy processing and the first combined group to obtain a second combined group;
a second processing unit configured to perform a copy process on the shape mask layer, and perform an inversion mask process on the shape mask layer obtained by the copy process and the second combined group to obtain a third combined group;
a setting unit, configured to set a particle effect parameter required to achieve a target dynamic effect for the third combined group, so as to obtain a dynamic effect combined group, where the dynamic effect combined group is the third combined group with the target dynamic effect;
a generating unit configured to generate a target animation corresponding to the specified material picture based on the first combination group, the second combination group, and the dynamic effect combination group, wherein a display hierarchy of the first combination group is lower than a display hierarchy of the second combination group, and the display hierarchy of the second combination group is lower than a display hierarchy of the dynamic effect combination group.
11. A computer device comprising a processor, a memory and a computer program stored on the memory and capable of running on the processor, the computer program, when executed by the processor, implementing the steps of the animation method as claimed in any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the animation method as claimed in one of claims 1 to 9.
CN202210626173.9A 2022-06-02 2022-06-02 Animation production method, animation production device, computer equipment and computer readable storage medium Pending CN115082600A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210626173.9A CN115082600A (en) 2022-06-02 2022-06-02 Animation production method, animation production device, computer equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN115082600A true CN115082600A (en) 2022-09-20

Family

ID=83249258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210626173.9A Pending CN115082600A (en) 2022-06-02 2022-06-02 Animation production method, animation production device, computer equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115082600A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10232951A (en) * 1996-12-20 1998-09-02 Seiko Epson Corp Animation generating method and scenario generating method for animation
US20140002746A1 (en) * 2012-06-29 2014-01-02 Xue Bai Temporal Matte Filter for Video Matting
US8988422B1 (en) * 2010-12-17 2015-03-24 Disney Enterprises, Inc. System and method for augmenting hand animation with three-dimensional secondary motion
CN106504311A (en) * 2016-10-28 2017-03-15 腾讯科技(深圳)有限公司 A kind of rendering intent of dynamic fluid effect and device
CN106843631A (en) * 2016-11-15 2017-06-13 北京奇虎科技有限公司 A kind of method and apparatus for processing application icon
CN110738715A (en) * 2018-07-19 2020-01-31 北京大学 automatic migration method of dynamic text special effect based on sample
US20210006835A1 (en) * 2019-07-01 2021-01-07 Microsoft Technology Licensing, Llc Blurring to improve visual quality in an area of interest in a frame



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination