CN118079399A - Method and device for generating animation, storage medium and electronic device - Google Patents

Method and device for generating animation, storage medium and electronic device

Info

Publication number
CN118079399A
Authority
CN
China
Prior art keywords
animation
target
gesture
curve
animations
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410303234.7A
Other languages
Chinese (zh)
Inventor
杨家骏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202410303234.7A
Publication of CN118079399A
Legal status: Pending


Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 13/63: Generating or modifying game content by the player, e.g. authoring using a level editor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60: Methods for processing data by generating or executing the game program
    • A63F 2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F 2300/6607: Methods for processing data for rendering three dimensional images for animating game characters, e.g. skeleton kinematics

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method and a device for generating an animation, a storage medium, and an electronic device. The method comprises the following steps: obtaining a universal curve animation corresponding to a plurality of redirection gestures, wherein the universal curve animation is a curve animation to be shared among a plurality of virtual models and is used to characterize the degrees of bone deflection shared by the plurality of virtual models while executing the plurality of redirection gestures; producing a gesture asset file based on skeleton binding data of a target virtual model among the plurality of virtual models, wherein the gesture asset file includes a plurality of key frames corresponding to the plurality of redirection gestures, and the plurality of key frames store the bone deflection positions of the target virtual model while it executes the plurality of redirection gestures; and generating a target animation of the target virtual model using the gesture asset file and the universal curve animation. The application thereby solves the technical problems, found in related-art schemes in which an animation is shared by a plurality of virtual models, of poor fidelity of the resulting animation effect and high action production cost.

Description

Method and device for generating animation, storage medium and electronic device
Technical Field
The present application relates to the field of computer technology and the field of electronic game technology, and in particular, to a method and apparatus for generating an animation, a storage medium, and an electronic apparatus.
Background
In some existing game engines, if an expression animation is to be shared by a plurality of virtual models, the expression animation generally has to be applied separately to each virtual model, or a separate set of expression animations has to be exported for the virtual skeleton of each virtual model. However, when there are large shape differences between the virtual models (such as differences in facial form), directly applying the expression animation easily causes clipping (the "through-model" phenomenon) or unnatural expressions, while re-exporting the animation for the skeleton of each virtual model consumes huge resources (such as manpower and memory).
Therefore, how to share expression animations among multiple models in the environment of these game engines has become one of the important problems in the related art. No effective solution to this problem has yet been proposed.
It should be noted that the information disclosed in the background section above is only intended to enhance understanding of the background of the application, and may therefore include information that does not constitute prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
At least some embodiments of the present application provide a method, an apparatus, a storage medium, and an electronic device for generating an animation, so as to at least solve the technical problems of poor fidelity of the animation effect and high action production cost in related-art schemes in which a plurality of virtual models share an animation.
According to one embodiment of the present application, there is provided a method of generating an animation, including: obtaining a universal curve animation corresponding to a plurality of redirection gestures, wherein the universal curve animation is a curve animation to be shared among a plurality of virtual models and is used to characterize the degrees of bone deflection shared by the plurality of virtual models while executing the plurality of redirection gestures; producing a gesture asset file based on skeleton binding data of a target virtual model among the plurality of virtual models, wherein the gesture asset file includes a plurality of key frames corresponding to the plurality of redirection gestures, and the plurality of key frames store the bone deflection positions of the target virtual model while it executes the plurality of redirection gestures; and generating a target animation of the target virtual model using the gesture asset file and the universal curve animation.
According to one embodiment of the present application, there is also provided an apparatus for generating an animation, including: an acquisition module, configured to obtain a universal curve animation corresponding to a plurality of redirection gestures, wherein the universal curve animation is a curve animation to be shared among a plurality of virtual models and is used to characterize the degrees of bone deflection shared by the plurality of virtual models while executing the plurality of redirection gestures; a production module, configured to produce a gesture asset file based on skeleton binding data of a target virtual model among the plurality of virtual models, wherein the gesture asset file includes a plurality of key frames corresponding to the plurality of redirection gestures, and the plurality of key frames store the bone deflection positions of the target virtual model while it executes the plurality of redirection gestures; and a generation module, configured to generate a target animation of the target virtual model using the gesture asset file and the universal curve animation.
According to one embodiment of the present application, there is also provided a computer-readable storage medium in which a computer program is stored, wherein the computer program is configured to perform, when run, the method of generating an animation according to any one of the above.
According to one embodiment of the present application, there is also provided an electronic device, comprising a memory in which a computer program is stored and a processor arranged to run the computer program to perform the method of generating an animation according to any one of the above.
In at least some embodiments of the present application, a universal curve animation corresponding to a plurality of redirection gestures is obtained, wherein the universal curve animation is a curve animation to be shared among a plurality of virtual models and is used to characterize the degrees of bone deflection shared by the plurality of virtual models while executing the plurality of redirection gestures; a gesture asset file is produced based on skeleton binding data of a target virtual model among the plurality of virtual models, wherein the gesture asset file includes a plurality of key frames corresponding to the plurality of redirection gestures, and the plurality of key frames store the bone deflection positions of the target virtual model while it executes the plurality of redirection gestures; and a target animation of the target virtual model is generated using the gesture asset file and the universal curve animation. The application thus achieves the purpose of sharing the universal curve animation to a plurality of virtual models, realizing the technical effects of improving the fidelity of the animation shared by the virtual models and reducing the production cost of the shared animation, and thereby solving the technical problems of poor animation fidelity and high action production cost in related-art schemes in which a plurality of virtual models share an animation.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a block diagram of a hardware structure of a mobile terminal according to a method of generating an animation according to one embodiment of the present application;
FIG. 2 is a flow chart of a method of generating an animation according to one embodiment of the application;
FIG. 3 is a schematic illustration of an operational window of an alternative curve animation export tool, according to one embodiment of the application;
FIG. 4 is a schematic diagram of an alternative redirect gesture according to one embodiment of the application;
FIG. 5 is a schematic diagram of an alternative animation blending operation interface, according to an embodiment of the present application;
FIG. 6 is a block diagram of an apparatus for generating an animation according to one embodiment of the application;
fig. 7 is a schematic diagram of an electronic device according to an embodiment of the application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the description of the present application, the term "for example" is used to mean "serving as an example, illustration, or description". Any embodiment described as "for example" in this disclosure is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known structures and processes have not been described in detail so as not to obscure the description of the application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
In describing embodiments of the present application, partial terms or terms that appear are used in the following explanation:
Key frame: used to define changes in properties of an object, such as position, rotation, and scaling, during an animation. Key frames record the state of the animation at different points in time, and smooth animation transitions can be achieved by interpolating between key frames (a minimal sketch follows these definitions). In some game engines, animation clips are created from key frames and are managed and invoked in an animation controller.
Game engine: a software framework for developing and creating electronic games. A game engine provides a series of tools for graphics rendering, physical simulation, audio processing, artificial intelligence, animation, and other game-related functions. A first class of game engine offers better graphics rendering and can create more realistic game scenes and character models; it supports the C++ programming language, suits advanced programmers and professional developers, and has built-in Virtual Reality (VR) and Augmented Reality (AR) tool support, but its learning curve is steep and unfriendly to novices, the generated game files are large with long loading times, and its resource library and community support are relatively limited. A second class of game engine is user-friendly, with a gentle learning curve suitable for novices; it supports releases on multiple platforms (PC, mobile devices, consoles, and the like) and provides a huge resource library and active community support, making it convenient for users to obtain tutorials and resources. However, in this second class of engine it is difficult to conveniently share a universal curve animation to a plurality of virtual models while guaranteeing the visual effect of the animations generated for those models.
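For illustration only, the interpolation between key frames mentioned above can be sketched in a few lines of Python; the Keyframe type and the evaluate function below are hypothetical stand-ins, not part of any engine:

```python
from dataclasses import dataclass

@dataclass
class Keyframe:
    time: float   # seconds
    value: float  # e.g. a normalized curve value in [0, 1]

def evaluate(keys: list[Keyframe], t: float) -> float:
    """Linearly interpolate between the two keyframes surrounding time t."""
    if t <= keys[0].time:
        return keys[0].value
    for a, b in zip(keys, keys[1:]):
        if a.time <= t <= b.time:
            alpha = (t - a.time) / (b.time - a.time)
            return a.value + alpha * (b.value - a.value)
    return keys[-1].value

# e.g. an eye-closing curve: eyes fully open at 0.0 s, fully closed at 0.5 s
eye_close = [Keyframe(0.0, 0.0), Keyframe(0.5, 1.0)]
print(evaluate(eye_close, 0.25))  # 0.5, i.e. the eyes are half closed
```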
In one possible implementation of the present application, directed at the field of computer technology, the inventor studied the approaches of directly applying a common animation to a plurality of virtual models, or separately exporting an animation for the skeleton of each virtual model, in the scenario of generating a shared animation for virtual models. After careful practice and study, the inventor found that these approaches still suffer from the technical problems of frequent clipping, low fidelity of the animation effect, or high production cost.
The above-described method embodiments of the present application may be performed in a terminal device (e.g., a mobile terminal, a computer terminal, or a similar computing device). Taking a mobile terminal as an example, the mobile terminal may be a terminal device such as a smart phone, a tablet computer, a palmtop computer, a mobile internet device, or a game machine.
Fig. 1 is a block diagram of the hardware structure of a mobile terminal for a method of generating an animation according to one embodiment of the present application. As shown in fig. 1, a mobile terminal may include one or more processors 102 (only one is shown in fig. 1), a memory 104, a transmission device 106, an input/output device 108, and a display device 110. Taking as an example the method of generating an animation being applied to an electronic game scene through the mobile terminal, the processor 102 invokes and runs the computer program stored in the memory 104 to execute the method, and the model animations generated for the plurality of virtual models in the electronic game scene are transmitted through the transmission device 106 to the input/output device 108 and/or the display device 110, so that the model animations are presented to the player.
As also shown in fig. 1, the processor 102 may include, but is not limited to: a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processing (DSP) chip, a Microcontroller Unit (MCU), a Field Programmable Gate Array (FPGA), a Neural-network Processing Unit (NPU), a Tensor Processing Unit (TPU), an Artificial Intelligence (AI) processor, and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
In some optional embodiments, mainly those based on game scenes, the terminal device may further provide a human-machine interaction interface with a touch-sensitive surface, which can sense finger contacts and/or gestures for interacting with a graphical user interface (GUI). The human-machine interaction functions may include interactions such as creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, sending and receiving electronic mail, call interfaces, playing digital video, playing digital music, and/or web browsing; the executable instructions for performing these human-machine interaction functions are configured/stored in a computer program product or readable storage medium executable by one or more processors.
The above-mentioned method embodiments of the present application may also be executed on a server. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), big data, and artificial intelligence platforms. Taking as an example the method of generating an animation being applied to an electronic game scene through an electronic game server, the server can, based on the method and the shared universal curve animation, generate model animations in the electronic game scene for a plurality of virtual models and provide them to the player (for example, rendered and displayed on the screen of the player's terminal, or presented through holographic projection).
According to one embodiment of the present application, an embodiment of a method of generating an animation is provided. It should be noted that the steps shown in the flowchart of the accompanying drawings may be performed in a computer system, such as with a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in a different order.
In this embodiment, a method for generating an animation operating on the mobile terminal is provided, and fig. 2 is a flowchart of a method for generating an animation according to an embodiment of the present application, as shown in fig. 2, the method includes the following steps:
step S21, obtaining universal curve animation corresponding to the plurality of redirection gestures, wherein the universal curve animation is the curve animation to be shared with the plurality of virtual models, and the universal curve animation is used for representing the bone deflection degree shared in the process of executing the plurality of redirection gestures by the plurality of virtual models;
Step S22, based on skeleton binding data of a target virtual model in the plurality of virtual models, a gesture asset file is produced, wherein the gesture asset file comprises a plurality of key frames corresponding to a plurality of redirection gestures, and the plurality of key frames are used for storing skeleton deflection positions in the process that the target virtual model executes the plurality of redirection gestures;
Step S23, generating a target animation of the target virtual model by using the gesture asset file and the universal curve animation.
The plurality of redirection gestures may be skeletal gestures corresponding to a plurality of expression actions that the virtual model is capable of making. For example, when producing a shared facial animation for multiple virtual models, the redirection gestures may be used to adjust the facial skeleton of a virtual model into a plurality of specified gestures (e.g., open mouth, closed eyes, raised mouth corners, frowning). The universal curve animation is a pre-produced curve animation to be shared with a plurality of virtual models; it is controlled by a plurality of curves, each of which determines the execution degree of one redirection gesture. For example, suppose the virtual character in a universal curve animation closes its eyes and opens its mouth. The universal curve animation is then controlled by two curves: one determines the degree of eye closure (a curve value of 1 corresponds to the eyes fully closed, 0 to the eyes fully open), and the other determines the degree of mouth opening (a curve value of 1 corresponds to the mouth fully open, 0 to the mouth fully closed).
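Continuing the sketch from the terminology section, a universal curve animation can be pictured as a set of named, normalized curves, one per redirection gesture; the names below are hypothetical and reuse Keyframe/evaluate from the earlier snippet:

```python
# A universal curve animation as a set of named, normalized curves,
# one curve per redirection gesture.
universal_curve_animation = {
    "eye_close":  [Keyframe(0.0, 0.0), Keyframe(0.5, 1.0), Keyframe(1.0, 0.0)],
    "mouth_open": [Keyframe(0.0, 0.0), Keyframe(1.0, 1.0)],
}

def execution_degrees(t: float) -> dict[str, float]:
    """Degree (0..1) to which each redirection gesture is executed at time t."""
    return {name: evaluate(curve, t)
            for name, curve in universal_curve_animation.items()}
```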
The gesture asset file is an animation file including a plurality of key frames corresponding to the plurality of redirection gestures; that is, each key frame stores the bone position deflection data corresponding to one redirection gesture. For example, a certain key frame stores the data of facial bone offset and rotation during execution of the eye-closing gesture.
The above steps in embodiments of the present application may be run in the second class of game engines. An animation sharing state machine is built in the game engine; based on the curve animation to be shared to the plurality of virtual models and the gesture asset file produced for each virtual model, the model animation obtained by applying the curve animation to each virtual model is generated.
In the related art, because there may be large shape differences between the virtual models, directly applying the skeletal data of the animation to be shared to the skeleton of each virtual model may make the expression actions of the models unnatural, and may even cause clipping in the animation. With the scheme provided by the embodiments of the present application, animation sharing is performed based on the curve animation, and the bone deflection degrees of the redirection gestures can be reused on the skeletons of the virtual models, so that the virtual model in the model animation generated for each virtual model accurately shows the shared animation effect.
The plurality of virtual models may be various types of virtual character models in a virtual scene in the field of electronic games. For example, the game type corresponding to the virtual scene may be: action games (e.g., first-person or third-person shooters, two- or three-dimensional fighting games, war action games, sports action games), adventure games (e.g., exploration, collection, and puzzle games), simulation games (e.g., sandbox, life-simulation, strategy, city-building, and business simulation games), role-playing games, and casual games (e.g., chess and card games, recreation games, music rhythm games, trading and nurturing games), and the like.
In the embodiments of the present application, a universal curve animation corresponding to a plurality of redirection gestures is obtained, wherein the universal curve animation is a curve animation to be shared among a plurality of virtual models and is used to characterize the degrees of bone deflection shared by the plurality of virtual models while executing the plurality of redirection gestures; a gesture asset file is produced based on skeleton binding data of a target virtual model among the plurality of virtual models, wherein the gesture asset file includes a plurality of key frames corresponding to the plurality of redirection gestures, and the plurality of key frames store the bone deflection positions of the target virtual model while it executes the plurality of redirection gestures; and a target animation of the target virtual model is generated using the gesture asset file and the universal curve animation. The application thus achieves the purpose of sharing the universal curve animation to a plurality of virtual models, improving the fidelity of the animation shared by the virtual models and reducing the production cost of the shared animation, and thereby solving the technical problems of poor animation fidelity and high action production cost in related-art schemes in which a plurality of virtual models share an animation.
The above-described methods of embodiments of the present application are further described below.
Optionally, the universal curve animation includes a plurality of sub-animations corresponding to the plurality of redirection gestures; in step S21, obtaining the universal curve animation may include the following step:
Step S211, for any target redirection gesture among the plurality of redirection gestures, obtaining the target sub-animation corresponding to the target redirection gesture.
In the related art, the common practice when reusing an animation on other virtual models is to export the bone displacement information corresponding to the animation, that is, to export a skeletal animation, and then apply that information directly to the skeletons of the other virtual models. However, because of the appearance differences between different virtual models, doing so easily leads to clipping or unnatural animation effects. In contrast, according to the method steps described above, the present application exports the animation to be shared to a plurality of virtual models as a curve animation.
Specifically, the animation to be shared may include a plurality of sub-animations corresponding to the plurality of redirection gestures. The key in exporting these sub-animations is to export the plurality of virtual curves that drive them, where the curve value of each virtual curve characterizes the execution degree of the corresponding redirection gesture.
On this basis, sharing the curve animation to a plurality of virtual models allows the bones of each virtual model to be driven to deflect to the corresponding degree according to the curve values and thus assume the redirection gesture, avoiding the clipping and unnatural expression actions that appearance differences would otherwise cause.
Optionally, in step S211, acquiring the target sub-animation may include the following steps:
Step S2111, exporting the target sub-animation based on a target animation curve determined by a controller parameter of a target virtual controller, where the target virtual controller is the bone controller corresponding to the target redirection gesture and is configured to control the bone positions of the target virtual model to deflect within a target range corresponding to the target redirection gesture, and the value range of the controller parameter is a normalized range corresponding to the target range.
In one exemplary application scenario, the animation to be shared is produced based on the skeleton of an original virtual model. The skeleton of the original virtual model adopts a controller binding scheme: a plurality of bone controllers control the motion of the bones, and the controller parameters of each bone controller determine the deflection motion data of the bones bound to it. When a sub-animation corresponding to a certain redirection gesture in the animation to be shared is exported as a curve animation (namely, the target sub-animation), the at least one bone controller involved in executing that redirection gesture is determined as the target virtual controller, the target animation curve is determined from the controller parameters of the target virtual controller, and the target sub-animation is exported based on the target animation curve.
Specifically, for a certain redirection gesture in the animation to be shared, such as eye closure, the target virtual controller is configured to control the bone positions of some of the bones of the target virtual model to deflect within a target range: these facial bones undergo deflection motion while the target virtual model performs the eye-closure gesture, and the target range covers the range of bone positions traversed during that motion. On this basis, the controller parameter of the target virtual controller may be normalized, that is, its value range is 0 to 1, corresponding to the target range. For example, a first boundary value of the target range corresponds to the bone positions when the eyes are fully open and a second boundary value to the bone positions when the eyes are fully closed; accordingly, the first boundary value corresponds to the controller parameter value 0 and the second boundary value to the value 1.
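The correspondence between the target range and the normalized controller parameter is a linear rescaling; the following minimal sketch illustrates it for a single scalar bone position (the function names are hypothetical, not from the patent):

```python
def normalized_param(bone_pos: float, first_boundary: float,
                     second_boundary: float) -> float:
    """Map a bone position inside the target range onto the 0..1 parameter.

    E.g. first_boundary = bone position with the eyes fully open (param 0),
    second_boundary = bone position with the eyes fully closed (param 1).
    """
    return (bone_pos - first_boundary) / (second_boundary - first_boundary)

def bone_pos_from_param(param: float, first_boundary: float,
                        second_boundary: float) -> float:
    """Inverse mapping: drive the bone from a normalized curve value."""
    return first_boundary + param * (second_boundary - first_boundary)
```

With this mapping, a curve value of 0.5 drives the relevant bones exactly halfway between the two boundary positions.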
For example, in one application scenario the operation window of a curve animation export tool is shown in fig. 3. The sub-animations corresponding to the plurality of redirection gestures in the animation to be shared are denoted curve animation 1 through curve animation 4 (in practice the number of redirection gestures is usually far more than 4). When the animation to be shared is exported as a curve animation, the animation curves corresponding to the sub-animations may be displayed on the right side of the operation window, as shown in fig. 3. Each animation curve corresponds to one sub-animation, and its curve value lies between 0 and 1, where 0 indicates that the corresponding redirection gesture is not executed at all (e.g., the fully open eye state for the eye-closing gesture) and 1 indicates that it is fully executed (e.g., the fully closed eye state). The abscissa of an animation curve is the animation time; that is, each curve characterizes how the execution degree of the corresponding redirection gesture changes over a period of time.
In other application scenarios, if the skeleton of the original virtual model is not bound to bone controllers when the curve animation is exported, the target animation curve can be determined directly from the bone position data of the relevant bones while the redirection gesture is executed, and the target sub-animation exported from it.
Optionally, in step S22, creating a gesture asset file based on the skeletal binding data of the target virtual model of the plurality of virtual models may include performing the steps of:
Step S221, retrieving, based on the skeleton binding data of the target virtual model, a plurality of sets of gesture data corresponding to the plurality of redirection gestures;
Step S222, baking the plurality of sets of gesture data into a plurality of key frames to obtain the gesture asset file.
In an exemplary application scenario, for the target virtual model (any one of the plurality of virtual models), the model skeleton is controlled using a bone-controller binding scheme, and the skeleton binding data includes the skeleton data of the model bones of the target virtual model, the controller data of the bone controllers bound to the target virtual model, and the binding relationships between the model bones and the bone controllers.
Further, the plurality of sets of gesture data corresponding to the plurality of redirection gestures are retrieved from a redirection gesture list associated with the above bone-controller binding scheme. Each set of gesture data comprises the bone motion data corresponding to one redirection gesture, determined by that redirection gesture and the skeleton binding data; the bone motion data characterizes the motion trajectories of the model skeleton of the target virtual model when performing the redirection gesture. The sets of gesture data are then baked into a plurality of key frames to obtain the gesture asset file: the gesture asset file is an animation comprising these key frames, each of which stores the bone motion data corresponding to one redirection gesture. In other words, each animation frame of the gesture asset file shows the target virtual model executing one redirection gesture.
Optionally, in step S222, baking the plurality of sets of gesture data into a plurality of keyframes may include performing the steps of:
Step S2221, in response to target gesture data among the plurality of sets of gesture data meeting a target condition, activating the target virtual controller of the target redirection gesture corresponding to the target gesture data to move according to its control script, so that the model bones bound to the target virtual controller move synchronously; obtaining the bone deflection positions of the model bones; and storing the bone deflection positions to the target key frame corresponding to the target redirection gesture, where the target condition is that no invalid value exists in the value range of the controller parameter of the target virtual controller.
In one exemplary application scenario, a script is written to retrieve the sets of gesture data corresponding to the plurality of redirection gestures and to bake them into key frames. The steps implemented by the script include: traversing the plurality of redirection gestures to obtain the at least one target virtual controller (usually several) corresponding to each redirection gesture; then traversing the target virtual controllers, obtaining the control script of each one, and determining the control range upper limit (uplimit) and lower limit (lowlimit) corresponding to that control script; and then invoking the target baking method, which judges the validity of the upper and lower limits of the currently traversed control script. If the limits are valid values (for example, uplimit > 0 or lowlimit < 0), the currently traversed target virtual controller can control a redirection gesture within the control range determined by those limits; a move command then activates the controller to move according to its control script, so that the model bones bound to it perform the corresponding deflection motion, yielding the bone deflection positions of the model bones. Finally, the bone deflection positions are baked to a key frame, forming the gesture animation corresponding to the currently traversed redirection gesture in that key frame.
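A compressed Python sketch of the traversal and baking logic described above; every object and method here (get_controllers, move, bake, and so on) is a hypothetical stand-in for the DCC or engine API, with uplimit/lowlimit named as in the text:

```python
def bake_gesture_asset(redirection_gestures, get_controllers, animation):
    """Bake one key frame per redirection gesture (a sketch, not engine API).

    get_controllers(gesture) yields controller objects exposing uplimit,
    lowlimit, move(value) and bound_bones(); animation.bake(frame, data)
    writes the sampled bone deflection positions into a key frame.
    """
    for frame, gesture in enumerate(redirection_gestures):
        for controller in get_controllers(gesture):
            # Validity check on the control range limits.
            if controller.uplimit > 0 or controller.lowlimit < 0:
                # Drive the controller to its extreme; the bound model
                # bones deflect along with it.
                controller.move(controller.uplimit or controller.lowlimit)
        bone_positions = {bone.name: bone.deflection()
                          for c in get_controllers(gesture)
                          for bone in c.bound_bones()}
        animation.bake(frame, bone_positions)  # one gesture per key frame
```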
In the above production of the gesture asset file, the sets of gesture data corresponding to the plurality of redirection gestures are retrieved and baked to key frames. The retrieval is implemented as a two-level traversal in the script, performing range determination, range validity checking, controller activation, and bone data acquisition for the control script of each target virtual controller of each redirection gesture. Baking may run synchronously with retrieval, that is, each target virtual controller moves according to its control script, the bone deflection data is acquired, and that data is immediately baked to a key frame; alternatively, baking may run after retrieval finishes, that is, once the bone deflection data of all the redirection gestures has been obtained, the data of each redirection gesture is baked to a key frame in sequence.
For example, in one application scenario, the facial redirection gestures of the virtual model include a right-eye-closed gesture. Baking the facial bone deflection data of the target virtual model performing this gesture to a key frame gives the image shown in fig. 4: the state shown there is the moment the right eye is fully closed in the right-eye-closing animation of the target virtual model, and the corresponding key frame stores the bone deflection data of the facial bones over the course of the right eye going from fully open to fully closed.
Based on the above method steps, the generated gesture asset file can be used as a gesture library of the target virtual model, supporting reuse of the plurality of redirection gestures in different scenarios.
Optionally, in step S23, generating a target animation of the target virtual model using the gesture asset file and the universal curve animation may include performing the steps of:
step S231, splitting the gesture asset file to obtain a plurality of gesture animations corresponding to the plurality of redirection gestures;
step S232, driving the plurality of gesture animations to perform animation mixing by using the universal curve animation, and generating a target animation.
In an exemplary application scenario, the gesture asset file created for the target virtual model in the foregoing process includes a plurality of key frames. When the universal curve animation is to be applied to the target virtual model, a plurality of gesture animations corresponding to the plurality of redirection gestures are split out of the gesture asset file; these are single-frame animations, each corresponding to one key frame. The redirection gestures in these gesture animations are then blended according to the universal curve animation to generate the target animation. Specifically, the animation curves of the sub-animations in the universal curve animation drive the blending of the gesture animations: the sub-animations and the gesture animations correspond to the same redirection gestures, and during blending the animation curve of each sub-animation determines the blending intensity of the corresponding gesture animation.
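A sketch of this driving step under the same hypothetical representation as the earlier snippets: the curves supply per-gesture weights, which scale the single-frame gesture poses during blending:

```python
def blend_offsets(gesture_clips, weights):
    """Weighted blend of single-frame gesture poses (bone offsets only).

    gesture_clips: {gesture_name: {bone_name: offset}}, split from the
    gesture asset file, one key frame per gesture.
    weights: {gesture_name: curve value in 0..1}, e.g. execution_degrees(t)
    from the earlier sketch.
    """
    blended = {}
    for gesture, pose in gesture_clips.items():
        w = weights.get(gesture, 0.0)
        for bone, offset in pose.items():
            blended[bone] = blended.get(bone, 0.0) + w * offset
    return blended
```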
Optionally, in step S232, the step of generating the target animation corresponding to the target virtual model by driving the plurality of gesture animations to perform animation mixing using the universal curve animation may include the following steps:
step S233, creating a target animation state machine by using a plurality of gesture animations and animation mixing tools;
Step S234, inputting the general curve animation into a target animation state machine, and driving a plurality of gesture animations to perform animation mixing to generate a target animation.
The animation mixing tool described above is a pre-developed tool for controlling the blending of multiple animations. It supports adding a plurality of animations and automatically blends them according to the values of one or more parameters; the resulting blended animation transitions smoothly across different parameter values, achieving a natural and fluid animation effect. The blending of the animations may be accomplished through interpolation calculations.
In one application scenario, the animation mixing tool can generate the target animation state machine from the plurality of gesture animations with one click. Alternatively, the tool can integrate the function of splitting the gesture asset file and generate the target animation state machine from the gesture asset file with one click. The animation mixing tool implements the function of the blending overlay layer in the target animation state machine.
The target animation state machine may include a plurality of gesture state machines corresponding to the plurality of gesture animations, and each gesture state machine is configured to control the target virtual model to perform an animation behavior of the corresponding gesture animation. When the target animation of the target virtual model is manufactured, the target animation state machine is used for controlling the target virtual model to carry out smooth transition and switching between animation states corresponding to the gesture animations so as to enable the target animation to be smooth and natural.
In the target animation state machine, the animation mixing tool can perform animation mixing on the multiple gesture animations according to the current state of the target animation state machine to obtain a mixing result; further, a target animation having a mixed animation effect is generated based on the mixed result. The current states of the plurality of gesture state machines in the target animation state machine are used for determining the mixing intensity (or mixing weight) corresponding to the plurality of gesture animations when the animations are mixed.
Optionally, in step S233, creating a target animation state machine using a plurality of gesture animations and animation mixing tools may include performing the steps of:
step S2331, importing the plurality of gesture animations into the animation mixing tool, wherein the mixing intensities of the plurality of gesture animations in the animation mixing tool are controlled by a plurality of intensity parameters;
step S2332, setting the types of the plurality of gesture animations as superimposed animations in the animation mixing tool;
Step S2333, creating a target animation state machine based on the animation mixing tool.
The animation mixing tool includes a blending overlay layer in which the plurality of gesture animations are blended by the tool. After the gesture animations are imported into the animation mixing tool, they serve as the operands of the blending operation; when the tool performs the blend, the mixing intensity of each gesture animation is determined by the intensity parameters, supporting the blended superposition of the gesture animations.
Before the target animation state machine is created, the animation types of the gesture animations imported into the blending overlay layer are set to superimposed animation, which marks an animation as one that supports additive blending. For example, the gesture animations are set to the Additive animation type, meaning their animation effects are superimposed together in the blended result instead of one effect replacing (or overriding) another. Setting the gesture animations to the Additive type therefore allows more complex blended animation effects.
The target animation state machine is created based on the animation mixing tool after the plurality of gesture animations have been imported and their types set to superimposed animation. In addition, the target animation state machine further comprises a plurality of region animation layers corresponding to the plurality of model subareas of the target virtual model, as well as a basic gesture animation layer. For example, the region animation layers include: a face region animation layer for playing the curve animation of the face of the target virtual model, a mouth region animation layer for playing the curve animation of the mouth, and an eye region animation layer for playing the curve animation of the eyes.
Taking the eye region animation layer as an example, the gesture animations corresponding to the eye-related redirection gestures (such as left-eye and right-eye blinks) are exported and added to the eye region animation layer, and the animation of this layer is used to override the eye animation of the layer above it for the target virtual model. That is, the eye region animation layer is used to produce a general blink animation, so that a plurality of virtual models can play the blink animation flexibly and automatically. The face region animation layer and the mouth region animation layer implement analogous functions for their regions.
The basic gesture animation layer is used to generate a full-frame basic gesture (for example, a full-frame basic gesture used when producing facial expression animation), which serves as the base onto which the plurality of gesture animations are superimposed. That is, in the blending overlay layer described above, after the animation mixing tool blends the gesture animations to obtain a blended result, that result is superimposed on the basic gesture, and the superimposed result is used to generate the target animation.
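In symbols, and assuming the Additive behaviour described above (an assumption consistent with, but not quoted from, the text), the superposition onto the basic gesture can be written as

$$P_{\mathrm{target}}(t) = P_{\mathrm{base}} + \sum_{i=1}^{n} w_i(t)\,\bigl(P_i - P_{\mathrm{base}}\bigr),$$

where $P_i$ is the bone pose baked into the key frame of the $i$-th gesture animation, $P_{\mathrm{base}}$ is the basic gesture supplied by the basic gesture animation layer, and $w_i(t) \in [0,1]$ is the intensity parameter read from the corresponding animation curve of the universal curve animation.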
The creation process of the state machine can be completed through a pre-written script.
Optionally, in step S234, inputting the universal curve animation into the target animation state machine to generate the target animation may include the following steps:
Step S2341, inputting the universal curve animation into the target animation state machine, and determining the plurality of intensity parameters using the universal curve animation;
Step S2342, driving the plurality of gesture animations to blend based on the plurality of intensity parameters, generating the target animation.
The universal curve animation includes the plurality of sub-animations corresponding to the plurality of redirection gestures, and the plurality of intensity parameters are determined by these sub-animations. That is, the greater the degree to which the target virtual model executes the corresponding redirection gesture in a given sub-animation, the greater the intensity parameter of that gesture animation in the blend.
According to the above method steps, fig. 5 provides a schematic diagram of an animation mixing operation interface. As shown in fig. 5, the animation mixing tool is displayed in one of the sub-windows of the interface, where the plurality of gesture animations are controlled by a plurality of parameters whose names may be the same as the animation names of the gesture animations. While producing the target animation for the target virtual model, when a call to a parameter with the same name as any of these parameters is detected, the animation mixing tool uses the animation curve corresponding to that parameter to control the participation of the corresponding gesture animation in the blend.
Optionally, in step S2341, determining the plurality of intensity parameters using the universal curve animation may include the following step:
Step S2343, for any target redirection gesture among the plurality of redirection gestures, using the curve value of the target curve corresponding to the target redirection gesture as the intensity of the target gesture animation corresponding to it, where the target curve is the animation curve of the target sub-animation corresponding to the target redirection gesture among the plurality of sub-animations of the universal curve animation.
In an exemplary application scenario, after the universal curve animation is input into the target animation state machine, its animation curves are imported into the animation mixing tool in the blending overlay layer. The curve value of each animation curve lies in the range 0 to 1 and serves as the intensity parameter of the corresponding gesture animation: the larger the curve value, the greater the degree to which the target virtual model executes the corresponding redirection gesture, and the higher the blending intensity (or blending weight) of that gesture animation when the gesture animations are blended.
Optionally, in step S2342, driving the plurality of gesture animations based on the plurality of intensity parameters to perform animation mixing to generate the target animation may include performing the steps of:
In step S2344, based on the intensity parameters corresponding to the gesture animations, the animation data of the gesture animations are blended to generate the target animation, where the blending calculation includes an offset blending calculation and a rotation blending calculation.
At a given moment of the blending calculation, the values of the intensity parameters are the blending weights with which the gesture animations participate; a weighted average is computed over the animation effects (such as bone deflection degrees) of the gesture animations at the current moment, and the target animation is generated from the result.
The animation effects of the gesture animations may be characterized by bone offset and bone rotation. When the gesture animations are blended, the offsets and the rotations may each be blended and averaged separately to obtain the calculation result.
The intensity parameters are determined by the animation curves of the universal curve animation; over the animation duration, the changing curve values represent the motion of the model skeleton under the different redirection gestures, that is, the intensity parameters take values that vary across the animation duration of the universal curve animation. The blending calculation may be performed on the animation data of the gesture animations at a plurality of intermediate times within that duration.
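Offsets can be blended with a weighted sum, while rotations need orientation-aware interpolation. The sketch below uses normalized quaternion lerp (nlerp), one common approximation; it is illustrative only and not taken from the patent or any engine API:

```python
import math

def nlerp(q0, q1, t):
    """Normalized linear interpolation between quaternions (w, x, y, z)."""
    if sum(a * b for a, b in zip(q0, q1)) < 0.0:  # take the shorter arc
        q1 = tuple(-c for c in q1)
    q = tuple((1 - t) * a + t * b for a, b in zip(q0, q1))
    norm = math.sqrt(sum(c * c for c in q))
    return tuple(c / norm for c in q)

def blend_bone(rest_offset, rest_rot, gesture_offsets, gesture_rots, weights):
    """Blend one bone: offsets by weighted sum, rotations by chained nlerp."""
    offset = tuple(
        r + sum(w * (o[i] - r) for o, w in zip(gesture_offsets, weights))
        for i, r in enumerate(rest_offset))
    rot = rest_rot
    for q, w in zip(gesture_rots, weights):
        rot = nlerp(rot, q, w)  # order-dependent approximation for many poses
    return offset, rot
```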
As also shown in fig. 5, the target animation state machine displayed in the animation mixing interface also provides an inspection tool. The inspection tool can be used to inspect, for a plurality of preset emotions (such as surprise, happiness, sadness, and confusion), the animation effect of the expression animation obtained by blending the redirection gesture animations of the target virtual model. If the animation effect contains unnatural performances that fall short of expectations, the target animation state machine can be adjusted.
As also shown in FIG. 5, the target animation state machine displayed by the animation mixing operation interface further provides an active invocation tool. The active invocation tool provides a plurality of parameter control bars; when a drag operation by the user on a parameter control bar is detected, the active invocation tool invokes the gesture animation corresponding to that control bar and controls the target virtual model to execute the corresponding animation behavior. When drag operations by the user on at least two parameter control bars are detected, the corresponding at least two gesture animations are invoked and the target virtual model is controlled to execute the mixed animation behavior corresponding to those gesture animations.
As also shown in FIG. 5, the target animation state machine displayed by the animation mixing operation interface further provides a preview sub-window. During animation mixing, checking, or active invocation, the animation behavior executed by the target virtual model can be displayed to the user through the preview sub-window, so that the user can intuitively observe the current animation mixing effect.
Based on the steps of the method provided by the application, a prefabricated universal curve animation can be shared among a plurality of virtual models. Even if large appearance differences exist among the virtual models, they can all perform the same expression motion defined by the universal curve animation, so the shared animation is well produced and the animation production cost is reduced.
Optionally, the method for generating an animation may further include the following steps:
Step S24, deriving curve animations to be multiplexed corresponding to a plurality of redirection gestures based on a first facial expression animation of the first virtual character;
Step S25, generating a first facial expression animation of the target virtual model by utilizing the gesture asset file and the curve animation to be multiplexed.
In the method provided by the embodiment of the application, the universal curve animation may be universal data derived from predetermined redirection gesture data, so that it can be shared among a plurality of virtual models. In particular, the universal curve animation may be a curve animation derived from a first facial expression animation of a first virtual character; a target animation for a second virtual character can then be generated using the gesture asset file corresponding to the second virtual character together with the universal curve animation. The target animation is a multiplexed animation obtained by multiplexing the universal curve animation onto the second virtual character, thereby realizing the function of reusing an animation of one virtual character for another virtual character.
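A sketch of the export step under stated assumptions: expression_animation is a hypothetical handle to the first virtual character's facial expression animation, and its key_times and controller_value methods are assumed for this illustration rather than taken from any real engine API. The exported curves can then be fed into the same mixing pipeline (for instance, the generate_target_animation sketch above) together with the second character's gesture asset file:

    def export_curves_to_multiplex(expression_animation, controller_names):
        """For each redirecting-gesture controller used by the source facial
        expression animation, read its normalized parameter track over time
        and store it as a reusable AnimationCurve (see the earlier sketch)."""
        curves = []
        for name in controller_names:
            keys = [(t, expression_animation.controller_value(name, t))
                    for t in expression_animation.key_times(name)]
            curves.append(AnimationCurve(name=name, keys=keys))
        return curves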
It should be noted that, when some game engines process a superimposed animation containing skeleton rotation information, the skeleton rotation may fail to transition smoothly from 0 to the maximum value as the parameter value grows; instead, the rotation jumps to the maximum value as soon as the parameter becomes slightly larger than 0. In this case, a non-expressive gesture animation may be added to the animation mixing tool before the animations are mixed, and this non-expressive gesture animation is then mixed together with the plurality of gesture animations. The non-expressive gesture animation is obtained by deleting the offset information (retaining only the rotation information) from an animation in which the target virtual model remains in the original pose (i.e., a pose without any expression).
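A sketch of this workaround, under the assumption stated above that the engine normalizes rotation weights across the blended inputs (so any nonzero weight would otherwise snap the rotation to its maximum); the names are hypothetical, and the rest pose receives the leftover weight so the normalized blend ramps smoothly:

    import numpy as np

    def with_rest_pose(gesture_poses, weights, bone_names):
        """Insert an expressionless rest pose (offsets deleted; rotations kept
        at the original pose, i.e. zero in additive space) with the remaining
        weight 1 - sum(other weights), clamped at zero."""
        rest = {bone: (np.zeros(3), np.zeros(3)) for bone in bone_names}
        poses = dict(gesture_poses, rest_pose=rest)
        mixed = dict(weights, rest_pose=max(0.0, 1.0 - sum(weights.values())))
        return poses, mixed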
From the description of the above embodiments, it will be clear to those skilled in the art that the method according to the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is preferred. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., a magnetic disk or an optical disc) and comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the method according to the embodiments of the present application.
In this embodiment, an animation generating device is further provided. The device is used to implement the foregoing embodiments and preferred implementations, and what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 6 is a block diagram illustrating an apparatus for generating an animation according to an embodiment of the present application, as shown in fig. 6, the apparatus comprising:
The obtaining module 601 is configured to obtain a universal curve animation corresponding to the plurality of redirecting gestures, where the universal curve animation is a curve animation to be shared among the plurality of virtual models and is used to characterize the bone deflection degree shared by the plurality of virtual models in the process of executing the plurality of redirecting gestures;
A making module 602, configured to make a gesture asset file based on skeletal binding data of a target virtual model in the plurality of virtual models, where the gesture asset file includes a plurality of keyframes corresponding to the plurality of redirecting gestures, the plurality of keyframes being configured to store skeletal deflection positions of the target virtual model during execution of the plurality of redirecting gestures;
The generating module 603 is configured to generate a target animation of the target virtual model using the gesture asset file and the universal curve animation.
Optionally, the universal curve animation includes a plurality of sub-animations corresponding to the plurality of redirecting gestures; the obtaining module 601 is further configured to: for any target redirecting gesture among the plurality of redirecting gestures, obtain the target sub-animation corresponding to the target redirecting gesture.
Optionally, the obtaining module 601 is further configured to: derive the target sub-animation based on a target animation curve determined by controller parameters of a target virtual controller, where the target virtual controller is a skeleton controller corresponding to the target redirecting gesture, the target virtual controller is used to control the skeleton position of the target virtual model to deflect within a target range corresponding to the target redirecting gesture, and the value range of the controller parameters is a normalized range corresponding to the target range.
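As a small illustration of this normalization (the function name is hypothetical), the controller parameter in [0, 1] maps linearly onto the target deflection range of the bound bone:

    def denormalize_controller(param: float, target_min: float, target_max: float) -> float:
        """Map a normalized controller parameter to a bone deflection inside
        the target range; e.g. denormalize_controller(0.5, 0.0, 30.0) == 15.0."""
        param = min(1.0, max(0.0, param))  # clamp to the normalized range
        return target_min + param * (target_max - target_min)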
Optionally, the making module 602 is further configured to: invoke, based on the skeleton binding data of the target virtual model, a plurality of sets of gesture data corresponding to the plurality of redirecting gestures; and bake the plurality of sets of gesture data to the plurality of key frames to obtain the gesture asset file.
Optionally, the making module 602 is further configured to: in response to target gesture data among the plurality of sets of gesture data meeting a target condition, drive the target virtual controller of the target redirecting gesture corresponding to the target gesture data to move according to a control script, so that the model skeleton bound to the target virtual controller moves synchronously; obtain the skeleton deflection position of the model skeleton; and store the skeleton deflection position to the target key frame corresponding to the target redirecting gesture, where the target condition is that no invalid value exists in the value range of the controller parameters of the target virtual controller.
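The baking loop can be sketched as follows; all four callables are hypothetical stand-ins for DCC/engine operations rather than a real API:

    def bake_gesture_asset(gestures, has_invalid_values, drive_controller, read_bone_deflections):
        """For each redirecting gesture whose controller parameters contain no
        invalid value (the target condition), drive its virtual controller per
        the control script so the bound model skeleton follows, then store the
        resulting bone deflection positions on that gesture's key frame."""
        keyframes = {}
        for frame, gesture in enumerate(gestures):
            if has_invalid_values(gesture):    # target condition not met: skip
                continue
            drive_controller(gesture)          # bound bones move in sync
            keyframes[frame] = read_bone_deflections()
        return keyframes                       # one key frame per redirecting gesture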
Optionally, the generating module 603 is further configured to: split the gesture asset file to obtain a plurality of gesture animations corresponding to the plurality of redirecting gestures; and drive the plurality of gesture animations to perform animation mixing by using the universal curve animation to generate the target animation.
Optionally, the generating module 603 is further configured to: create a target animation state machine using the plurality of gesture animations and an animation mixing tool; and input the universal curve animation into the target animation state machine to drive the plurality of gesture animations to perform animation mixing and generate the target animation.
Optionally, the generating module 603 is further configured to: import the plurality of gesture animations into the animation mixing tool, where the mixing intensity of the plurality of gesture animations in the animation mixing tool is controlled by a plurality of intensity parameters; set the types of the plurality of gesture animations as superimposed animations in the animation mixing tool; and create the target animation state machine based on the animation mixing tool.
Optionally, the generating module 603 is further configured to: input the universal curve animation into the target animation state machine and determine the plurality of intensity parameters by using the universal curve animation; and drive the plurality of gesture animations to perform animation mixing based on the plurality of intensity parameters to generate the target animation.
Optionally, the generating module 603 is further configured to: for any target redirecting gesture among the plurality of redirecting gestures, use the curve value of the target curve corresponding to the target redirecting gesture as the intensity of the target gesture animation corresponding to that gesture, where the target curve is the animation curve of the target sub-animation corresponding to the target redirecting gesture among the plurality of sub-animations of the universal curve animation.
Optionally, the generating module 603 is further configured to: perform a mixed calculation on the animation data of the plurality of gesture animations based on the plurality of intensity parameters corresponding to the plurality of gesture animations to generate the target animation, where the mixed calculation includes an offset mixed calculation and a rotation mixed calculation.
Optionally, in addition to all of the modules described above, the apparatus for generating an animation includes a migration module 604 (not shown), configured to: derive, based on a first facial expression animation of a first virtual character, curve animations to be multiplexed corresponding to the plurality of redirecting gestures; and generate a first facial expression animation of the target virtual model by using the gesture asset file and the curve animations to be multiplexed.
It should be noted that each of the above modules may be implemented by software or hardware. For the latter, the modules may, for example, all be located in the same processor, or be distributed among different processors in any combination.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above computer-readable storage medium may include, but is not limited to: a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or various other media capable of storing a computer program.
Alternatively, in this embodiment, the above-mentioned computer-readable storage medium may be located in any one of the computer terminals in the computer terminal group in the computer network, or in any one of the mobile terminals in the mobile terminal group.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for performing the steps of:
S1, acquiring universal curve animations corresponding to a plurality of redirection gestures, wherein the universal curve animations are curve animations to be shared among a plurality of virtual models, and are used for representing skeleton deflection degrees shared in the process of executing the plurality of redirection gestures by the plurality of virtual models;
S2, based on skeleton binding data of a target virtual model in the plurality of virtual models, a gesture asset file is made, wherein the gesture asset file comprises a plurality of key frames corresponding to the plurality of redirection gestures, and the plurality of key frames are used for storing skeleton deflection positions in the process that the target virtual model executes the plurality of redirection gestures;
S3, generating a target animation of the target virtual model by utilizing the gesture asset file and the universal curve animation.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the following steps: the universal curve animation includes a plurality of sub-animations corresponding to the plurality of redirection gestures; for any target redirection gesture among the plurality of redirection gestures, obtaining a target sub-animation corresponding to the target redirection gesture.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the following steps: deriving the target sub-animation based on a target animation curve determined by controller parameters of a target virtual controller, where the target virtual controller is a skeleton controller corresponding to the target redirection gesture, the target virtual controller is used to control the skeleton position of the target virtual model to deflect within a target range corresponding to the target redirection gesture, and the value range of the controller parameters is a normalized range corresponding to the target range.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the following steps: invoking, based on the skeleton binding data of the target virtual model, a plurality of sets of gesture data corresponding to the plurality of redirection gestures; and baking the plurality of sets of gesture data to the plurality of key frames to obtain the gesture asset file.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the following steps: in response to target gesture data among the plurality of sets of gesture data meeting a target condition, driving the target virtual controller of the target redirection gesture corresponding to the target gesture data to move according to a control script, so that the model skeleton bound to the target virtual controller moves synchronously; obtaining the skeleton deflection position of the model skeleton; and storing the skeleton deflection position to the target key frame corresponding to the target redirection gesture, where the target condition is that no invalid value exists in the value range of the controller parameters of the target virtual controller.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the following steps: splitting the gesture asset file to obtain a plurality of gesture animations corresponding to the plurality of redirection gestures; and driving the plurality of gesture animations to perform animation mixing by using the universal curve animation to generate the target animation.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the following steps: creating a target animation state machine using the plurality of gesture animations and an animation mixing tool; and inputting the universal curve animation into the target animation state machine to drive the plurality of gesture animations to perform animation mixing and generate the target animation.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the following steps: importing the plurality of gesture animations into the animation mixing tool, where the mixing intensity of the plurality of gesture animations in the animation mixing tool is controlled by a plurality of intensity parameters; setting the types of the plurality of gesture animations as superimposed animations in the animation mixing tool; and creating the target animation state machine based on the animation mixing tool.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the following steps: inputting the universal curve animation into the target animation state machine, and determining the plurality of intensity parameters by using the universal curve animation; and driving the plurality of gesture animations to perform animation mixing based on the plurality of intensity parameters to generate the target animation.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the following steps: for any target redirection gesture among the plurality of redirection gestures, using the curve value of the target curve corresponding to the target redirection gesture as the intensity of the target gesture animation corresponding to that gesture, where the target curve is the animation curve of the target sub-animation corresponding to the target redirection gesture among the plurality of sub-animations of the universal curve animation.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the following steps: performing a mixed calculation on the animation data of the plurality of gesture animations based on the plurality of intensity parameters corresponding to the plurality of gesture animations to generate the target animation, where the mixed calculation includes an offset mixed calculation and a rotation mixed calculation.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the following steps: deriving, based on a first facial expression animation of a first virtual character, curve animations to be multiplexed corresponding to the plurality of redirection gestures; and generating a first facial expression animation of the target virtual model by using the gesture asset file and the curve animations to be multiplexed.
In the computer-readable storage medium of the above embodiment, a technical solution for implementing a method of generating an animation is provided: universal curve animations corresponding to a plurality of redirection gestures are obtained, where the universal curve animation is a curve animation to be shared among a plurality of virtual models and is used to characterize the skeleton deflection degrees shared by the plurality of virtual models in executing the plurality of redirection gestures; a gesture asset file is made based on the skeleton binding data of a target virtual model among the plurality of virtual models, where the gesture asset file includes a plurality of key frames corresponding to the plurality of redirection gestures, the key frames storing the skeleton deflection positions of the target virtual model in executing the plurality of redirection gestures; and a target animation of the target virtual model is generated using the gesture asset file and the universal curve animation. The application thereby achieves the purpose of sharing the universal curve animation among a plurality of virtual models, improving the fidelity of the animation effect shared by the virtual models and reducing the production cost of the shared animation, and thus solves the technical problems in the related art of poor animation fidelity and high production cost in schemes for sharing an animation among a plurality of virtual models.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a computer readable storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present application.
In an exemplary embodiment of the present application, a computer-readable storage medium stores thereon a program product capable of implementing the method described above in this embodiment. In some possible implementations, the various aspects of the embodiments of the application may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the application as described in the "exemplary methods" section of this embodiment, when the program product is run on the terminal device.
A program product for implementing the above-described method according to an embodiment of the present application may employ a portable compact disc read-only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the embodiments of the present application is not limited thereto, and in the embodiments of the present application, the computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Any combination of one or more computer-readable media may be employed by the above program product. The computer-readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the program code embodied on the computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
An embodiment of the application also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1, acquiring universal curve animations corresponding to a plurality of redirection gestures, wherein the universal curve animations are curve animations to be shared among a plurality of virtual models, and are used for representing skeleton deflection degrees shared in the process of executing the plurality of redirection gestures by the plurality of virtual models;
S2, based on skeleton binding data of a target virtual model in the plurality of virtual models, a gesture asset file is made, wherein the gesture asset file comprises a plurality of key frames corresponding to the plurality of redirection gestures, and the plurality of key frames are used for storing skeleton deflection positions in the process that the target virtual model executes the plurality of redirection gestures;
S3, generating a target animation of the target virtual model by utilizing the gesture asset file and the universal curve animation.
Optionally, the above processor may be further configured to perform the following steps by a computer program: the universal curve animation includes a plurality of sub-animations corresponding to the plurality of redirection gestures; for any target redirection gesture among the plurality of redirection gestures, obtaining a target sub-animation corresponding to the target redirection gesture.
Optionally, the above processor may be further configured to perform the following steps by a computer program: deriving the target sub-animation based on a target animation curve determined by controller parameters of a target virtual controller, where the target virtual controller is a skeleton controller corresponding to the target redirection gesture, the target virtual controller is used to control the skeleton position of the target virtual model to deflect within a target range corresponding to the target redirection gesture, and the value range of the controller parameters is a normalized range corresponding to the target range.
Optionally, the above processor may be further configured to perform the following steps by a computer program: invoking, based on the skeleton binding data of the target virtual model, a plurality of sets of gesture data corresponding to the plurality of redirection gestures; and baking the plurality of sets of gesture data to the plurality of key frames to obtain the gesture asset file.
Optionally, the above processor may be further configured to perform the following steps by a computer program: in response to target gesture data among the plurality of sets of gesture data meeting a target condition, driving the target virtual controller of the target redirection gesture corresponding to the target gesture data to move according to a control script, so that the model skeleton bound to the target virtual controller moves synchronously; obtaining the skeleton deflection position of the model skeleton; and storing the skeleton deflection position to the target key frame corresponding to the target redirection gesture, where the target condition is that no invalid value exists in the value range of the controller parameters of the target virtual controller.
Optionally, the above processor may be further configured to perform the following steps by a computer program: splitting the gesture asset file to obtain a plurality of gesture animations corresponding to the plurality of redirection gestures; and driving the plurality of gesture animations to perform animation mixing by using the universal curve animation to generate the target animation.
Optionally, the above processor may be further configured to perform the following steps by a computer program: creating a target animation state machine using the plurality of gesture animations and an animation mixing tool; and inputting the universal curve animation into the target animation state machine to drive the plurality of gesture animations to perform animation mixing and generate the target animation.
Optionally, the above processor may be further configured to perform the following steps by a computer program: importing the plurality of gesture animations into the animation mixing tool, where the mixing intensity of the plurality of gesture animations in the animation mixing tool is controlled by a plurality of intensity parameters; setting the types of the plurality of gesture animations as superimposed animations in the animation mixing tool; and creating the target animation state machine based on the animation mixing tool.
Optionally, the above processor may be further configured to perform the following steps by a computer program: inputting the universal curve animation into the target animation state machine, and determining the plurality of intensity parameters by using the universal curve animation; and driving the plurality of gesture animations to perform animation mixing based on the plurality of intensity parameters to generate the target animation.
Optionally, the above processor may be further configured to perform the following steps by a computer program: for any target redirection gesture among the plurality of redirection gestures, using the curve value of the target curve corresponding to the target redirection gesture as the intensity of the target gesture animation corresponding to that gesture, where the target curve is the animation curve of the target sub-animation corresponding to the target redirection gesture among the plurality of sub-animations of the universal curve animation.
Optionally, the above processor may be further configured to perform the following steps by a computer program: performing a mixed calculation on the animation data of the plurality of gesture animations based on the plurality of intensity parameters corresponding to the plurality of gesture animations to generate the target animation, where the mixed calculation includes an offset mixed calculation and a rotation mixed calculation.
Optionally, the above processor may be further configured to perform the following steps by a computer program: deriving, based on a first facial expression animation of a first virtual character, curve animations to be multiplexed corresponding to the plurality of redirection gestures; and generating a first facial expression animation of the target virtual model by using the gesture asset file and the curve animations to be multiplexed.
In the electronic device of the above embodiment, a technical solution for implementing a method of generating an animation is provided: universal curve animations corresponding to a plurality of redirection gestures are obtained, where the universal curve animation is a curve animation to be shared among a plurality of virtual models and is used to characterize the skeleton deflection degrees shared by the plurality of virtual models in executing the plurality of redirection gestures; a gesture asset file is made based on the skeleton binding data of a target virtual model among the plurality of virtual models, where the gesture asset file includes a plurality of key frames corresponding to the plurality of redirection gestures, the key frames storing the skeleton deflection positions of the target virtual model in executing the plurality of redirection gestures; and a target animation of the target virtual model is generated using the gesture asset file and the universal curve animation. The application thereby achieves the purpose of sharing the universal curve animation among a plurality of virtual models, improving the fidelity of the animation effect shared by the virtual models and reducing the production cost of the shared animation, and thus solves the technical problems in the related art of poor animation fidelity and high production cost in schemes for sharing an animation among a plurality of virtual models.
Fig. 7 is a schematic diagram of an electronic device according to an embodiment of the application. As shown in fig. 7, the electronic device 700 is only an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present application.
As shown in fig. 7, the electronic apparatus 700 is embodied in the form of a general purpose computing device. The components of the electronic device 700 may include, but are not limited to: the at least one processor 710, the at least one memory 720, a bus 730 connecting the various system components including the memory 720 and the processor 710, and a display 740.
Wherein the memory 720 stores program code that can be executed by the processor 710 to cause the processor 710 to perform the steps according to various exemplary embodiments of the present application described in the method section above of the embodiments of the present application.
The memory 720 may include readable media in the form of volatile memory units, such as Random Access Memory (RAM) 7201 and/or cache memory 7202, and may further include Read Only Memory (ROM) 7203, and may also include nonvolatile memory, such as one or more magnetic storage devices, flash memory, or other nonvolatile solid state memory.
In some examples, memory 720 may also include a program/utility 7204 having a set (at least one) of program modules 7205, such program modules 7205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Memory 720 may further include memory located remotely from processor 710, which may be connected to electronic device 700 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Bus 730 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, the processor 710, or a local bus using any of a variety of bus architectures.
The display 740 may be, for example, a touch-screen liquid crystal display (LCD), which may enable a user to interact with a user interface of the electronic device 700.
Optionally, the electronic apparatus 700 may also communicate with one or more external devices 800 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic apparatus 700, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic apparatus 700 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 750. Also, the electronic apparatus 700 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 760. As shown in fig. 7, the network adapter 760 communicates with other modules of the electronic apparatus 700 over the bus 730. It should be appreciated that although not shown in fig. 7, other hardware and/or software modules may be used in connection with the electronic apparatus 700, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, redundant array of independent disks (RAID) systems, tape drives, data backup storage systems, and the like.
The electronic device 700 may further include: a keyboard, a cursor control device (e.g., a mouse), an input/output interface (I/O interface), a network interface, a power supply, and/or a camera.
It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 7 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, the electronic device 700 may also include more or fewer components than shown in fig. 7, or have a different configuration than shown in fig. 7. The memory 720 may be used to store a computer program and corresponding data, such as a computer program and corresponding data corresponding to a method of generating an animation in an embodiment of the present application. The processor 710 executes various functional applications and data processing by executing computer programs stored in the memory 720, i.e., implements the animation generation method described above.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The apparatus embodiments described above are merely exemplary. For example, the division of the units may be a division by logical function, and other division manners are possible in actual implementation; for instance, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If implemented in the form of software functional units and sold or used as stand-alone products, the integrated units may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes media capable of storing program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those of ordinary skill in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also be regarded as falling within the protection scope of the present application.

Claims (15)

1. A method of generating an animation comprising:
obtaining universal curve animations corresponding to a plurality of redirection gestures, wherein the universal curve animations are curve animations to be shared with a plurality of virtual models, and the universal curve animations are used for representing skeleton deflection degrees shared in the process of executing the plurality of redirection gestures by the plurality of virtual models;
Based on skeleton binding data of a target virtual model in the plurality of virtual models, making a gesture asset file, wherein the gesture asset file comprises a plurality of key frames corresponding to the plurality of redirection gestures, and the plurality of key frames are used for storing skeleton deflection positions in the process of executing the plurality of redirection gestures by the target virtual model;
And generating a target animation of the target virtual model by utilizing the gesture asset file and the universal curve animation.
2. The method of claim 1, wherein the universal curve animation comprises a plurality of sub-animations corresponding to the plurality of redirection gestures; and the step of obtaining the universal curve animation comprises the following steps:
for any target redirection gesture in the plurality of redirection gestures, obtaining a target sub-animation corresponding to the target redirection gesture.
3. The method of claim 2, wherein obtaining the target sub-animation comprises:
deriving the target sub-animation based on a target animation curve determined by controller parameters of a target virtual controller, wherein the target virtual controller is a skeleton controller corresponding to the target redirection gesture, the target virtual controller is used for controlling the skeleton position of the target virtual model to deflect within a target range corresponding to the target redirection gesture, and the value range of the controller parameters is a normalized range corresponding to the target range.
4. The method of claim 1, wherein creating the gesture asset file based on the skeletal binding data of the target virtual model of the plurality of virtual models comprises:
based on the skeletal binding data of the target virtual model, invoking multiple sets of gesture data corresponding to the multiple redirected gestures;
baking the plurality of groups of gesture data to the plurality of key frames to obtain the gesture asset file.
5. The method of claim 4, wherein baking the plurality of sets of gesture data to the plurality of keyframes comprises:
in response to target gesture data in the plurality of groups of gesture data meeting a target condition, activating a target virtual controller of a target redirection gesture corresponding to the target gesture data to move according to a control script, so that a model skeleton bound to the target virtual controller moves synchronously to obtain the skeleton deflection position of the model skeleton, and storing the skeleton deflection position into a target key frame corresponding to the target redirection gesture, wherein the target condition is that no invalid value exists in a value range of a controller parameter of the target virtual controller.
6. The method of claim 1, wherein generating a target animation of the target virtual model using the gesture asset file and the generic curve animation comprises:
Splitting the gesture asset file to obtain a plurality of gesture animations corresponding to the plurality of redirection gestures;
driving the plurality of gesture animations to perform animation mixing by using the universal curve animation to generate the target animation.
7. The method of claim 6, wherein driving the plurality of gesture animations to perform animation mixing by using the universal curve animation to generate the target animation comprises:
creating a target animation state machine using the plurality of gesture animations and an animation mixing tool;
inputting the universal curve animation into the target animation state machine, and driving the plurality of gesture animations to perform animation mixing to generate the target animation.
8. The method of claim 7, wherein creating the target animation state machine using the plurality of gesture animations and the animation mixing tool comprises:
Importing the plurality of gesture animations into the animation mixing tool, wherein the mixing intensity of the plurality of gesture animations in the animation mixing tool is controlled by a plurality of intensity parameters;
Setting the types of the plurality of gesture animations as superimposed animations in the animation mixing tool;
creating the target animation state machine based on the animation mixing tool.
9. The method of claim 8, wherein inputting the universal curve animation into the target animation state machine and generating the target animation comprises:
inputting the universal curve animation into the target animation state machine, and determining the plurality of intensity parameters by using the universal curve animation;
driving the plurality of gesture animations to perform animation mixing based on the plurality of intensity parameters to generate the target animation.
10. The method of claim 9, wherein determining the plurality of intensity parameters using the generic curve animation comprises:
for any target redirection gesture in the plurality of redirection gestures, taking a curve value of a target curve corresponding to the target redirection gesture as the intensity of a target gesture animation corresponding to the target redirection gesture, wherein the target curve is an animation curve of a target sub-animation corresponding to the target redirection gesture among a plurality of sub-animations of the universal curve animation.
11. The method of claim 9, wherein driving the plurality of gesture animations based on the plurality of intensity parameters for animation blending to generate the target animation comprises:
performing a mixed calculation on animation data of the plurality of gesture animations based on the plurality of intensity parameters corresponding to the plurality of gesture animations to generate the target animation, wherein the mixed calculation comprises an offset mixed calculation and a rotation mixed calculation.
12. The method according to claim 1, wherein the method further comprises:
deriving curve animations to be multiplexed corresponding to the plurality of redirection gestures based on a first facial expression animation of a first virtual character;
generating a first facial expression animation of the target virtual model by using the gesture asset file and the curve animations to be multiplexed.
13. An apparatus for generating an animation, comprising:
an obtaining module, configured to obtain universal curve animations corresponding to a plurality of redirection gestures, wherein the universal curve animation is a curve animation to be shared among a plurality of virtual models, and the universal curve animation is used for characterizing skeleton deflection degrees shared by the plurality of virtual models in the process of executing the plurality of redirection gestures;
a making module, configured to make a gesture asset file based on skeleton binding data of a target virtual model among the plurality of virtual models, wherein the gesture asset file comprises a plurality of key frames corresponding to the plurality of redirection gestures, and the plurality of key frames are used for storing skeleton deflection positions in the process that the target virtual model executes the plurality of redirection gestures;
a generating module, configured to generate a target animation of the target virtual model by using the gesture asset file and the universal curve animation.
14. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, wherein the computer program is arranged to, when run by a processor, perform the method of generating an animation according to any of claims 1 to 12.
15. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the method of generating an animation as claimed in any of claims 1 to 12.
CN202410303234.7A 2024-03-15 2024-03-15 Method and device for generating animation, storage medium and electronic device Pending CN118079399A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410303234.7A CN118079399A (en) 2024-03-15 2024-03-15 Method and device for generating animation, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410303234.7A CN118079399A (en) 2024-03-15 2024-03-15 Method and device for generating animation, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN118079399A true CN118079399A (en) 2024-05-28

Family

ID=91148737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410303234.7A Pending CN118079399A (en) 2024-03-15 2024-03-15 Method and device for generating animation, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN118079399A (en)

Similar Documents

Publication Publication Date Title
US20200051305A1 (en) Control system for virtual characters
Gillies et al. Comparing and evaluating real time character engines for virtual environments
WO2007130689A2 (en) Character animation framework
US11978145B2 (en) Expression generation for animation object
WO2024120032A1 (en) Virtual object hair processing method and apparatus, device, storage medium, and program product
US11645805B2 (en) Animated faces using texture manipulation
CN118015160A (en) Method and device for generating expression animation, storage medium and electronic device
CN118079399A (en) Method and device for generating animation, storage medium and electronic device
Li et al. Emotion modeling and interaction of NPCS in virtual simulation and games
WO2024183466A1 (en) Virtual character switching method and apparatus, device, and storage medium
TWI814318B (en) Method for training a model using a simulated character for animating a facial expression of a game character and method for generating label values for facial expressions of a game character using three-imensional (3d) image capture
EP4385592A1 (en) Computer-implemented method for controlling a virtual avatar
US20240037884A1 (en) Performance recording method and apparatus in virtual scenario, device, storage medium, and program product
JP2022159519A (en) Component operating method, electronic device, storage medium, and program
US20230412852A1 (en) Live interactive method and apparatus, device, and storage medium
Madsen et al. OpenGL Game Development By Example
US20240331261A1 (en) Bakeless keyframe animation solver
CN117576274A (en) Animation generation method and device, storage medium and electronic device
de Pinho Framework for Developing Interactive 360-Degree Video Adventure Games
Rathore et al. 17 Design of 2D Space and Development Shooter Game and Arcade Game Using Unity
Noisri et al. Designing Avatar System and Integrate to the Metaverse
CN118247404A (en) Model rendering method and device, storage medium and electronic device
CN116645461A (en) Ray tracing adjustment method and device for virtual three-dimensional model and storage medium
CN117934670A (en) Method and device for producing animation, storage medium and electronic device
CN117695663A (en) Editing method and device for game object, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination