CN115779436B - Animation switching method, device, equipment and computer readable storage medium

Animation switching method, device, equipment and computer readable storage medium

Info

Publication number
CN115779436B
CN115779436B
Authority
CN
China
Prior art keywords
animation
candidate
gesture
track
current
Prior art date
Legal status
Active
Application number
CN202310086668.1A
Other languages
Chinese (zh)
Other versions
CN115779436A
Inventor
陈石磊
练钊荣
侯季春
徐滔
胡波
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202310086668.1A
Publication of CN115779436A
Application granted
Publication of CN115779436B
Legal status: Active
Anticipated expiration

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application provides an animation switching method, an animation switching device, computer equipment and a computer readable storage medium; the method comprises the following steps: acquiring the current state, position information and movement information of the virtual character; determining a target state of the virtual character when it is determined that a state switching opportunity is reached based on the current state, the position information and the movement information; determining a target animation to be played from an animation database by utilizing a motion matching node corresponding to the target state; and outputting the target animation. By the method and the device, the complexity of the motion animation system can be simplified, fine tuning and optimization are realized, and the animation quality is improved.

Description

Animation switching method, device, equipment and computer readable storage medium
Technical Field
The present disclosure relates to data processing technology, and in particular, to an animation switching method, apparatus, device, and computer readable storage medium.
Background
The motion animation system is a core component of a virtual character animation system and is responsible for generating the movement animation of a character in a virtual scene. Existing motion animation systems generally follow one of two technical approaches: animation state machines and Motion Matching. An animation-state-machine system consists of at least two state machines that transition between states according to conditions, and an animation classifier inside each state machine selects and plays the animation. Switching based on an animation state machine requires dedicated preprocessing and curve computation of the animations according to the particular animation classifier, together with a runtime system; the workflow is complex, the state transition logic and animation classifiers are prone to faults (bugs), and debugging and maintenance are difficult. Motion Matching no longer distinguishes motion states manually, but developers cannot conveniently intervene in the animation switching and playback logic, so controllability is weak; moreover, the whole motion animation system shares a single set of parameters, adjusting any parameter affects the entire search algorithm, and fine tuning is therefore impossible.
Disclosure of Invention
The embodiment of the application provides an animation switching method, an animation switching device and a computer readable storage medium, which can simplify the complexity of a motion animation system, realize fine tuning and improve the animation quality.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an animation switching method, which comprises the following steps:
acquiring the current state, position information and movement information of the virtual character;
determining a target state of the virtual character when it is determined that a state switching opportunity is reached based on the current state, the position information and the movement information;
determining a target animation to be played from an animation database by utilizing a motion matching node corresponding to the target state;
and outputting the target animation.
The embodiment of the application provides an animation switching device, which comprises:
the first acquisition module is used for acquiring the current state, the position information and the movement information of the virtual character;
a first determining module for determining a target state of the virtual character when it is determined that a state switching occasion is reached based on the current state, the position information and the movement information;
the second determining module is used for determining a target animation to be played from the animation database by utilizing the motion matching node corresponding to the target state;
And the playing module is used for outputting the target animation.
In some embodiments, the second determining module is further configured to:
acquiring a motion matching node corresponding to the target state and a target animation database corresponding to the target state;
acquiring a target cost function corresponding to the motion matching node, and a current gesture characteristic and a current track characteristic of a current playing animation;
and determining the target animation to be played from the target animation database based on the target cost function, the current gesture feature and the current track feature.
In some embodiments, the gesture feature comprises at least one gesture feature component, the trajectory feature comprises at least one trajectory feature component, the second determining module is further configured to:
acquiring gesture weights corresponding to the gesture feature components and track weights corresponding to the track feature components based on the target cost function;
acquiring candidate gesture features and candidate track features of each first candidate animation in the target animation database;
determining the similarity between the current playing animation and each first candidate animation based on the gesture weight corresponding to each gesture feature component, the track weight corresponding to each track feature component, the current gesture feature, the current track feature, and the candidate gesture feature and the candidate track feature of each first candidate animation;
And determining a target animation to be played from the first candidate animations based on the similarity between the currently played animation and the first candidate animations.
In some embodiments, the second determining module is further configured to:
determining the similarity corresponding to each gesture feature component in the current gesture feature and each candidate gesture feature;
determining the similarity corresponding to each track feature component in the current track feature and each candidate track feature;
carrying out weighted summation on the gesture weights corresponding to the gesture feature components and the similarity corresponding to the gesture feature components to obtain the gesture similarity between the current gesture feature and each candidate gesture feature;
carrying out weighted summation on the track weights corresponding to the track feature components and the similarity corresponding to the track feature components to obtain the track similarity between the current track feature and each candidate track feature;
and determining the sum of the similarity of each gesture and the similarity of each corresponding track as the similarity between the currently played animation and each first candidate animation.
In some embodiments, the second determining module is further configured to:
Determining a first highest similarity from the similarities between the currently playing animation and the respective first candidate animations;
and when the first highest similarity is larger than a similarity threshold, determining the first candidate animation corresponding to the first highest similarity as the target animation to be played.
In some embodiments, the apparatus further comprises:
the second acquisition module is used for acquiring each second candidate animation in the current animation database corresponding to the current state;
a third determining module, configured to determine a similarity between the currently playing animation and each second candidate animation in the current animation database;
a fourth determining module, configured to determine a second highest similarity from the similarities between the currently playing animation and the respective second candidate animations;
and a fifth determining module, configured to determine the second highest similarity as a similarity threshold.
In some embodiments, the apparatus further comprises:
the third acquisition module is used for acquiring a second candidate animation corresponding to the second highest similarity from the current animation database when the first highest similarity is smaller than or equal to the similarity threshold;
And a sixth determining module, configured to determine a second candidate animation corresponding to the second highest similarity as a target animation to be played.
In some embodiments, the apparatus further comprises:
a seventh determining module configured to determine a similarity between the currently played animation and each of the second candidate animations in the current animation database when it is determined that the state switching opportunity is not reached based on the current state, the position information and the movement information;
an eighth determining module, configured to determine a third highest similarity from the similarities between the second candidate animations in the current animation database;
and a ninth determining module, configured to determine the second candidate animation corresponding to the third highest similarity as a target animation to be played.
In some embodiments, the apparatus further comprises:
the fourth acquisition module is used for acquiring a plurality of reference animations with different preset speeds;
the fusion module is used for carrying out fusion processing on at least two reference animations with different preset speeds to obtain a plurality of fused animations;
and the data adding module is used for adding the reference animation and the fused animation to an animation database.
In some embodiments, the apparatus further comprises:
A fifth acquisition module, configured to acquire candidate animations in an animation database corresponding to each state;
and the feature extraction module is used for extracting the gesture features and the track features of the candidate animations in the animation database corresponding to each state.
In some embodiments, the apparatus further comprises:
a sixth obtaining module, configured to obtain a preset cost function, where a gesture weight corresponding to each gesture feature component and a track weight corresponding to each track feature component in the preset cost function are preset values;
the weight setting module is used for respectively setting the gesture weights corresponding to the gesture feature components and the track weights corresponding to the track feature components in the cost function based on the configuration requirement information of each state to obtain the cost function corresponding to each state.
An embodiment of the present application provides a computer device, including:
a memory for storing computer executable instructions;
and the processor is used for realizing the method provided by the embodiment of the application when executing the computer executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium, which stores computer executable instructions for implementing the animation switching method provided by the embodiment of the application when being executed by a processor.
The embodiment of the application provides a computer program product, which comprises a computer program or computer executable instructions, and the computer program or the computer executable instructions realize the animation switching method provided by the embodiment of the application when being executed by a processor.
The embodiment of the application has the following beneficial effects:
in the game playing process, the current state, the position information and the movement information of the virtual character are first obtained. When it is determined, based on the current state, the position information and the movement information, that the state switching opportunity is reached, the target state of the virtual character is determined, where the target state is different from the current state; a target animation to be played is then determined from an animation database by utilizing the motion matching node corresponding to the target state, and the target animation is output.
Drawings
FIG. 1 is a schematic diagram of a gaming system architecture provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a server 400 provided in an embodiment of the present application;
FIG. 3A is a schematic flow chart of an implementation of the animation switching method according to the embodiment of the present application;
fig. 3B is a schematic flowchart of an implementation of determining a target animation to be played according to an embodiment of the present application;
FIG. 4A is a schematic diagram of an implementation flow of determining a target animation based on a target cost function according to an embodiment of the present application;
FIG. 4B is a schematic diagram of an implementation flow for determining the similarity between the currently playing animation and each first candidate animation according to an embodiment of the present application;
FIG. 5A is a schematic diagram of an implementation flow for determining a target animation from a plurality of candidate animations based on respective similarities according to an embodiment of the present application;
FIG. 5B is a flowchart illustrating another implementation of the animation switching method according to the embodiment of the present application;
FIG. 6 is a schematic diagram of an animation system architecture of a conventional animation state machine according to the related art;
FIG. 7 is a schematic diagram of an animation system according to an embodiment of the present disclosure;
fig. 8 is a schematic diagram of an implementation flow of animation production and animation blueprints according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings, and the described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without making any inventive effort are within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third" and the like are merely used to distinguish similar objects and do not represent a specific ordering of the objects, it being understood that the "first", "second", "third" may be interchanged with a specific order or sequence, as permitted, to enable embodiments of the application described herein to be practiced otherwise than as illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
Before further describing embodiments of the present application in detail, the terms and expressions that are referred to in the embodiments of the present application are described, and are suitable for the following explanation.
1) A state refers to the action a virtual character is currently in, expressed by playing a specific motion animation, such as standby, start, stop, turn back, walk, run, attack, jump, and so on. Typically, a virtual character has multiple states and switches between states in a logical order to perform different actions.
2) An animation state machine is a manager that manages and controls the switching of the virtual character between states.
3) Motion Matching is a technique that preprocesses animation data to obtain information such as gestures (poses) and motion tracks, and, while the game is running, uses the player's input and the current gesture as target data to determine the optimal animation from an animation feature database.
The embodiment of the application provides an animation switching method, an animation switching device, animation switching equipment and a computer readable storage medium, which can solve the problems that the traditional animation state machine is complex in workflow, difficult to debug and maintain and the motion matching technology cannot be fine-tuned and optimized. The following describes exemplary applications of the computer device provided in the embodiments of the present application, where the device provided in the embodiments of the present application may be implemented as a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device), and other various types of user terminals, and may also be implemented as a server. In the following, an exemplary application when the device is implemented as a server will be described.
Referring to fig. 1, fig. 1 is a schematic architecture diagram of a game system 100 provided in an embodiment of the present application, as shown in fig. 1, the game system 100 includes a server 400, a network 300, and a terminal 200, where the terminal 200 is connected to the server 400 through the network 300, and the network 300 may be a wide area network or a local area network, or a combination of the two. The network architecture is suitable for an application mode that completes virtual scene computation depending on the computing power of the server 400 and outputs the virtual scene at the terminal 200.
Taking the formation of the visual perception of the virtual scene as an example, the server 400 computes the display data related to the virtual scene and sends it to the terminal 200; the terminal 200, relying on graphics computation hardware, finishes loading, parsing and rendering the computed display data, and, relying on graphics output hardware, outputs the virtual scene to form the visual perception. For example, a two-dimensional video frame can be presented on the display screen of a smart phone, or a video frame realizing a three-dimensional display effect can be projected on the lenses of augmented reality/virtual reality glasses. For perception of the virtual scene in other forms, it will be appreciated that auditory perception may be formed by means of the corresponding hardware output of the terminal, e.g. speaker output, and tactile perception may be formed using vibrator output, etc.
As an example, the terminal 200 is operated with a client 210 (e.g., a web-version game application), and plays game interactions with other users by connecting a game server (i.e., the server 400), and the terminal 200 outputs a virtual scene of the client 210, where the virtual scene may include multiple virtual characters, such as a host and a friend virtual character, and may also include an enemy virtual character. In the game play process, the server 400 acquires operation information of a user through the client 210 in the terminal 200, acquires position information and movement information of the virtual character based on the operation information, acquires a current state of the virtual character, determines a target state of the virtual character when determining that a state switching time is reached based on the current state, the position information and the movement information, determines a target animation to be played from a motion database by using a motion matching node corresponding to the target state, and outputs the target animation. When it is determined that the state switching time is not reached, determining a target animation to be played from an animation database corresponding to the current state of the virtual character, and outputting the target animation, wherein in the network architecture shown in fig. 1, the server 400 outputs the target animation, that is, sends the target animation to the terminal 200, so as to play and display in the client 210 of the terminal 200. In the embodiment of the application, the different states are correspondingly provided with the independent motion matching nodes, so that the complexity of the motion animation system can be simplified, the motion animation system is light and reusable, each independent motion matching node is provided with the independently configured motion matching algorithm parameters, and the motion matching algorithm parameters can comprise the track characteristic weight and the gesture characteristic weight, so that the independent motion matching nodes corresponding to each state can be subjected to fine tuning, and the smoothness and the naturalness of animation playing are improved; in addition, each independent motion matching node also has an independent animation database, so that the controllability of animation switching can be improved.
The server 400 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, basic cloud computing services such as big data and artificial intelligence platforms, and the like. The terminal 200 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a car terminal, etc. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of the present application.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a server 400 provided in an embodiment of the present application, and the server 400 shown in fig. 2 includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The various components in server 400 are coupled together by bus system 440. It is understood that the bus system 440 is used to enable connected communication between these components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration the various buses are labeled in fig. 2 as bus system 440.
The processor 410 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor (for example a microprocessor or any conventional processor), a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable presentation of the media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
Memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 450 optionally includes one or more storage devices physically remote from processor 410.
Memory 450 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a random access Memory (RAM, random Access Memory). The memory 450 described in the embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 451 including system programs, e.g., framework layer, core library layer, driver layer, etc., for handling various basic system services and performing hardware-related tasks, for implementing various basic services and handling hardware-based tasks;
network communication module 452 for reaching other computer devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 include: bluetooth, wireless compatibility authentication (WiFi), and universal serial bus (USB, universal Serial Bus), etc.;
a presentation module 453 for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 431 (e.g., a display screen, speakers, etc.) associated with the user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the apparatus provided in the embodiments of the present application may be implemented in software, and fig. 2 shows an animation switching apparatus 455 stored in a memory 450, which may be software in the form of a program, a plug-in, or the like, including the following software modules: the first acquisition module 4551, the first determination module 4552, the second determination module 4553 and the play module 4554 are logical, and thus may be arbitrarily combined or further split according to the functions implemented. The functions of the respective modules will be described hereinafter.
In other embodiments, the apparatus provided by the embodiments of the present application may be implemented in hardware, and by way of example, the apparatus provided by the embodiments of the present application may be a processor in the form of a hardware decoding processor that is programmed to perform the animation switching method provided by the embodiments of the present application, e.g., the processor in the form of a hardware decoding processor may employ one or more application specific integrated circuits (ASIC, application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, programmable Logic Device), complex programmable logic devices (CPLD, complex Programmable Logic Device), field programmable gate arrays (FPGA, field-Programmable Gate Array), or other electronic components.
In some embodiments, the terminal or the server may implement the animation switching method provided in the embodiments of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; a local (Native) Application program (APP), i.e. a program that needs to be installed in an operating system to run, such as a game APP or an instant messaging APP; the method can also be an applet, namely a program which can be run only by being downloaded into a browser environment; but also an applet that can be embedded in any APP. In general, the computer programs described above may be any form of application, module or plug-in.
The animation switching method provided by the embodiment of the present application will be described with reference to an exemplary application and implementation of the server provided by the embodiment of the present application.
Next, the animation switching method provided in the embodiment of the present application is described, and as described above, the computer device implementing the animation switching method in the embodiment of the present application may be a server, a terminal, or a combination of both. The execution subject of the respective steps will not be repeated hereinafter.
Referring to fig. 3A, fig. 3A is a schematic flowchart of an implementation of the animation switching method according to the embodiment of the present application, and will be described with reference to the steps shown in fig. 3A.
In step 101, the current state, position information, and movement information of the virtual character are acquired.
In some embodiments, the action that the virtual character is in, expressed by playing a particular motion animation, such as standby, move, turn in place, attack, and so on, is referred to as a state. Typically, a virtual character has multiple states and can switch between them in a logical order. When the virtual character is implemented, a plurality of states and the transition conditions for switching between them are preset for the virtual character; when a transition condition is reached, the states can switch between one another, thereby forming an animation state machine. The character may exhibit different behaviors when the state changes. Many basic actions can be implemented by an animation state machine, such as running, jumping, going up, swimming, etc. In order to enrich the expression of moving animations such as walking and running, a moving animation is refined into sub-animation states such as starting, stopping, cycling, and turning back. An animation is an expression of character behavior: by recording and playing back the actions of a virtual object over a period of time, like a movie or a cartoon, a complete animation is obtained. In a game, a plurality of animation segments need to be produced separately by art staff for each virtual character and imported into the game engine to be blended and switched, finally realizing the in-game effect. For example, a character may have a running animation while running and a jumping animation while jumping, and a battle animation may be played during battle; these are all different animation segments.
During game play, the virtual character has a corresponding state at each moment, for example, when the game starts, the state of the virtual character is an initial Idle state, and during the game, the state of the virtual character may be a walking state, a running state, an in-place turning state, or the like. During game play, the player may control the movement of the virtual character through the input device of the terminal, thereby changing the position and speed of the virtual character. The current state of the virtual character may be acquired from state information of the virtual character, and the position information and the movement information of the virtual character may be determined based on operation information of the player. The position information of the virtual character includes coordinates of the virtual character in the virtual scene, and the movement information of the virtual character includes at least a movement speed, a movement direction, and an acceleration of the virtual character.
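For illustration only, the state, position information and movement information gathered in step 101 can be pictured as a single snapshot structure, as in the following C++ sketch; the type and field names (CharacterSnapshot, Vec3 and so on) are assumptions made for this sketch and do not come from the patent itself.

```cpp
// Minimal sketch of the data gathered in step 101; all names are illustrative.
struct Vec3 { float x{}, y{}, z{}; };            // coordinates in the virtual scene

enum class CharacterState { Idle, Walk, Run, TurnInPlace, Jump, Attack };

struct MovementInfo {
    Vec3  velocity;       // movement direction and speed combined as a vector
    Vec3  acceleration;   // current acceleration
    float speed{};        // scalar movement speed
};

struct CharacterSnapshot {
    CharacterState state{CharacterState::Idle};  // current state of the virtual character
    Vec3           position;                     // position information
    MovementInfo   movement;                     // movement information
};
```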
In step 102, when it is determined that a state switching opportunity is reached based on the current state, the position information, and the movement information, a target state of the virtual character is determined.
In some embodiments, before step 102, it is required to determine whether a state switching opportunity is reached based on the current state, the location information and the movement information, and when implementing, at least one transferable state corresponding to the current state is first acquired, and transfer conditions corresponding to each transferable state are acquired, then it is determined whether at least one transfer condition is satisfied based on the location information and the movement information of the virtual character, if it is determined that at least one transfer condition is satisfied, it is determined that the switching opportunity is reached, and the transferable state corresponding to the transfer condition is determined as a target state.
For example, suppose the current state of the virtual character is the walking state, and the transferable states corresponding to the walking state include the idle state (i.e., a stationary state) and the running state. When the acceleration of the virtual character is positive and its speed is greater than a first speed threshold, it is determined that the transfer condition of the running state is reached, that the state transfer opportunity is reached, and that the target state is the running state; when the acceleration of the virtual character is zero and its speed is smaller than a second speed threshold, it is determined that the transfer condition of the idle state is reached, that the state transfer opportunity is reached, and that the target state is the idle (stationary) state.
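Under the same assumptions, the walking-state example above could be checked as in the sketch below; the concrete threshold values and the helper name CheckWalkTransition are illustrative, and the CharacterSnapshot and CharacterState types are reused from the previous sketch.

```cpp
#include <cmath>
#include <optional>

// Checks the transfer conditions of the walking state described above and returns
// the target state when a condition is met, or std::nullopt when the state
// switching opportunity has not been reached. Threshold values are illustrative.
std::optional<CharacterState> CheckWalkTransition(const CharacterSnapshot& c,
                                                  float firstSpeedThreshold  = 3.0f,
                                                  float secondSpeedThreshold = 0.1f) {
    const Vec3& a = c.movement.acceleration;
    const float accel = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    if (accel > 0.0f && c.movement.speed > firstSpeedThreshold)
        return CharacterState::Run;    // transfer condition of the running state
    if (accel == 0.0f && c.movement.speed < secondSpeedThreshold)
        return CharacterState::Idle;   // transfer condition of the idle (stationary) state
    return std::nullopt;               // no transfer condition satisfied
}
```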
In some embodiments, whether the state switching opportunity is reached may be determined once every preset time interval. Only when it is determined that the state switching opportunity is reached is the target state determined, the target animation to be played determined from the animation database, and the target animation played; when it is determined that the state switching opportunity is not reached, the target animation to be played is determined from the current animation database corresponding to the current state. In this way, frequent switching can be avoided, it can be ensured that the played target animation matches the state of the virtual character, and the smoothness and naturalness of animation playing are further improved.
In step 103, determining the target animation to be played from the animation database by using the motion matching node corresponding to the target state.
In the embodiment of the application, each state corresponds to an independent motion matching node, each state corresponds to an animation database, and parameters of cost functions adopted by the motion matching nodes corresponding to different states in animation selection are different.
In some embodiments, step 103 shown in fig. 3A may be implemented by steps 1031 to 1033 shown in fig. 3B, as described below in connection with fig. 3B.
In step 1031, a motion matching node corresponding to the target state and a target animation database corresponding to the target state are obtained.
In some embodiments, the motion matching node corresponding to the target state and the target animation database corresponding to the target state may be obtained based on the identification of the target state. The target animation database stores a plurality of candidate animations corresponding to the target state; the state of the virtual character in each of these candidate animations is the target state, but the actions performed by the virtual character, or the details of those actions, differ. For example, if the target state is a moving state, the candidate animations it contains may include an animation of the virtual character running with both arms swinging, an animation of the virtual character running while holding a virtual prop (such as a submachine gun), and an animation of the virtual character jumping after running.
In step 1032, the target cost function corresponding to the motion matching node, and the current gesture feature and current track feature of the currently playing animation, are obtained.
In some embodiments, each state corresponds to an independent motion matching node, each motion matching node corresponds to a cost function, and the similarity between two animations can be determined using the cost function corresponding to the motion matching node. The parameters of the cost function may include gesture weights and track weights, which can be set independently according to the requirements of different states: if the configuration requirement information of a state is that the gestures between animations should be consistent and natural, the gesture weights should be set higher than the track weights; if the configuration requirement information of a state is that the motion track should be accurately matched, the gesture weights should be set smaller than the track weights. In this way, the similarity between two animations determined using the cost function meets the requirements of the state, thereby improving the fineness of the animation classification capability.
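As a hedged illustration of such per-state configuration, the sketch below gives each state its own set of gesture and track weights; the CostFunctionWeights type, the component counts and the numeric values are assumptions, and CharacterState is reused from the earlier sketch.

```cpp
#include <map>
#include <vector>

// Weights of one motion matching node's cost function; all values are illustrative.
struct CostFunctionWeights {
    std::vector<float> gestureWeights;  // one weight per gesture feature component
    std::vector<float> trackWeights;    // one weight per track feature component
};

// Each state owns an independently configured cost function.
std::map<CharacterState, CostFunctionWeights> BuildCostFunctions() {
    std::map<CharacterState, CostFunctionWeights> cfg;
    // Idle: gestures must stay consistent and natural -> gesture weights dominate.
    cfg[CharacterState::Idle] = { {1.0f, 1.0f, 0.8f}, {0.3f, 0.3f} };
    // Run: the motion track must be matched accurately -> track weights dominate.
    cfg[CharacterState::Run]  = { {0.4f, 0.4f, 0.3f}, {1.0f, 1.2f} };
    return cfg;
}
```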
The current gesture feature and the current track feature of the currently playing animation refer to the current gesture feature and the current track feature of the virtual character in the currently playing animation. The gesture features and the track features of the virtual character can be flexibly configured. For example, the gesture features of the virtual character may include the positions and velocities of one or more bones of the virtual character, such as the position and velocity of the foot bones, the position and velocity of the chest bone, and the position and velocity of the hip bone. The track features of the virtual character include the track of the virtual character (including position and orientation) over a past period of time and the track of the virtual character over a future period of time.
In the embodiment of the present application, assuming that the virtual character is a biped standing upright, the current gesture features of the virtual character include: the current position of the left foot, the current position of the right foot, the current velocity of the left foot, the current velocity of the right foot, and the current velocity of the hips of the virtual character. The current track features include: the position and orientation of the virtual character at the current moment, the position and orientation of the virtual character at three sampling time points within the past one second, and the position and orientation of the virtual character at three sampling time points within the future one second. The three sampling time points within the past one second may be 1 second, 0.66 seconds and 0.33 seconds before the current time; the three sampling time points within the future one second may be 0.33 seconds, 0.66 seconds and 1 second after the current time.
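A possible in-memory layout of these features, following the biped example above, is sketched below; the struct and field names are assumptions, and Vec3 is reused from the earlier sketch.

```cpp
#include <array>

struct TrajectorySample {
    Vec3 position;   // position of the virtual character at the sample time
    Vec3 facing;     // orientation of the virtual character at the sample time
};

// Gesture feature of the biped example: foot positions/velocities and hip velocity.
struct GestureFeature {
    Vec3 leftFootPos, rightFootPos;   // current positions of the left and right foot
    Vec3 leftFootVel, rightFootVel;   // current velocities of the left and right foot
    Vec3 hipVel;                      // current velocity of the hips
};

// Track feature: the current sample, three past samples (1.0 s, 0.66 s, 0.33 s before
// the current time) and three future samples (0.33 s, 0.66 s, 1.0 s after it).
struct TrackFeature {
    TrajectorySample current;
    std::array<TrajectorySample, 3> past;
    std::array<TrajectorySample, 3> future;
};
```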
In step 1033, a target animation to be played is determined from the target animation database based on the target cost function, the current pose feature, and the current trajectory feature.
The gesture feature comprises at least one gesture feature component and the track feature comprises at least one track feature component. In some embodiments, step 1033 shown in fig. 3B may be implemented by steps 331 through 334 shown in fig. 4A, each of which is described below in conjunction with fig. 4A.
In step 331, the gesture weights corresponding to the gesture feature components and the track weights corresponding to the track feature components are obtained based on the objective cost function.
In the objective cost function, different gesture feature components may correspond to the same gesture weight, or may correspond to different gesture weights. Likewise, different trajectory feature components may correspond to the same trajectory weight, or may correspond to different trajectory weights. In the embodiment of the application, the gesture weights corresponding to different gesture feature components and the track weights of different track feature components can be independently set, so that different target cost functions can be obtained based on actual state requirements, and the target cost functions can be further refined when being used for animation sorting.
When different gesture feature components have the same gesture weight, the influence of the different gesture feature components on the gesture similarity calculation between the two animations is the same, and when the different gesture feature components have different gesture weights, the influence of the different gesture feature components on the gesture similarity calculation between the two animations is different, and the larger the gesture weight is, the larger the influence on the gesture similarity is. Similarly, when different track feature components have the same track weight, it is explained that the influence of the different track feature components on the track similarity calculation between two animations is the same, and when different track feature components have different track weights, it is explained that the influence of the different track feature components on the track similarity calculation between two animations is different, and the larger the track weight is, the larger the influence on the track similarity is.
Illustratively, assume that the gesture feature has two gesture feature components, F_{P1} and F_{P2}, where the gesture weight corresponding to F_{P1} is W_{P1} and the gesture weight corresponding to F_{P2} is W_{P2}; and assume that the track feature has two track feature components, F_{T1} and F_{T2}, where the track weight corresponding to F_{T1} is W_{T1} and the track weight corresponding to F_{T2} is W_{T2}. The target cost function may then be represented by equation (1-1):

Cost(a, b) = W_{P1} \cdot S(F_{P1}^{a}, F_{P1}^{b}) + W_{P2} \cdot S(F_{P2}^{a}, F_{P2}^{b}) + W_{T1} \cdot S(F_{T1}^{a}, F_{T1}^{b}) + W_{T2} \cdot S(F_{T2}^{a}, F_{T2}^{b})    (1-1)

where S(F_{P1}^{a}, F_{P1}^{b}) is the similarity between the gesture feature component F_{P1} of animation a and the gesture feature component F_{P1} of animation b, S(F_{P2}^{a}, F_{P2}^{b}) is the similarity between the gesture feature component F_{P2} of animation a and the gesture feature component F_{P2} of animation b, S(F_{T1}^{a}, F_{T1}^{b}) is the similarity between the track feature component F_{T1} of animation a and the track feature component F_{T1} of animation b, and S(F_{T2}^{a}, F_{T2}^{b}) is the similarity between the track feature component F_{T2} of animation a and the track feature component F_{T2} of animation b.
In step 332, candidate pose features and candidate trajectory features of each first candidate animation in the target animation database are obtained.
In some embodiments, candidate pose features and candidate trajectory features of each first candidate animation in the target animation database may be feature extracted using an animation import tool.
In step 333, the similarity between the currently playing animation and each first candidate animation is determined based on the gesture weights corresponding to the respective gesture feature components, the track weights corresponding to the respective track feature components, the current gesture feature, the current track feature, and the candidate gesture feature and candidate track feature of each first candidate animation.
In some embodiments, step 333 illustrated in fig. 4A may be implemented by steps 3331 through 3335 in fig. 4B, as described below in connection with fig. 4B.
In step 3331, the similarity corresponding to each gesture feature component in the current gesture feature and each candidate gesture feature is determined.
In some embodiments, step 3331 may, when implemented, determine the Hamming distance, the Euclidean distance, or the cosine similarity between each gesture feature component in the current gesture feature and the corresponding gesture feature component in each candidate gesture feature, and so on. The embodiment of the present application does not limit the specific similarity algorithm.
In step 3332, the similarity of the current track feature and each of the track feature components in each of the candidate track features is determined.
Similar to step 3331, step 3332 may, when implemented, determine the Hamming distance, the Euclidean distance, or the cosine similarity between each track feature component in the current track feature and the corresponding track feature component in each candidate track feature, and so on.
In step 3333, the gesture weights corresponding to the gesture feature components and the similarities corresponding to the gesture feature components are weighted and summed to obtain the gesture similarity between the current gesture feature and each candidate gesture feature.
Assume that the current gesture feature is F_{Pc}, F_{Pc} includes the gesture feature components F_{P1c} and F_{P2c}, the candidate gesture feature is F_{Ps}, and F_{Ps} includes the gesture feature components F_{P1s} and F_{P2s}. The gesture similarity between the current gesture feature and the candidate gesture feature is then

S_P = W_{P1} \cdot S(F_{P1c}, F_{P1s}) + W_{P2} \cdot S(F_{P2c}, F_{P2s}).
In step 3334, the track weights corresponding to the track feature components and the similarities corresponding to the track feature components are weighted and summed to obtain the track similarity between the current track feature and each candidate track feature.
Assume that the current track feature is F_{Tc}, F_{Tc} includes the track feature components F_{T1c} and F_{T2c}, the candidate track feature is F_{Ts}, and F_{Ts} includes the track feature components F_{T1s} and F_{T2s}. The track similarity between the current track feature and the candidate track feature is then

S_T = W_{T1} \cdot S(F_{T1c}, F_{T1s}) + W_{T2} \cdot S(F_{T2c}, F_{T2s}).
In step 3335, the sum of the respective gesture similarities and the respective corresponding track similarities is determined as the similarity between the currently playing animation and the respective first candidate animations.
Continuing the above example, the similarity between the currently playing animation c and the candidate animation s is

S(c, s) = S_P + S_T = W_{P1} \cdot S(F_{P1c}, F_{P1s}) + W_{P2} \cdot S(F_{P2c}, F_{P2s}) + W_{T1} \cdot S(F_{T1c}, F_{T1s}) + W_{T2} \cdot S(F_{T2c}, F_{T2s}).
In steps 3331 to 3335, when the similarity between the currently playing animation and a candidate animation is determined, increasing the gesture weights makes the searched animation closer to the original gesture and therefore more consistent and natural, while increasing the track weights makes the searched animation match the motion track more exactly. Because the gesture weights and the track weights take effect simultaneously, the similarity of both the gesture and the track can be evaluated at the same time, so the animation classification capability is finer than that of a traditional animation classification node.
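A minimal sketch of this weighted-sum similarity (equation (1-1) and steps 3331 to 3335) is given below. Each feature component is flattened into a vector of floats, and the per-component similarity is taken as the negative Euclidean distance so that a larger value means more similar; this concrete choice is an assumption, since the embodiment leaves the similarity algorithm open. CostFunctionWeights is reused from the earlier sketch.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One feature component flattened into a vector of floats.
using FeatureComponent = std::vector<float>;

// Per-component similarity: negative Euclidean distance, so larger means more similar.
float ComponentSimilarity(const FeatureComponent& a, const FeatureComponent& b) {
    float sq = 0.0f;
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i) {
        const float d = a[i] - b[i];
        sq += d * d;
    }
    return -std::sqrt(sq);
}

// Weighted sum over gesture and track components (equation (1-1), steps 3331 to 3335).
// Assumes the weight vectors are at least as long as the component lists.
float AnimationSimilarity(const std::vector<FeatureComponent>& curGesture,
                          const std::vector<FeatureComponent>& candGesture,
                          const std::vector<FeatureComponent>& curTrack,
                          const std::vector<FeatureComponent>& candTrack,
                          const CostFunctionWeights& w) {
    float gestureSim = 0.0f, trackSim = 0.0f;
    for (std::size_t i = 0; i < curGesture.size(); ++i)   // step 3333: gesture similarity
        gestureSim += w.gestureWeights[i] * ComponentSimilarity(curGesture[i], candGesture[i]);
    for (std::size_t i = 0; i < curTrack.size(); ++i)     // step 3334: track similarity
        trackSim += w.trackWeights[i] * ComponentSimilarity(curTrack[i], candTrack[i]);
    return gestureSim + trackSim;                         // step 3335: their sum
}
```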
With continued reference to fig. 4A, the description continues with step 334.
In step 334, a target animation to be played is determined from the respective first candidate animations based on the similarity between the currently playing animation and the respective first candidate animations.
In some embodiments, step 334 shown in fig. 4A may be implemented using steps 3341 through 3345 shown in fig. 5A, as described below in connection with fig. 5A.
In step 3341, a first highest similarity is determined from the similarities between the currently playing animation and the respective first candidate animations.
In some embodiments, the respective similarities may be sorted to determine the first highest similarity. In implementation, algorithms such as brute-force traversal, the K-Dimensional Tree (K-D Tree), the Ball Tree, or the Axis-Aligned Bounding Box Tree (AABB Tree) may be used to sort the similarities. In the embodiment of the present application, the Ball Tree algorithm, which has the highest performance, is selected to sort the similarities and determine the first highest similarity.
In step 3342, it is determined whether the first highest similarity is greater than a similarity threshold.
When the first highest similarity is greater than the similarity threshold, step 3343 is entered; when the first highest similarity is less than or equal to the similarity threshold, step 3344 is entered.
In some embodiments, the similarity threshold may be preset, for example, may be 0, or may be a preset value greater than 0; the similarity threshold may also be determined based on the similarity of the second candidate animation in the current animation database corresponding to the current state to the currently playing animation. When the method is implemented, each second candidate animation in the current animation database corresponding to the current state is obtained, the similarity between the currently played animation and each second candidate animation in the current animation database is determined, the second highest similarity is determined from the similarity between the currently played animation and each second candidate animation, and the second highest similarity is determined to be a similarity threshold.
It should be noted that, if the second highest similarity is determined as the similarity threshold, the cost functions used in determining the similarity between the currently playing animation and each of the second candidate animations in the current animation database and in determining the similarity between the currently playing animation and each of the first candidate animations in the target animation database are the same, and may be determined using, for example, the target cost functions corresponding to the target states.
It should be noted that, since the currently playing animation is also one of the animation segments in the current animation database, in the process of determining the similarity threshold, the obtained second candidate animations in the current animation database do not include the currently playing animation.
In step 3343, the first candidate animation corresponding to the first highest similarity is determined as the target animation to be played.
Since step 3343 is performed on the premise that the first highest similarity is greater than the similarity threshold, when the similarity threshold is the second highest similarity determined from the current animation database, this indicates that the animation corresponding to the first highest similarity in the target animation database of the target state is more similar to the currently played animation than the animation corresponding to the second highest similarity in the current animation database, thereby ensuring a smooth and natural animation transition.
In step 3344, a second candidate animation corresponding to a second highest similarity is obtained from the current animation database.
In some embodiments, a target cost function corresponding to the target state is obtained, and the similarity between the currently played animation and each second candidate animation except for the currently played animation in the current animation database is determined by using the target cost function, so that a second candidate animation corresponding to the second highest similarity is determined.
In step 3345, the second candidate animation corresponding to the second highest similarity is determined as the target animation to be played.
In some embodiments, when the first highest similarity is less than or equal to the similarity threshold, it indicates that the most similar animation found in the animation database of the target state is less similar to the currently played animation than the most similar animation found in the current animation database. The second candidate animation corresponding to the second highest similarity in the current animation database is therefore determined as the target animation to be played, which prevents the animation switch from being too abrupt.
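The selection logic of steps 3341 to 3345 might then look like the following sketch: the best match from the target-state database (first highest similarity) is compared with the best match from the current-state database (second highest similarity, acting as the similarity threshold), and the latter is used as a fallback. A brute-force search stands in for the Ball Tree mentioned above; the type and function names, and the assumption of non-empty databases, are illustrative.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

struct CandidateAnimation {
    std::string name;
    std::vector<FeatureComponent> gesture;  // candidate gesture feature components
    std::vector<FeatureComponent> track;    // candidate track feature components
};

struct Query {
    std::vector<FeatureComponent> gesture;  // current gesture feature
    std::vector<FeatureComponent> track;    // current track feature
};

// Brute-force stand-in for the Ball Tree search; assumes a non-empty database.
// Returns the highest similarity and the index of the corresponding candidate.
std::pair<float, std::size_t> BestMatch(const Query& q,
                                        const std::vector<CandidateAnimation>& db,
                                        const CostFunctionWeights& w) {
    float best = -1e30f;
    std::size_t bestIdx = 0;
    for (std::size_t i = 0; i < db.size(); ++i) {
        const float s = AnimationSimilarity(q.gesture, db[i].gesture, q.track, db[i].track, w);
        if (s > best) { best = s; bestIdx = i; }
    }
    return {best, bestIdx};
}

// Steps 3341 to 3345: the first highest similarity (target database) is compared with
// the second highest similarity (current database), which acts as the similarity threshold.
const CandidateAnimation& SelectTargetAnimation(const Query& q,
                                                const std::vector<CandidateAnimation>& targetDb,
                                                const std::vector<CandidateAnimation>& currentDb,
                                                const CostFunctionWeights& w) {
    const auto [firstHighest, targetIdx]   = BestMatch(q, targetDb,  w);
    const auto [secondHighest, currentIdx] = BestMatch(q, currentDb, w);
    return (firstHighest > secondHighest) ? targetDb[targetIdx] : currentDb[currentIdx];
}
```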
In step 104, the target animation is output.
When the animation switching method provided by the embodiment of the application is implemented by the server, outputting the target animation means that the server sends the target animation to the client, and the client plays and presents the target animation comprising the virtual character in the virtual scene. When the animation switching method provided by the embodiment of the application is implemented by the terminal, the target animation is output by playing and displaying the target animation through the display device of the terminal.
In the animation switching method provided by the embodiment of the present application, during game play, the current state, the position information and the movement information of the virtual character are first obtained; when it is determined, based on the current state, the position information and the movement information, that the state switching opportunity is reached, the target state of the virtual character is determined, where the target state is different from the current state; then the target animation to be played is determined from the animation database by utilizing the motion matching node corresponding to the target state, and the target animation is output. In the embodiment of the present application, different states correspond to independent motion matching nodes, so the complexity of the motion animation system can be simplified and the motion animation system becomes lightweight and reusable; each independent motion matching node also has an independent animation database, thereby improving the controllability of animation switching; and each independent motion matching node has independently configured motion matching algorithm parameters, which may include track feature weights and gesture feature weights, so the independent motion matching node corresponding to each state can be fine-tuned, improving the fluency and naturalness of animation playing.
In some embodiments, as shown in fig. 5B, steps 201 to 205 shown in fig. 5B may also be performed after step 101, as described below in connection with fig. 5B.
In step 201, it is determined whether a state switching occasion is reached based on the current state, the position information and the movement information.
In some embodiments, at least one transferable state corresponding to the current state is first obtained, together with the transfer conditions corresponding to the respective transferable states; then, whether at least one transfer condition is satisfied is determined based on the position information and the movement information of the virtual character. If it is determined that at least one transfer condition is satisfied, it is determined that the state switching opportunity is reached, and step 102 is entered; if it is determined that no transfer condition is satisfied, it is determined that the state switching opportunity is not reached, and step 202 is entered.
In step 202, a similarity between the currently playing animation and each of the second candidate animations in the current animation database is determined.
In some embodiments, a current cost function corresponding to the current state is used when determining the similarity between the currently playing animation and each second candidate animation in the current animation database. In implementation, the motion matching node corresponding to the current state is first obtained, and the current cost function corresponding to that motion matching node is obtained. Based on the current cost function, the current gesture weight corresponding to each gesture feature component and the current track weight corresponding to each track feature component are obtained, and the candidate gesture features and candidate track features of each second candidate animation in the current animation database are obtained. The similarity corresponding to each gesture feature component in the current gesture feature and each candidate gesture feature is determined, and the similarity corresponding to each track feature component in the current track feature and each candidate track feature is determined. The gesture weights corresponding to the gesture feature components and the similarities corresponding to the gesture feature components are weighted and summed to obtain the gesture similarity between the current gesture feature and the candidate gesture feature; correspondingly, the track weights corresponding to the track feature components and the similarities corresponding to the track feature components are weighted and summed to obtain the track similarity between the current track feature and the candidate track feature. Finally, the sum of the gesture similarity and the corresponding track similarity is determined as the similarity between the currently playing animation and the second candidate animation.
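A hedged sketch of this weighted evaluation is shown below. It follows the structure of formula (2-1) introduced later in this description; the use of squared component differences as the per-component measure (so that a smaller value means more similar) and all type names are assumptions rather than the embodiment's exact definitions.

```cpp
// Illustrative sketch of the weighted gesture/track evaluation. A lower cost
// corresponds to a higher similarity between the currently playing animation and
// a candidate animation; the squared-difference component measure is an assumption.
#include <cstddef>
#include <vector>

struct Features {
    std::vector<float> pose;        // gesture feature components
    std::vector<float> trajectory;  // track feature components
};

struct MatchingWeights {
    std::vector<float> poseWeights;        // gesture weight per gesture feature component
    std::vector<float> trajectoryWeights;  // track weight per track feature component
};

// Weighted sum of per-component differences for one feature group.
static float WeightedGroupCost(const std::vector<float>& a,
                               const std::vector<float>& b,
                               const std::vector<float>& w) {
    float cost = 0.0f;
    for (std::size_t i = 0; i < a.size() && i < b.size() && i < w.size(); ++i) {
        const float d = a[i] - b[i];
        cost += w[i] * d * d;   // weighted per-component contribution
    }
    return cost;
}

// Cost between the current frame and one candidate frame: the gesture part plus
// the track part, each a weighted sum over its components.
float Cost(const Features& current, const Features& candidate,
           const MatchingWeights& weights) {
    return WeightedGroupCost(current.pose, candidate.pose, weights.poseWeights) +
           WeightedGroupCost(current.trajectory, candidate.trajectory,
                             weights.trajectoryWeights);
}
```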
In step 203, a third highest similarity is determined from the similarities between the respective second candidate animations in the current animation database.
In some embodiments, a preset sorting or search algorithm may be used over the similarities. For example, the similarities may be sorted in descending order by bubble sort to obtain the third highest similarity, or algorithms such as brute-force traversal, K-D Tree, Ball Tree or AABB Tree may be used to search the similarities for the third highest similarity.
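For illustration, a brute-force version of this selection is sketched below; it simply scans the precomputed similarities and keeps the maximum, which is the step the tree-based structures accelerate. The function name is an assumption.

```cpp
// Minimal brute-force selection sketch: given the similarity of the currently
// playing animation to every second candidate animation (higher means more
// similar), return the index of the most similar candidate. Assumes a non-empty
// list; K-D Tree / Ball Tree / AABB Tree acceleration is omitted for clarity.
#include <cstddef>
#include <vector>

std::size_t IndexOfHighestSimilarity(const std::vector<float>& similarities) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < similarities.size(); ++i) {
        if (similarities[i] > similarities[best]) {
            best = i;
        }
    }
    return best;   // the candidate whose similarity is highest in this database
}
```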
In step 204, the second candidate animation corresponding to the third highest similarity is determined as the target animation to be played.
In some embodiments, after step 204, step 104 is performed to play the target animation. Because the second candidate animation corresponding to the third highest similarity is the animation segment with the highest similarity with the currently played animation in the current animation database, the second candidate animation corresponding to the third highest similarity is determined to be the target animation to be played, and the smoothness and naturalness of the animation playing can be ensured.
In the embodiment of the application, each state in the animation state machine corresponds to one Motion Matching node. Each Motion Matching node uses Motion Matching as its core algorithm and can automatically compensate the motion track at runtime by using a mesh separation technique, so that particularly accurate motion track matching is not required during animation production; many animations need no modification at all, and good animation quality can be achieved by only modifying the phase and track of the stop animation, which reduces animation production cost while ensuring animation quality. Each state corresponds to its own animation database, obtained by dividing the full database corresponding to a traditional state machine, so that each motion matching node searches only in its own animation database, which improves animation switching efficiency.
In some embodiments, for a state that uses a Blend Space classifier in a conventional animation state machine, a plurality of reference animations with different preset speeds can be obtained, a pre-baking technique is used to fuse at least two reference animations with different preset speeds to obtain a plurality of fused animations, and the reference animations and the fused animations are added to the animation database. During fusion, at least two reference animations with different preset speeds can be fused according to different weights to obtain the fused animations, where the weight of each reference animation is a real number between 0 and 1 and the weights used in one fusion sum to 1.
Illustratively, assume that a reference animation with a speed of 1 m/s and a reference animation with a speed of 5 m/s are fused according to weights of 0.5 and 0.5 to obtain a fused animation; the speed of the virtual character in the fused animation is then 3 m/s.
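A hedged sketch of this weighted fusion is given below; it simplifies pose blending to a per-frame linear interpolation and assumes both clips have already been resampled to matching frame counts, which are assumptions rather than the embodiment's exact pre-baking procedure.

```cpp
// Illustrative pre-baking fusion: two reference animations with different preset
// speeds are blended with weights in [0, 1] that sum to 1. With weights 0.5/0.5,
// a 1 m/s clip and a 5 m/s clip yield a fused clip at 3 m/s, as in the example
// above. Linear per-frame pose interpolation is an assumption.
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct AnimationClip {
    float speedMetersPerSecond = 0.0f;
    std::vector<std::vector<float>> frames;   // one pose vector per frame
};

AnimationClip FuseClips(const AnimationClip& a, const AnimationClip& b,
                        float weightA, float weightB) {
    assert(weightA >= 0.0f && weightA <= 1.0f);
    assert(weightB >= 0.0f && weightB <= 1.0f);
    assert(std::fabs((weightA + weightB) - 1.0f) < 1e-5f);   // weights sum to 1

    AnimationClip fused;
    fused.speedMetersPerSecond =
        weightA * a.speedMetersPerSecond + weightB * b.speedMetersPerSecond;

    const std::size_t frameCount = std::min(a.frames.size(), b.frames.size());
    fused.frames.resize(frameCount);
    for (std::size_t f = 0; f < frameCount; ++f) {
        const std::size_t dims = std::min(a.frames[f].size(), b.frames[f].size());
        fused.frames[f].resize(dims);
        for (std::size_t d = 0; d < dims; ++d) {
            fused.frames[f][d] = weightA * a.frames[f][d] + weightB * b.frames[f][d];
        }
    }
    return fused;   // to be added to the animation database alongside the references
}
```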
In addition, in the embodiment of the application, an animation importing tool is used to preprocess the produced animations. During preprocessing, the candidate animations in the animation database corresponding to each state are obtained, and the gesture features and track features of those candidate animations are extracted. Compared with a traditional state machine, which needs to extract information such as leg phase curves, distance curves and rotation curves of the virtual character during preprocessing, this greatly reduces the complexity and amount of data processing and therefore improves animation preprocessing efficiency.
In the embodiment of the present application, because each state corresponds to an independent motion matching node, the gesture weight and the track weight in the cost function corresponding to each motion matching node can be set independently according to different state requirements. In some embodiments, a preset cost function is first obtained, where a gesture weight corresponding to each gesture feature component and a track weight corresponding to each track feature component in the preset cost function are preset values, for example, the preset value may be 1, may be 0.5, and so on. And then, based on configuration requirement information of each state, respectively setting gesture weights corresponding to gesture feature components and track weights corresponding to track feature components in the cost function to obtain the cost function corresponding to each state.
In some embodiments, if the configuration requirement information of a state requires that the gestures between animations remain consistent and natural, then when setting the parameters of the cost function the gesture weight should be higher than the track weight, for example a gesture weight of 5 and a track weight of 1; if the configuration requirement information of a state requires that the motion track be precisely matched, then when setting the parameters of the cost function the gesture weight should be smaller than the track weight.
In some embodiments, the parameters of the cost function may also be set with reference to the animation classifier adopted by each state in a traditional animation state machine. If the start state uses a distance classifier, then when setting the cost function corresponding to the start state the track weight should be greater than the gesture weight; if the stop state uses both a gesture classifier and a distance classifier, then when setting the cost function corresponding to the stop state the track weight and the gesture weight should be comparable in size; if the move state uses a gesture classifier, then when setting the cost function corresponding to the move state the gesture weight should be greater than the track weight.
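Purely as an illustration of how such per-state parameters might be organized, a small configuration sketch follows; the numeric weights merely mirror the guidance above (for example, gesture weight 5 versus track weight 1 when pose continuity matters) and are assumptions, not tuned values from the embodiment.

```cpp
// Illustrative per-state cost function parameters: each state's motion matching
// node owns an independently configured gesture weight and track weight.
#include <map>
#include <string>

struct CostFunctionParams {
    float gestureWeight;   // weight applied to the gesture (pose) similarity
    float trackWeight;     // weight applied to the track (trajectory) similarity
};

std::map<std::string, CostFunctionParams> BuildPerStateParams() {
    std::map<std::string, CostFunctionParams> params;
    params["Start"] = {1.0f, 5.0f};   // distance-classifier-like: track weight > gesture weight
    params["Stop"]  = {1.0f, 1.0f};   // pose + distance classifiers: comparable weights
    params["Move"]  = {5.0f, 1.0f};   // pose-classifier-like: gesture weight > track weight
    return params;
}
```

Because each motion matching node reads only its own entry, adjusting one state's weights does not affect the search behaviour of any other state.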
In the following, an exemplary application of the embodiments of the present application in a practical application scenario will be described.
The animation switching method provided by the embodiment of the application can be applied to a game engine to implement a motion animation system. Taking the Unreal Engine as an example, a basic state switching logic and a corresponding state machine system are built in the animation blueprint of the game engine by using the animation switching method provided by the embodiment of the application. When the motion animation system runs, the state is switched first, and then Motion Matching is used to match a suitable motion animation.
In order to better understand the animation switching method provided in the embodiment of the present application, the animation system of a conventional animation state machine in the related art is described first. Fig. 6 is a schematic diagram of the animation system structure of a conventional animation state machine in the related art. As shown in fig. 6, the state machine includes the following six states: a turn-in-place state (Idle Turn) 601, an idle state (Idle) 602, a start state (Start) 603, a turn-back state (Pivot) 604, a motion state (Move) 605 and a stop state (Stop) 606. As shown in fig. 6, the states have transition relationships with each other, and each state needs to select one or more animation classifiers according to its own characteristics to select and play animations. The Idle Turn state uses an orientation matching (Orientation Matching) animation classifier, the Idle state uses a fixed animation, the Start state uses a distance matching (Distance Matching) animation classifier, the Pivot state also uses a distance matching animation classifier, the Move state uses a pose matching (Pose Matching) animation classifier, and the Stop state uses both a pose matching animation classifier and a distance matching animation classifier.
Fig. 7 is a schematic structural diagram of an animation system provided in an embodiment of the present application. As shown in fig. 7, this animation system also includes six states, namely a turn-in-place state 701, an idle state 702, a start state 703, a turn-back state 704, a motion state 705 and a stop state 706. Unlike the animation system shown in fig. 6, in the animation system provided in the embodiment of the present application each animation classifier in the state machine is replaced with a Motion Matching algorithm to select and play animations. In practice, it is also possible to replace only part of the animation classifiers of the state machine to achieve greater control.
Because each state machine has an independent motion matching node, the algorithm parameters of the motion matching node in each state machine can be configured independently, so that the parameters can be tuned specifically for each state machine to refine the algorithm; and because each motion matching node has an independent animation database, the control over animation switching is stronger.
In the animation system provided by the embodiment of the application, parameters of the motion matching node in each state machine may include a gesture weight and a track weight, and the gesture weight and the track weight of each state machine are independently configured. And in the embodiment of the present application, a cost function shown in formula (2-1) is defined as the similarity evaluation function:
Cost = W_Pose × Similarity(Pose_1, Pose_2) + W_Trajectory × Similarity(Trajectory_1, Trajectory_2)    (2-1)

where W_Pose is the gesture weight, W_Trajectory is the track weight, Similarity(Pose_1, Pose_2) is the similarity between gesture features Pose_1 and Pose_2, and Similarity(Trajectory_1, Trajectory_2) is the similarity between track features Trajectory_1 and Trajectory_2.
Based on the cost function shown in formula (2-1), the animation frame that is most similar to the current gesture feature (Pose) and motion track feature (Trajectory) is determined from the database corresponding to the motion matching node. In the cost function, the weight of the pose similarity is determined by W_Pose and the weight of the track similarity is determined by W_Trajectory. Increasing W_Pose makes the searched animation closer to the original gesture and more coherent and natural, so it can replace the role of a pose matching animation classifier; increasing W_Trajectory makes the searched animation match the motion distance exactly, so it can replace a distance matching animation classifier. Adding rotation features and independent weights to the motion track features can replace an orientation matching animation classifier. Because W_Pose and W_Trajectory take effect at the same time, the cost function evaluates the similarity of the gesture and the track simultaneously, so its animation classification capability is finer than that of a traditional animation classification node: for example, it avoids the jumping of the animation gesture that occurs in distance matching, where distance is the only matching factor considered.
For Blend Space, a pre-baking technique can be used to extract the animation at each speed into a separate animation segment in advance and use these segments as the candidate animation set of the motion matching node; that is, the Blend Space animation classifier can also be replaced by a motion matching node.
Fig. 8 is a schematic diagram of an implementation flow of animation production and animation blueprints according to an embodiment of the present application, as shown in fig. 8, where the flow includes:
In step 801, an animation is modified using an animation production tool.
When correcting an animation, the fixed pose and the motion trajectory may need to be modified. In some embodiments, only the fixed pose needs to be modified. If a smoother stop animation is desired at the time of switching, stop animations of different phases need to be produced additionally. When the trajectory is modified, exact matching is not required; it only needs to stay within a preset tolerance.
In step 802, an animation is preprocessed using an animation import tool.
In some embodiments, the gesture features and track features of the animation need to be extracted during preprocessing. The gesture features include: the current position of the left foot, the current position of the right foot, the current velocity of the left foot, the current velocity of the right foot, and the current velocity of the hips of the virtual character. The track features include: the position and orientation of the virtual character at the current moment, the positions and orientations of the virtual character at three sampling time points in the past one second, and the positions and orientations of the virtual character at three sampling time points in the following one second.
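A sketch of how these extracted features might be laid out is shown below; the field names and the yaw-only orientation are illustrative assumptions, not the exact data layout of the embodiment.

```cpp
// Illustrative layout of the features extracted during preprocessing, following
// the list above: gesture features for the feet and hips, plus track samples at
// the current moment, three points in the past second and three in the next second.
#include <array>

struct Vec3 { float x, y, z; };

struct GestureFeature {
    Vec3 leftFootPosition;
    Vec3 rightFootPosition;
    Vec3 leftFootVelocity;
    Vec3 rightFootVelocity;
    Vec3 hipVelocity;
};

struct TrajectorySample {
    Vec3  position;
    float orientationYaw;   // facing of the virtual character at this sample (assumed yaw-only)
};

struct TrajectoryFeature {
    TrajectorySample current;
    std::array<TrajectorySample, 3> pastSecond;    // three samples over the past second
    std::array<TrajectorySample, 3> futureSecond;  // three samples over the next second
};

struct AnimationFrameFeatures {
    GestureFeature    gesture;
    TrajectoryFeature trajectory;
};
```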
In step 803, an animation is selected and played through the state switching logic.
The state switching logic is used as an upper control system and can determine whether to transition from an old state to a new state or keep the old state unchanged according to the current state, speed, acceleration, orientation and other information of the character. The state switching logic is the basis on which the animation system can transition between states.
For the lower-layer state animation system, if the state is transferred, Motion Matching must re-select an animation to play from the candidate animation list of the new state according to the matching algorithm parameters corresponding to the new state. If the state is not transferred, an animation is re-selected from the candidate animation list of the current state at fixed time intervals (usually between 0.1 s and 0.5 s) according to the character's current gesture and motion track.
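A hedged runtime sketch of this behaviour follows; the node interface, the tick structure and the 0.2 s default are assumptions used only to make the control flow concrete.

```cpp
// Illustrative runtime loop: on a state transfer the motion matching node of the
// new state immediately re-selects an animation; otherwise the current node
// re-selects at a fixed interval (typically 0.1 s to 0.5 s).
#include <functional>

struct MotionMatchingNode {
    // Selects the best candidate from this node's own animation database using
    // this node's own matching algorithm parameters; returns an animation id.
    std::function<int()> selectAnimation;
};

struct MotionAnimationRuntime {
    MotionMatchingNode* currentNode = nullptr;
    float reselectInterval = 0.2f;    // fixed interval, assumed within 0.1 s .. 0.5 s
    float timeSinceSelection = 0.0f;
    int   playingAnimation = -1;

    // nodeAfterStateLogic is the node chosen by the upper-layer state switching logic.
    void Tick(float dt, MotionMatchingNode* nodeAfterStateLogic) {
        timeSinceSelection += dt;
        const bool stateTransferred = (nodeAfterStateLogic != currentNode);
        if (stateTransferred) {
            currentNode = nodeAfterStateLogic;
            playingAnimation = currentNode->selectAnimation();   // re-select in the new state
            timeSinceSelection = 0.0f;
        } else if (currentNode != nullptr && timeSinceSelection >= reselectInterval) {
            playingAnimation = currentNode->selectAnimation();   // periodic re-selection
            timeSinceSelection = 0.0f;
        }
    }
};
```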
In the embodiment of the present application, the animation database corresponding to each state is obtained by dividing a full animation database, and each motion matching node searches only part of the animation data. In some embodiments, the motion matching node may instead keep searching the full animation database, but weight the similarities of the animation data of different states and switch states only when the weighted similarity of another state's animation is higher, so that the switch is not too abrupt.
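As a sketch of this alternative, the per-state weighting might look like the following; indexing the weight table by state id and leaving the current state unpenalized are assumptions about one possible scheme, not the embodiment's prescribed one.

```cpp
// Illustrative full-database search with per-state weighting: candidates from
// other states have their similarity scaled down, so a state switch happens only
// when another state's animation is clearly more similar.
#include <cstddef>
#include <vector>

struct FullDatabaseCandidate {
    int   stateId;      // state whose sub-database this candidate belongs to
    float similarity;   // unweighted similarity to the currently playing animation
};

// stateWeights is assumed to be indexed by stateId, with values in (0, 1] for
// states other than the current one.
std::size_t PickWeightedCandidate(const std::vector<FullDatabaseCandidate>& candidates,
                                  const std::vector<float>& stateWeights,
                                  int currentStateId) {
    std::size_t best = 0;
    float bestScore = -1.0f;
    for (std::size_t i = 0; i < candidates.size(); ++i) {
        float score = candidates[i].similarity;
        if (candidates[i].stateId != currentStateId) {
            score *= stateWeights[candidates[i].stateId];   // penalize cross-state switches
        }
        if (score > bestScore) {
            bestScore = score;
            best = i;
        }
    }
    return best;   // assumes a non-empty candidate list
}
```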
In the embodiment of the application, a motion state machine prototype system implemented based on Motion Matching was tested in three states: walking, running and gun holding. The evaluation shows that, because only state-machine-scale animations are selected, the number of animations is reduced from 614 to 444 (a 27% reduction) and the total animation length is reduced from 37 minutes to 16 minutes (a 56% reduction) compared with conventional Motion Matching resources. Compared with a conventional state machine there is less logic: without any reduction in animation quality, the number of blueprint logic nodes is reduced from 371 to 67 (an 82% reduction), so the logic is simplified. In addition, because the amount of resources is reduced, the performance is higher than that of conventional Motion Matching: the time cost of walking is reduced by 86%, running by 45% and gun holding by 15%. The quality is also higher than that of a conventional state machine, because the degradation of animation quality caused by Distance Matching changing the animation rhythm is avoided, so animation quality is effectively improved.
In addition to these runtime benefits inside the engine, the data processing method provided by the embodiment of the application also simplifies the animation modification workflow on the DCC side by adopting the motion matching technique: good animation quality can be achieved by only modifying the phase and track of the stop animation, which greatly reduces the labor cost of animators.
Continuing with the description below of an exemplary architecture of animation switching apparatus 455 provided in embodiments of the present application implemented as a software module, in some embodiments, as shown in fig. 2, the software module stored in animation switching apparatus 455 of memory 450 may include:
a first obtaining module 4551 configured to obtain a current state, location information, and movement information of the virtual character;
a first determining module 4552 configured to determine a target state of the virtual character when it is determined that a state switching occasion is reached based on the current state, the position information, and the movement information;
a second determining module 4553, configured to determine a target animation to be played from the motion database by using the motion matching node corresponding to the target state;
and the playing module 4554 is used for outputting the target animation.
In some embodiments, the second determining module 4553 is further configured to:
acquiring a motion matching node corresponding to the target state and a target animation database corresponding to the target state;
acquiring a target cost function corresponding to the motion matching node, and a current gesture characteristic and a current track characteristic of a current playing animation;
and determining the target animation to be played from the target animation database based on the target cost function, the current gesture feature and the current track feature.
In some embodiments, the gesture feature includes at least one gesture feature component, and the second determining module 4553 is further configured to:
acquiring gesture weights corresponding to the gesture feature components and track weights corresponding to the track feature components based on the target cost function;
acquiring candidate gesture features and candidate track features of each first candidate animation in the target animation database;
determining the similarity between the current playing animation and each first candidate animation based on the gesture weight corresponding to each gesture feature component, the track weight corresponding to each track feature component, the current gesture feature, the current track feature, and the candidate gesture feature and the candidate track feature of each first candidate animation;
and determining a target animation to be played from the first candidate animations based on the similarity between the currently played animation and the first candidate animations.
In some embodiments, the second determining module 4553 is further configured to:
determining the similarity corresponding to each gesture feature component in the current gesture feature and each candidate gesture feature;
determining the similarity corresponding to each track feature component in the current track feature and each candidate track feature;
carrying out weighted summation on the gesture weights corresponding to the gesture feature components and the similarity corresponding to the gesture feature components to obtain the gesture similarity between the current gesture feature and each candidate gesture feature;
carrying out weighted summation on the track weights corresponding to the track feature components and the similarity corresponding to the track feature components to obtain the track similarity between the current track feature and each candidate track feature;
and determining the sum of the similarity of each gesture and the similarity of each corresponding track as the similarity between the currently played animation and each first candidate animation.
In some embodiments, the second determining module 4553 is further configured to:
determining a first highest similarity from the similarities between the currently playing animation and the respective first candidate animations;
and when the first highest similarity is larger than a similarity threshold, determining the first candidate animation corresponding to the first highest similarity as the target animation to be played.
In some embodiments, the apparatus further comprises:
the second acquisition module is used for acquiring each second candidate animation in the current animation database corresponding to the current state;
a third determining module, configured to determine a similarity between the currently playing animation and each second candidate animation in the current animation database;
a fourth determining module, configured to determine a second highest similarity from the similarities between the currently playing animation and the respective second candidate animations;
and a fifth determining module, configured to determine the second highest similarity as a similarity threshold.
In some embodiments, the apparatus further comprises:
the third acquisition module is used for acquiring a second candidate animation corresponding to the second highest similarity from the current animation database when the first highest similarity is smaller than or equal to the similarity threshold;
and a sixth determining module, configured to determine a second candidate animation corresponding to the second highest similarity as a target animation to be played.
In some embodiments, the apparatus further comprises:
a seventh determining module configured to determine a similarity between the currently played animation and each of the second candidate animations in the current animation database when it is determined that the state switching opportunity is not reached based on the current state, the position information and the movement information;
an eighth determining module, configured to determine a third highest similarity from the similarities between the second candidate animations in the current animation database;
and a ninth determining module, configured to determine the second candidate animation corresponding to the third highest similarity as a target animation to be played.
In some embodiments, the apparatus further comprises:
the fourth acquisition module is used for acquiring a plurality of reference animations with different preset speeds;
the fusion module is used for carrying out fusion processing on at least two reference animations with different preset speeds to obtain a plurality of fused animations;
and the data adding module is used for adding the reference animation and the fused animation to an animation database.
In some embodiments, the apparatus further comprises:
a fifth acquisition module, configured to acquire candidate animations in an animation database corresponding to each state;
and the feature extraction module is used for extracting the gesture features and the track features of the candidate animations in the animation database corresponding to each state.
In some embodiments, the apparatus further comprises:
a sixth obtaining module, configured to obtain a preset cost function, where a gesture weight corresponding to each gesture feature component and a track weight corresponding to each track feature component in the preset cost function are preset values;
The weight setting module is used for respectively setting the gesture weights corresponding to the gesture feature components and the track weights corresponding to the track feature components in the cost function based on the configuration requirement information of each state to obtain the cost function corresponding to each state.
It should be noted here that the description of the above animation switching device embodiment is similar to the description of the method embodiments above and has the same beneficial effects as the method embodiments. For technical details not disclosed in the animation switching device embodiments of the present application, reference may be made by those skilled in the art to the description of the method embodiments of the present application.
Embodiments of the present application provide a computer program product comprising a computer program or computer-executable instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer-executable instructions from the computer-readable storage medium, and the processor executes the computer-executable instructions, so that the computer device executes the animation switching method according to the embodiment of the present application.
The present embodiments provide a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, cause the processor to perform the animation switching method provided by the embodiments of the present application, for example, the animation switching method as shown in fig. 3A, 5B.
In some embodiments, the computer-readable storage medium may be an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM, or may be any of various devices including one of the above memories or any combination thereof.
In some embodiments, computer-executable instructions may be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, in the form of programs, software modules, scripts, or code, and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, computer-executable instructions may, but need not, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example in one or more scripts in a Hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
As an example, executable instructions may be deployed to be executed on one computer device or on multiple computer devices located at one site or, alternatively, distributed across multiple sites and interconnected by a communication network.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and scope of the present application are intended to be included within the scope of the present application.

Claims (13)

1. An animation switching method, characterized in that the method comprises:
acquiring the current state, position information and movement information of the virtual character;
determining a target state of the virtual character when it is determined that a state switching opportunity is reached based on the current state, the position information and the movement information; wherein, the different states correspond to independent motion matching nodes, each independent motion matching node is provided with an independently configured cost function, and parameters of the cost function comprise gesture weights corresponding to gesture feature components and track weights corresponding to track feature components;
determining a target animation to be played from an animation database by utilizing a motion matching node corresponding to the target state;
outputting the target animation;
the determining the target animation to be played from the motion database by using the motion matching node corresponding to the target state comprises the following steps:
acquiring a motion matching node corresponding to the target state and a target animation database corresponding to the target state; acquiring a target cost function corresponding to the motion matching node, and a current gesture characteristic and a current track characteristic of a current playing animation;
and determining the target animation to be played from the target animation database based on the gesture weights corresponding to the gesture feature components, the track weights corresponding to the track feature components, the current gesture features, the current track features and the candidate gesture features and the candidate track features of each first candidate animation in the target animation database.
2. The method according to claim 1, wherein the gesture feature includes at least one gesture feature component, the track feature includes at least one track feature component, and the determining, from the target animation database, the target animation to be played based on the gesture weight corresponding to each gesture feature component included in the target cost function parameter, the track weight corresponding to each track feature component, the current gesture feature, the current track feature, and the candidate gesture feature and the candidate track feature of each first candidate animation in the target animation database includes:
acquiring gesture weights corresponding to the gesture feature components and track weights corresponding to the track feature components based on the target cost function;
acquiring candidate gesture features and candidate track features of each first candidate animation in the target animation database;
determining the similarity between the current playing animation and each first candidate animation based on the gesture weight corresponding to each gesture feature component, the track weight corresponding to each track feature component, the current gesture feature, the current track feature, and the candidate gesture feature and the candidate track feature of each first candidate animation;
and determining a target animation to be played from the first candidate animations based on the similarity between the currently played animation and the first candidate animations.
3. The method of claim 2, wherein the determining the similarity between the currently playing animation and the respective first candidate animations based on the pose weights for the respective pose feature components, the trajectory weights for the respective trajectory feature components, the current pose feature, the current trajectory feature, and the candidate pose features and candidate trajectory features of the respective first candidate animations comprises:
determining the similarity corresponding to each gesture feature component in the current gesture feature and each candidate gesture feature;
determining the similarity corresponding to each track feature component in the current track feature and each candidate track feature;
carrying out weighted summation on the gesture weights corresponding to the gesture feature components and the similarity corresponding to the gesture feature components to obtain the gesture similarity between the current gesture feature and each candidate gesture feature;
carrying out weighted summation on the track weights corresponding to the track feature components and the similarity corresponding to the track feature components to obtain the track similarity between the current track feature and each candidate track feature;
and determining the sum of the similarity of each gesture and the similarity of each corresponding track as the similarity between the currently played animation and each first candidate animation.
4. The method according to claim 2, wherein the determining a target animation to be played from the respective first candidate animations based on the similarity between the currently played animation and the respective first candidate animations comprises:
determining a first highest similarity from the similarities between the currently playing animation and the respective first candidate animations;
and when the first highest similarity is larger than a similarity threshold, determining the first candidate animation corresponding to the first highest similarity as the target animation to be played.
5. The method as recited in claim 4, wherein the method further comprises:
acquiring each second candidate animation in a current animation database corresponding to the current state;
determining the similarity between the currently played animation and each second candidate animation in the current animation database;
determining a second highest similarity from the similarities between the currently playing animation and the respective second candidate animations;
and determining the second highest similarity as a similarity threshold.
6. The method as recited in claim 4, wherein the method further comprises:
when the first highest similarity is smaller than or equal to the similarity threshold, acquiring a second candidate animation corresponding to a second highest similarity from a current animation database corresponding to the current state;
and determining the second candidate animation corresponding to the second highest similarity as the target animation to be played.
7. The method according to any one of claims 1 to 6, further comprising:
when the state switching time is not reached based on the current state, the position information and the movement information, determining the similarity between the current playing animation and each second candidate animation in the current animation database;
determining a third highest similarity from the similarities between the respective second candidate animations in the current animation database;
and determining the second candidate animation corresponding to the third highest similarity as the target animation to be played.
8. The method according to any one of claims 1 to 6, further comprising:
acquiring a plurality of reference animations with different preset speeds;
fusing at least two reference animations with different preset speeds to obtain a plurality of fused animations;
and adding the reference animation and the fused animation to an animation database.
9. The method according to any one of claims 1 to 6, further comprising:
acquiring candidate animations in an animation database corresponding to each state;
and extracting the gesture features and the track features of the candidate animation in the animation database corresponding to each state.
10. The method according to any one of claims 1 to 6, further comprising:
acquiring a preset cost function, wherein the gesture weights corresponding to all gesture feature components and the track weights corresponding to all track feature components in the preset cost function are preset values;
and respectively setting gesture weights corresponding to gesture feature components and track weights corresponding to track feature components in the cost function based on configuration requirement information of each state to obtain the cost function corresponding to each state.
11. An animation switching apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring the current state, the position information and the movement information of the virtual character;
a first determining module for determining a target state of the virtual character when it is determined that a state switching occasion is reached based on the current state, the position information and the movement information; wherein, the different states correspond to independent motion matching nodes, each independent motion matching node is provided with an independently configured cost function, and parameters of the cost function comprise gesture weights corresponding to gesture feature components and track weights corresponding to track feature components;
the second determining module is used for determining a target animation to be played from the animation database by utilizing the motion matching node corresponding to the target state;
the playing module is used for outputting the target animation;
the second determining module is further used for obtaining a motion matching node corresponding to the target state and a target animation database corresponding to the target state; acquiring a target cost function corresponding to the motion matching node, and a current gesture characteristic and a current track characteristic of a current playing animation; and determining the target animation to be played from the target animation database based on the gesture weights corresponding to the gesture feature components, the track weights corresponding to the track feature components, the current gesture features, the current track features and the candidate gesture features and the candidate track features of each first candidate animation in the target animation database.
12. A computer device, the computer device comprising:
a memory for storing computer executable instructions;
a processor for implementing the method of any one of claims 1 to 10 when executing computer-executable instructions stored in the memory.
13. A computer readable storage medium storing computer executable instructions which when executed by a processor implement the method of any one of claims 1 to 10.
CN202310086668.1A 2023-02-09 2023-02-09 Animation switching method, device, equipment and computer readable storage medium Active CN115779436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310086668.1A CN115779436B (en) 2023-02-09 2023-02-09 Animation switching method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310086668.1A CN115779436B (en) 2023-02-09 2023-02-09 Animation switching method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN115779436A CN115779436A (en) 2023-03-14
CN115779436B true CN115779436B (en) 2023-05-05

Family

ID=85430658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310086668.1A Active CN115779436B (en) 2023-02-09 2023-02-09 Animation switching method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115779436B (en)

Also Published As

Publication number Publication date
CN115779436A (en) 2023-03-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40083146

Country of ref document: HK