CN113379590B - Animation data processing method, device, computer equipment and storage medium

Animation data processing method, device, computer equipment and storage medium

Info

Publication number
CN113379590B
CN113379590B (application CN202110631858.8A)
Authority
CN
China
Prior art keywords
target
animation
action
node
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110631858.8A
Other languages
Chinese (zh)
Other versions
CN113379590A (en)
Inventor
陈广宇
杨双才
肖瑞焜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shanghai Co Ltd
Original Assignee
Tencent Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shanghai Co Ltd
Priority to CN202110631858.8A
Publication of CN113379590A
Application granted
Publication of CN113379590B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/60: Memory management
    • G06T13/00: Animation

Abstract

The application relates to an animation data processing method and device, computer equipment, and a storage medium. The method comprises the following steps: acquiring target interaction information carrying an animation object identifier corresponding to a target animation object; obtaining target animation configuration information corresponding to the animation object identifier; determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information; obtaining target action data corresponding to the target action and target node information of a target animation replacement node corresponding to the target action type identifier; and sending the target action data and the target node information to a second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, obtains target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain a target action animation corresponding to the target animation object. The method can reduce resource consumption during animation processing.

Description

Animation data processing method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and apparatus for processing animation data, a computer device, and a storage medium.
Background
With the development of computer technology, information can be displayed not only in static forms, such as text, but also in dynamic forms, such as animation.
In the conventional technology, if an animation of one action needs to be generated, a corresponding animation node needs to be created on an animation state machine in an animation engine, and different actions require different animation nodes on the animation state machine. The number of animation nodes on an animation state machine therefore grows linearly as animation requirements increase, and the growing number of nodes occupies more and more memory, resulting in a large amount of resource consumption.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an animation data processing method, apparatus, computer device, and storage medium capable of reducing resource consumption.
A method of processing animation data, the method comprising:
acquiring target interaction information, wherein the target interaction information carries an animation object identifier corresponding to a target animation object;
obtaining target animation configuration information corresponding to the animation object identifier, wherein the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object;
determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, wherein the target action has a corresponding target action type identifier;
obtaining target action data corresponding to the target action and target node information of a target animation replacement node corresponding to the target action type identifier, and sending the target action data and the target node information to a second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, obtains target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain a target action animation corresponding to the target animation object.
In one embodiment, the node information includes a node path, the animation playing information includes action triggering information, and determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information includes:
acquiring node information of animation replacement nodes corresponding to all the actions respectively;
determining an upper animation node corresponding to the animation replacement node based on the node path, and acquiring the node state of the upper animation node corresponding to the animation replacement node;
taking, as a candidate action, the action corresponding to an animation replacement node whose upper animation node is in the activated state;
and performing trigger action detection on the target animation object based on the target interaction information and the animation playing information corresponding to each candidate action, to obtain the target action corresponding to the target animation object.
In one embodiment, the animation playing information includes action triggering information, and triggering action detection is performed on the target animation object based on the target interaction information and the animation playing information corresponding to each candidate action, so as to obtain a target action corresponding to the target animation object, including:
matching the target interaction information with the action triggering information corresponding to each candidate action, and taking the candidate action corresponding to the successfully matched action triggering information as the target action.
In one embodiment, the animation playing information includes a storage path of motion data corresponding to a motion, and the obtaining the target motion data corresponding to the target motion includes:
obtaining target animation playing information corresponding to a target action;
and acquiring target action data based on a target storage path in the target animation playing information.
In one embodiment, the animation playing information includes action interrupt information, the target node information includes a target node path and target node interrupt information corresponding to the target animation replacement node, and the method further includes:
In the playing process of the target action animation, when the interrupt interaction information matched with the action interrupt information corresponding to the target action is obtained, the target node path and the target node interrupt information are sent to a second animation state machine, so that the second animation state machine sends the target node interrupt information to a target animation replacement node based on the target node path, and the target node interrupt information is used for interrupting the playing of the target action animation.
An animation data processing device, the device comprising:
the interactive information acquisition module is used for acquiring target interactive information which carries an animation object identifier corresponding to a target animation object;
the configuration information acquisition module is used for acquiring target animation configuration information corresponding to the animation object identifier, wherein the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object;
the target action determining module is used for determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, wherein the target action has a corresponding target action type identifier;
the action animation generation module is used for acquiring target action data corresponding to the target action and target node information of a target animation replacement node corresponding to the target action type identifier, sending the target action data and the target node information to the second animation state machine, enabling the second animation state machine to activate the target animation replacement node based on the target node information, acquiring target object data corresponding to the target animation object, and loading the target action data and the target object data through the target animation replacement node to obtain target action animation corresponding to the target animation object.
A computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring target interaction information, wherein the target interaction information carries an animation object identifier corresponding to a target animation object;
obtaining target animation configuration information corresponding to the animation object identifier, wherein the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object;
determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, wherein the target action has a corresponding target action type identifier;
obtaining target action data corresponding to the target action and target node information of a target animation replacement node corresponding to the target action type identifier, and sending the target action data and the target node information to a second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, obtains target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain a target action animation corresponding to the target animation object.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring target interaction information, wherein the target interaction information carries an animation object identifier corresponding to a target animation object;
obtaining target animation configuration information corresponding to the animation object identifier, wherein the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object;
determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, wherein the target action has a corresponding target action type identifier;
obtaining target action data corresponding to the target action and target node information of a target animation replacement node corresponding to the target action type identifier, and sending the target action data and the target node information to a second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, obtains target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain a target action animation corresponding to the target animation object.
A method of processing animation data, the method comprising:
receiving target action data and target node information sent by a first animation state machine; the target action data is action data corresponding to a target action of a target animation object, the target action is determined by the first animation state machine based on target interaction information carrying an animation object identifier corresponding to the target animation object and target animation configuration information corresponding to the animation object identifier, the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object, the target node information is node information of a target animation replacement node, and the target animation replacement node is determined based on a target action type identifier corresponding to the target action;
activating a target animation replacement node based on the target node information;
acquiring target object data corresponding to a target animation object;
and loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
In one embodiment, activating a target animation replacement node based on target node information includes:
acquiring a forward node list corresponding to a target animation replacement node;
when a forward animation node in the activated state exists in the forward node list, activating the target animation replacement node based on the target node information.
In one embodiment, loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object includes:
loading the target action data and the target object data through the target animation replacement node to obtain a trigger action animation corresponding to the target action of the target animation object;
acquiring a forward motion animation corresponding to a forward animation node in an activated state;
acquiring second transition time information;
and transitioning from the forward motion animation to the trigger motion animation based on the second transition time information to obtain the target motion animation.
In one embodiment, the target node information includes target node trigger information, and acquiring the second transition time information includes:
acquiring target transition time information corresponding to the target node triggering information;
and taking the target transition time information as second transition time information.
An animation data processing device, the device comprising:
the information receiving module is used for receiving the target action data and the target node information sent by the first animation state machine; the target action data is action data corresponding to a target action of a target animation object, the target action is determined by the first animation state machine based on target interaction information carrying an animation object identifier corresponding to the target animation object and target animation configuration information corresponding to the animation object identifier, the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object, the target node information is node information of a target animation replacement node, and the target animation replacement node is determined based on a target action type identifier corresponding to the target action;
the node activating module is used for activating the target animation replacement node based on the target node information;
the information acquisition module is used for acquiring target object data corresponding to the target animation object;
and the animation generation module is used for loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
A computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
receiving target action data and target node information sent by a first animation state machine; the target action data is action data corresponding to a target action of a target animation object, the target action is determined by the first animation state machine based on target interaction information carrying an animation object identifier corresponding to the target animation object and target animation configuration information corresponding to the animation object identifier, the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object, the target node information is node information of a target animation replacement node, and the target animation replacement node is determined based on a target action type identifier corresponding to the target action;
activating a target animation replacement node based on the target node information;
acquiring target object data corresponding to a target animation object;
and loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
receiving target action data and target node information sent by a first animation state machine; the target action data is action data corresponding to a target action of a target animation object, the target action is determined by the first animation state machine based on target interaction information carrying an animation object identifier corresponding to the target animation object and target animation configuration information corresponding to the animation object identifier, the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object, the target node information is node information of a target animation replacement node, and the target animation replacement node is determined based on a target action type identifier corresponding to the target action;
activating a target animation replacement node based on the target node information;
acquiring target object data corresponding to a target animation object;
and loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
According to the animation data processing method and device, the computer equipment and the storage medium described above, target interaction information is obtained, the target interaction information carrying an animation object identifier corresponding to a target animation object, and target animation configuration information corresponding to the animation object identifier is obtained, the target animation configuration information being used for configuring animation playing information of at least one action corresponding to the target animation object. A target action corresponding to the target animation object is determined based on the target interaction information and the target animation configuration information, the target action having a corresponding target action type identifier. Target action data corresponding to the target action and target node information of a target animation replacement node corresponding to the target action type identifier are obtained and sent to the second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, obtains target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object. In this way, a correspondence exists between an animation replacement node in the second animation state machine and the action type identifier of an action, so that different actions corresponding to the same action type identifier can share the same animation replacement node, thereby effectively controlling the number of nodes in the second animation state machine and avoiding expansion of the number of nodes in the second animation state machine. After the first animation state machine determines the triggered target action of the target animation object, the target action data corresponding to the target action is sent to the corresponding target animation replacement node on the second animation state machine to generate the animation. Through the cooperation of the first animation state machine and the second animation state machine, the number of nodes in the second animation state machine can be reduced when it is used to generate the animation, so that the aim of reducing resource consumption is achieved.
Drawings
FIG. 1 is an application environment diagram of an animation data processing method in one embodiment;
FIG. 2 is a flow diagram of an animation data processing method according to an embodiment;
FIG. 3 is a schematic illustration of a kicking action in one embodiment;
FIG. 4 is a flow chart of determining a target action according to another embodiment;
FIG. 5 is a flowchart of an animation data processing method according to another embodiment;
FIG. 6A is a schematic diagram of forward node list and animation replacement nodes in one embodiment;
FIG. 6B is an interface diagram of an animation node presentation interface, in one embodiment;
FIG. 6C is an interface diagram of an animation node display interface, according to another embodiment;
FIG. 7 is a schematic diagram of a target animation replacement node and a backward animation node in one embodiment;
FIG. 8A is a schematic diagram of an interface for configuring animation configuration information, in one embodiment;
FIG. 8B is a flowchart of an animation data processing method according to yet another embodiment;
FIG. 8C is a schematic diagram of the software architecture of a computer device in one embodiment;
FIG. 9A is a block diagram showing the structure of an animation data processing device in one embodiment;
FIG. 9B is a block diagram showing the structure of an animation data processing device according to another embodiment;
FIG. 10A is a block diagram showing the structure of an animation data processing device in one embodiment;
FIG. 10B is a block diagram showing the structure of an animation data processing device according to another embodiment;
FIG. 11 is an internal block diagram of a computer device in one embodiment;
fig. 12 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It will be understood that the terms "first," "second," and the like, as used herein, may be used to describe various elements, but these elements are not limited by these terms unless otherwise specified. These terms are only used to distinguish one element from another element.
The animation data processing method provided by the application can be applied to an application environment shown in fig. 1, wherein the first animation state machine 102 communicates with the second animation state machine 104 via a network or interface. The first animation state machine and the second animation state machine can be deployed on a terminal or a server. It will be appreciated that the first animation state machine and the second animation state machine may be located on the same terminal or the same server, or may be located on different terminals or different servers. The terminal may be, but is not limited to, various personal computers, notebook computers, smartphones, tablet computers and portable wearable devices, and the server may be implemented by a stand-alone server, or by a server cluster or cloud server composed of a plurality of servers.
Specifically, the first animation state machine acquires target interaction information, wherein the target interaction information carries an animation object identifier corresponding to a target animation object, acquires target animation configuration information corresponding to the animation object identifier, and the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object. The first animation state machine determines a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, wherein the target action has a corresponding target action type identifier. The first animation state machine obtains target motion data corresponding to the target motion and target node information of a target animation replacement node corresponding to the target motion type identifier, and sends the target motion data and the target node information to the second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, obtains target object data corresponding to a target animation object, and loads the target motion data and the target object data through the target animation replacement node to obtain target motion animation corresponding to the target animation object.
The animation data processing method provided by the application also relates to a blockchain technology. For example, a first animation state machine and a second animation state machine that perform an animation data processing method may be deployed on nodes of a blockchain. Animation configuration information corresponding to different animation object identifications, and node information of each animation replacement node can be stored on a blockchain.
In one embodiment, as shown in fig. 2, there is provided an animation data processing method, taking as an example the application of the method to the first animation state machine in fig. 1, the method comprising the steps of:
step S202, obtaining target interaction information, wherein the target interaction information carries an animation object identifier corresponding to a target animation object.
An animation state machine is used for monitoring and managing the action changes of an animation object and for controlling the generation of the animation object's action animations. In the animation engine, an animation state machine interfaces upward with the animation logic layer and controls the animation pipeline downward.
The first animation state machine is a state machine that is primarily used to monitor and manage the action changes of an animation object. The second animation state machine is a state machine primarily used to control the generation of the action animations of the animation object. If the first animation state machine monitors that the animation object generates action changes such as action triggering, jumping, or breaking, the first animation state machine timely informs the second animation state machine and sends the related data to the second animation state machine, so that the second animation state machine generates the corresponding action animation according to the received data. For example, the first animation state machine stores animation configuration information of an animation object, which can be used to determine what action change has occurred to the animation object. The first animation state machine may determine what action the animation object triggered based on the message sent by the animation logic layer and the locally stored animation configuration information. Further, the first animation state machine may send the action data of the triggered action to the second animation state machine, notify the second animation state machine that the animation object needs to generate the action animation of the triggered action, and the second animation state machine generates the corresponding action animation based on the action data of the triggered action. The second animation state machine comprises animation nodes for generating action animations, and can generate a corresponding action animation by loading action data on an animation node.
The second animation state machine may be a conventional animation state machine. It will be appreciated that the second animation state machine has all the functions of a conventional animation state machine, i.e. it retains the relevant characteristics and functions of a conventional animation state machine; however, with the animation data processing method of the present application, through cooperation of the first animation state machine and the second animation state machine, the number of animation nodes on the second animation state machine is reduced compared with the number of animation nodes on an animation state machine using the conventional method. The first animation state machine may be an animation state machine built outside the second animation state machine and grafted onto the second animation state machine. Through data interaction between the first animation state machine and the second animation state machine, the target action animation of the target animation object can be generated efficiently while resource consumption is controlled.
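As an illustration of this division of labor, the following C++ sketch models the grafting of the first animation state machine onto the second. It is a minimal sketch: every type and member name (ActionMessage, OnActionMessage, and so on) is assumed for illustration and does not come from the disclosure.

```cpp
#include <string>

// Illustrative payload: what the first state machine sends when it detects
// an action change (trigger, jump or break) on an animation object.
struct ActionMessage {
    std::string actionData;   // action data of the triggered action
    std::string nodePath;     // path locating the target animation replacement node
    std::string triggerInfo;  // node trigger information used to activate the node
};

// The second state machine keeps the role of a conventional animation state
// machine: it only generates action animations from the data it receives.
class SecondAnimationStateMachine {
public:
    void OnActionMessage(const ActionMessage& msg) {
        // A real implementation would find the replacement node via
        // msg.nodePath, activate it with msg.triggerInfo, then load
        // msg.actionData together with the object data.
        lastMessage = msg;
    }
    ActionMessage lastMessage;  // kept only so the sketch does something
};

// The first state machine monitors and manages action changes; it is built
// outside the second state machine and notifies it of every change.
class FirstAnimationStateMachine {
public:
    explicit FirstAnimationStateMachine(SecondAnimationStateMachine& sink)
        : sink_(sink) {}
    void NotifyActionChange(const ActionMessage& msg) { sink_.OnActionMessage(msg); }
private:
    SecondAnimationStateMachine& sink_;
};
```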
The animation object may be a virtual object, such as a flag in a virtual scene, or a virtual character, such as a player character or a non-player character in a game. The target animation object refers to the animation object corresponding to the target interaction information. An animation object identifier is an identifier used to uniquely identify an animation object, and may specifically be a character string comprising at least one of letters, numbers, and symbols. The target interaction information is information generated from the interaction between the user and the terminal. For example, if a box is displayed on the animation playing interface of the terminal and the user clicks the box, corresponding target interaction information is generated, the target interaction information carrying the animation object identifier corresponding to the box. The animation playing interface is an interface for playing an animation, for example, a game interface for playing a game animation, an interactive interface for playing an interactive animation, or a product interface for playing a product display animation.
It is understood that the target interaction information may carry an animation object identifier corresponding to at least one target animation object. For example, when the user clicks the attack control on the game interface to trigger multiple people to attack simultaneously, the generated interaction information may carry the animation object identifiers corresponding to the multiple target animation objects respectively.
Specifically, the first animation state machine obtains target interaction information and determines, based on the target interaction information, what action is triggered by the target animation object. The target interaction information may be generated based on a triggering operation performed by a user on a corresponding interface; for example, when the user clicks a trigger control corresponding to action one on an animation playing interface, target interaction information may be generated indicating that the animation object triggers action one. The target interaction information may also be generated based on scene change information; for example, in a game interface, when a user controls a game character to enter scene B from scene A, corresponding target interaction information is generated, which is used to trigger an initial action of the animation objects belonging to the environmental elements in scene B. Animation objects belonging to environmental elements may be, for example, flags in the scene, shoals of fish in water, or wild animals.
Step S204, obtaining target animation configuration information corresponding to the animation object identifier, wherein the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object.
The animation configuration information is used for configuring animation playing information of at least one action corresponding to the animation object. The animation configuration information includes animation play information of at least one action corresponding to the animation object. The animation configuration information may be configured and stored in the form of a graph. The target animation configuration information refers to animation configuration information corresponding to a target animation object, and is used for configuring animation playing information of at least one action corresponding to the target animation object. The animation playing information of one action comprises starting information of at least one action state of the action, and the starting information is used for controlling the starting of various action states of one action of the animation object, so that the generation of corresponding action animations can be triggered. The action states may include a trigger state, an interrupt state, and a jump state. It is to be appreciated that an animated object may exhibit at least one action, e.g., a virtual character including a jump action, an attack action, a dialog action, etc.
Specifically, the animation configuration information corresponding to each animation object may be generated in advance, and the animation configuration information and the animation object identifier of the corresponding animation object may be stored in association in the first animation state machine. Then, after the first animation state machine obtains the target interaction information, the first animation state machine may obtain corresponding animation configuration information as target animation configuration information according to the animation object identifier carried by the target interaction information. It will be appreciated that the same type of animation object may correspond to the same animation configuration information or may correspond to different animation configuration information.
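The association described above can be pictured as a simple lookup table in the first animation state machine. The sketch below is illustrative only; the field and function names (AnimationPlayInfo, GetTargetConfig) are assumptions, not part of the disclosure.

```cpp
#include <map>
#include <string>
#include <vector>

// Animation playing information of one action (illustrative fields only).
struct AnimationPlayInfo {
    std::string actionTypeId;  // action type identifier, e.g. "dance"
    std::string triggerInfo;   // action trigger information (start condition)
    std::string breakInfo;     // action interrupt information
    std::string dataPath;      // storage path of the action data
};

// Pre-generated configuration information stored in the first state machine,
// keyed by animation object identifier.
std::map<std::string, std::vector<AnimationPlayInfo>> g_animationConfigs;

// Resolve the identifier carried by the target interaction information to
// the target animation configuration information.
const std::vector<AnimationPlayInfo>* GetTargetConfig(const std::string& objectId) {
    auto it = g_animationConfigs.find(objectId);
    return it == g_animationConfigs.end() ? nullptr : &it->second;
}
```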
Step S206, determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, wherein the target action has a corresponding target action type identifier.
Wherein the target action refers to an action in which the target animation object is triggered. The action type identifier is an identifier for uniquely identifying the type of action, and may specifically include a character string of at least one of letters, numbers, and symbols. For example, the attack action type identifier may correspond to a plurality of different attack actions. The dancing action type identifier may correspond to a number of different dancing actions, e.g., a "street dancing" action, a "modern dancing" action, a "classical dancing" action, etc. The target action type identifier refers to an action type identifier of the target action.
Specifically, since the target animation configuration information includes animation play information of at least one action corresponding to the target animation object, the first animation state machine may perform information matching on the obtained target interaction information and the target animation configuration information, and determine the target action corresponding to the target animation object based on the matching result.
In one embodiment, the animation playing information includes action trigger information. The action trigger information refers to start information of the trigger state of an action, i.e. the trigger condition of the action. The action trigger information may include at least one piece of candidate interaction information for triggering the action, and may also include the characteristics and conditions that the target interaction information needs to satisfy to trigger the action. The first animation state machine matches the target interaction information against each piece of action trigger information in the target animation configuration information, and takes the action corresponding to the successfully matched action trigger information as the target action. For example, when a player triggers attack action one, the animation logic layer may generate target interaction information ("attack 1" + "1"), and the first animation state machine may determine that the target action is attack one because the target interaction information matches the action trigger information ("attack 1" + "1") of attack one in the target animation configuration information. It may be provided that an action is determined to be the target action only when the target interaction information acquired by the first animation state machine satisfies all trigger conditions of the action; alternatively, an action may be determined to be the target action when the target interaction information satisfies at least one trigger condition of the action. The action trigger information of each action in the same animation configuration information can be made distinct, so that disordered action triggering can be avoided.
In one embodiment, the target interaction information may carry a trigger action identifier of the action to be triggered, and the first animation state machine may match the trigger action identifier in the target interaction information with candidate action identifiers corresponding to the actions in the target animation configuration information, and use the action corresponding to the candidate action identifier that is successfully matched as the target action.
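Both matching variants reduce to comparing the target interaction information against per-action information from the configuration. Below is a minimal sketch of the trigger-information variant, assuming trigger information can be compared as a flat string and the first successful match wins; both assumptions, and all names, are illustrative.

```cpp
#include <optional>
#include <string>
#include <vector>

struct CandidateAction {
    std::string actionId;     // e.g. "attack_1"
    std::string triggerInfo;  // e.g. the concatenation "attack 1" + "1"
};

// Match the target interaction information against each candidate action's
// trigger information; the successfully matched candidate is the target action.
std::optional<CandidateAction> FindTargetAction(
        const std::string& targetInteractionInfo,
        const std::vector<CandidateAction>& candidates) {
    for (const auto& candidate : candidates) {
        if (candidate.triggerInfo == targetInteractionInfo) {
            return candidate;
        }
    }
    return std::nullopt;  // no action of this object is triggered
}
```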
Step S208, obtaining target motion data corresponding to the target motion and target node information of a target animation replacement node corresponding to the target motion type identifier, and sending the target motion data and the target node information to a second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, obtains target object data corresponding to the target animation object, and loads the target motion data and the target object data through the target animation replacement node to obtain target motion animation corresponding to the target animation object.
Action data refers to the animation information of one action, used for determining the gesture of each stage in the process from the beginning to the completion of the action. For example, if the animation object is built based on a skeletal model, the action data may be the animation information of the target skeletal nodes involved when the animation object triggers a certain action. The action data may specifically include data such as the number of target skeleton nodes, the skeleton identifier corresponding to each target skeleton node, connection relations, start positions, end positions, movement routes, and movement speeds. Loading the action data and the object data means loading them into memory and generating the corresponding action animation in the memory space based on them. The target action data is the action data corresponding to the target action.
The object data refers to animation information of an animation object, which is used for determining the basic pose of the animation object. For example, if the animation object is built based on a skeletal model, the object data may be skeletal information of the initial skeletal nodes that make up the animation object. The object data may specifically include data such as a bone identifier, bone shape information, connection relations, an initial position, and bone decoration information for each initial skeletal node of the animation object. It is understood that the number of initial skeletal nodes is greater than or equal to the number of target skeletal nodes. For example, when the animation object is a person, the initial skeletal nodes include skeletal nodes corresponding to human body components such as the limbs, trunk, head and neck, and when the target action is a punch, the target skeletal nodes may be the skeletal nodes corresponding to the part of the human body involved in the punch, for example, the skeletal nodes corresponding to the upper limbs. Furthermore, object data corresponding to different animation objects differs; even if different animation objects are of the same type, their object data may differ. However, animation objects of the same type share identical action data for the same action. For example, different players in the same game may use the same type of game character, which has the same skeleton and differs only in appearance, that is, in bone decoration information; these characters can trigger the same action, and only one copy of the action data corresponding to that action is needed. The target object data refers to the object data corresponding to the target animation object.
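For a skeletal model, the two kinds of data described above might be laid out as follows. All struct and field names are assumptions chosen to mirror the fields listed in the text, not an actual engine format.

```cpp
#include <array>
#include <string>
#include <vector>

// Action data: animation information of the target bone nodes involved in
// one action. One copy is shared by every object of the same skeleton type.
struct ActionData {
    struct BoneTrack {
        std::string boneId;             // skeleton identifier of a target bone node
        std::array<float, 3> startPos;  // start position
        std::array<float, 3> endPos;    // end position
        float moveSpeed;                // movement speed along the route
    };
    std::vector<BoneTrack> tracks;      // one track per target bone node
};

// Object data: the basic pose of one concrete animation object; it differs
// per object even within one object type (e.g. different decoration).
struct ObjectData {
    struct InitialBone {
        std::string boneId;              // initial bone node identifier
        std::array<float, 3> initialPos; // initial position
        std::string decoration;          // bone decoration (appearance) info
    };
    std::vector<InitialBone> bones;      // initial nodes >= target nodes
};
```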
One action type identifier corresponds to one animation replacement node. It will be appreciated that the plurality of actions corresponding to one action type identifier share the same or similar animation logic and differ only in their specific action data. Thus, the action data of the plurality of actions corresponding to one action type identifier can be sent to the same animation replacement node for loading to generate animations. For example, the overall logic that controls an animation object to exhibit various dance actions is relatively stable and fixed; only the action data differs between specific dance actions such as "street dance" and "modern dance". Therefore, there is no need to configure a large number of similar animation nodes in the second animation state machine with one animation node per dance action. Only one animation replacement node needs to be configured in the second animation state machine, with a correspondence between that node and the action type identifier representing dance; when a dance action is triggered, the action data of that dance action is loaded through the animation replacement node, so the action animations of different dance actions can all be generated through one animation replacement node, and the number of animation nodes in the second animation state machine is effectively controlled.
An animation replacement node may correspond to at least one action type identifier. For example, when the actions corresponding to two action type identifiers can never be triggered simultaneously, the two action type identifiers may correspond to the same animation replacement node. Even if two action type identifiers correspond to the same animation replacement node, only one action is triggered at any point in time, so only the action data of one action is loaded on the animation replacement node at a time, and no action confusion or animation confusion occurs.
The animation replacement node refers to an animation node for receiving action data sent by the first animation state machine. Each time new action data is received, the animation replacement node switches from loading the old action data to loading the new action data, generating a new action animation for playback. The node information includes data such as the node identification, node path, node trigger information, and node interrupt information. The position of the animation replacement node can be quickly found based on the node path. The animation replacement node can be activated based on the node trigger information to load action data and play the generated action animation. Playback of the action animation generated on the animation replacement node can be interrupted based on the node interrupt information.
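A sketch of what the node information could look like, together with the path-keyed index that lets the second animation state machine locate a replacement node quickly; the names and layout are assumed for illustration.

```cpp
#include <map>
#include <string>

// Node information of an animation replacement node (illustrative layout).
struct NodeInfo {
    std::string nodeId;       // node identification
    std::string nodePath;     // e.g. "root/action/replace_dance"
    std::string triggerInfo;  // activates the node and starts playback
    std::string breakInfo;    // interrupts the animation playing on the node
};

// The second state machine can index its replacement nodes by path so the
// target node is found quickly from the received target node information.
struct AnimationReplacementNode {
    bool active = false;
    std::string loadedActionData;  // replaced whenever new action data arrives
};

std::map<std::string, AnimationReplacementNode> g_replacementNodes;

AnimationReplacementNode* FindNode(const std::string& nodePath) {
    auto it = g_replacementNodes.find(nodePath);
    return it == g_replacementNodes.end() ? nullptr : &it->second;
}
```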
The target animation replacement node is the animation replacement node corresponding to the target action type identifier. An action animation refers to an animation in which an animation object shows the picture of a triggered action. The target action animation refers to the animation in which the target animation object shows the picture of the target action. The target action animation includes gesture information with which the target animation object exhibits the target action; for example, the target action animation is composed of a series of decomposition gestures with which target animation object A exhibits a kicking action. When the target animation object is built based on a skeletal model, the target action animation may be a three-dimensional skeletal animation composed of a series of time-varying skeletal poses of the target animation object, where one pose (Pose) is composed of the displacement, rotation and scaling of a set of bones (Bone).
Specifically, after determining the target action type identifier corresponding to the target action, the first animation state machine may determine the target animation replacement node based on the target action type identifier. After determining the target action and the target animation replacement node, the first animation state machine can acquire the target action data corresponding to the target action and the target node information of the target animation replacement node, and send the target action data and the target node information to the second animation state machine. After receiving the target action data and the target node information, the second animation state machine can find the target animation replacement node based on the target node information and activate it. After the target animation replacement node is activated, the second animation state machine can acquire the target object data corresponding to the target animation object, and load the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object. Finally, the second animation state machine may output the generated target action animation through the target animation replacement node; after being rendered into a video, the target action animation is played on an animation playing interface.
In one embodiment, the second animation state machine may activate the target animation replacement node based on target node trigger information in the target node information. The second animation state machine may also find a target animation replacement node based on the target node information, and send a node activation instruction to the target animation replacement node, so that the target animation replacement node enters an activated state based on the node activation instruction.
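Putting the receiving side together, the following sketch shows the activate-then-load sequence on the second animation state machine. All names are assumptions; object and action data are reduced to strings so the control flow stays visible.

```cpp
#include <iostream>
#include <map>
#include <string>

struct AnimationReplacementNode {
    bool active = false;
    std::string loadedActionData;  // replaced whenever new action data arrives
};

// Replacement nodes indexed by node path; object data indexed by object id.
std::map<std::string, AnimationReplacementNode> g_replacementNodes;
std::map<std::string, std::string> g_objectData;

// Second state machine: activate the target replacement node based on the
// received target node information, then load action data plus object data.
void OnReceiveFromFirstStateMachine(const std::string& nodePath,
                                    const std::string& actionData,
                                    const std::string& objectId) {
    auto it = g_replacementNodes.find(nodePath);
    if (it == g_replacementNodes.end()) return;  // unknown node path

    AnimationReplacementNode& node = it->second;
    node.active = true;                  // activation (node trigger information)
    node.loadedActionData = actionData;  // old action data is replaced

    // Loading both pieces of data yields the target action animation, which
    // the node then outputs for rendering and playback.
    std::cout << "generate animation: " << actionData << " + "
              << g_objectData[objectId] << " on node " << nodePath << '\n';
}
```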
In one embodiment, the second animation state machine obtains the target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object, including: the target action data and the target object data are fused through the target animation replacement node, a plurality of decomposition gestures corresponding to the target action are generated, and the target action animation is obtained based on the decomposition gestures.
A decomposition gesture is an action gesture corresponding to a decomposed action of the target action. For example, a kicking action is formed by a series of decomposition gestures corresponding to the kicking process of the animation object from lifting a foot to kicking to a target position. Each decomposition gesture carries a timestamp, and combining the decomposition gestures according to their timestamps forms the complete action animation of the action. The target action animation may ultimately be rendered into a video for playback. For example, referring to fig. 3, the target action animation corresponding to a kicking action of the target animation object may be rendered as an action video including video frames corresponding to three decomposition gestures.
Specifically, when loading the target action data and the target object data to generate the target action animation corresponding to the target animation object, the second animation state machine can fuse the target action data and the target object data through the target animation replacement node to form the fusion data corresponding to each decomposed action of the target animation object, and obtain the action gesture of the target animation object corresponding to each decomposed action based on the fusion data, so as to obtain a plurality of decomposition gestures of the target animation object when performing the target action. The second animation state machine may then generate the target action animation from the decomposition gestures through the target animation replacement node. Subsequently, during animation rendering, the target action animation may be rendered into a video consisting of a series of video frames in which the target animation object triggers the target action.
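The fusion step can be pictured as sampling interpolated poses along each target bone's movement route. The linear interpolation below is an illustrative assumption (a real engine would blend displacement, rotation and scaling for every target bone node), and none of the names come from the disclosure.

```cpp
#include <array>
#include <vector>

using Vec3 = std::array<float, 3>;

// One decomposition gesture: the pose of a moving bone at one timestamp.
struct DecompositionGesture {
    float timestamp;
    Vec3 bonePosition;
};

// Fuse one bone's route (start position from the object data, end position
// from the action data) into a series of decomposition gestures; combined by
// timestamp, the gestures form the complete action animation.
std::vector<DecompositionGesture> Decompose(const Vec3& start, const Vec3& end,
                                            float duration, int sampleCount) {
    std::vector<DecompositionGesture> gestures;
    if (sampleCount < 1) return gestures;
    for (int i = 0; i <= sampleCount; ++i) {
        float t = static_cast<float>(i) / static_cast<float>(sampleCount);
        gestures.push_back({t * duration,
                            {start[0] + t * (end[0] - start[0]),
                             start[1] + t * (end[1] - start[1]),
                             start[2] + t * (end[2] - start[2])}});
    }
    return gestures;
}
```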
In one embodiment, in the first animation state machine, the animation configuration information and the node information may be stored separately, i.e., in isolation. Of course, in order to improve the information acquisition efficiency, the node information of the animation replacement node may also be recorded in the animation configuration information.
In the above animation data processing method, target interaction information carrying an animation object identifier corresponding to a target animation object is obtained, and target animation configuration information corresponding to the animation object identifier is obtained, the target animation configuration information being used for configuring animation playing information of at least one action corresponding to the target animation object. A target action corresponding to the target animation object is determined based on the target interaction information and the target animation configuration information, the target action having a corresponding target action type identifier. Target action data corresponding to the target action and target node information of the target animation replacement node corresponding to the target action type identifier are obtained and sent to the second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, obtains target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node, thereby obtaining the target action animation corresponding to the target animation object. In this way, a correspondence exists between an animation replacement node in the second animation state machine and the action type identifier of an action, so that different actions corresponding to the same action type identifier can share the same animation replacement node, thereby effectively controlling the number of nodes in the second animation state machine and avoiding expansion of the number of nodes. After the first animation state machine determines the triggered target action of the target animation object, the target action data corresponding to the target action is sent to the corresponding target animation replacement node on the second animation state machine to generate the animation. Through the cooperation of the first animation state machine and the second animation state machine, the number of nodes in the second animation state machine can be reduced when it is used to generate animations, so that the aim of reducing resource consumption is achieved.
In one embodiment, the action carries an action type identifier, and before the target interaction information is obtained, the method further includes:
performing action clustering on the actions corresponding to the same action type identifier to obtain action clusters corresponding to each action type identifier; performing node allocation on each action cluster based on at least two candidate replacement nodes established in advance in the second animation state machine, to obtain the animation replacement node corresponding to each action cluster; and configuring the node information of the corresponding animation replacement node in the animation playing information of each action.
Obtaining the target node information of the target animation replacement node corresponding to the target action type identifier then includes: obtaining the target node information from the target animation playing information corresponding to the target action.
Action clustering is used for clustering the actions corresponding to the same action type identifier into the same cluster, that is, aggregating the actions belonging to the same action type. Node allocation is used for allocating an animation replacement node to the action cluster of each action type. The target animation playing information refers to the animation playing information corresponding to the target action.
Specifically, each action has a corresponding action type identifier, and if the animation replacement node corresponding to each action type identifier is determined, the animation replacement node corresponding to each action can also be determined. In order to improve the information acquisition efficiency, the node information of the animation replacement node corresponding to each action may be stored in advance in the animation play information corresponding to each action. Then, after determining the target action, the first animation state machine may directly obtain the target node information corresponding to the target animation replacement node from the target animation playing information corresponding to the target action.
Before the target interaction information is acquired, the first animation state machine may perform action clustering on the actions corresponding to the same action type identifier, obtaining an action cluster for each action type identifier. One action cluster comprises the actions corresponding to one action type identifier. In the second animation state machine, a plurality of idle candidate replacement nodes may be created in advance, and the second animation state machine may send the node information of each candidate replacement node to the first animation state machine. The first animation state machine may then perform node allocation on each action cluster based on the at least two candidate replacement nodes established in advance in the second animation state machine, obtaining the animation replacement node corresponding to each action cluster. That is, the candidate replacement node corresponding to each action type identifier is determined, yielding the animation replacement node for that identifier. It can be appreciated that nodes may be allocated randomly; one action type identifier corresponds to one animation replacement node, and the animation replacement nodes corresponding to different action type identifiers may be the same or different. When different action type identifiers share the same animation replacement node, it must be ensured that the actions corresponding to those identifiers cannot be triggered at the same time. Finally, the first animation state machine may configure the node information of the corresponding animation replacement node in the animation playing information of each action.
In one embodiment, the node creation and node information configuration may also be performed by a computer device. The computer device performs action clustering on the actions corresponding to the same action type identifier to obtain the action cluster for each action type identifier, performs node allocation on each action cluster based on at least two candidate replacement nodes established in advance in the second animation state machine to obtain the animation replacement node corresponding to each action cluster, and configures the node information of the corresponding animation replacement node in the animation playing information of each action. The computer device sends the animation configuration information composed of the animation playing information of each action to the first animation state machine for storage. Subsequently, after the first animation state machine determines the target action, it can directly obtain the target node information corresponding to the target animation replacement node from the target animation playing information corresponding to the target action.
In this embodiment, through action clustering and node allocation, each action type identifier corresponds to one animation replacement node. In this way, the action data of the actions sharing the same action type identifier can be sent to the same animation replacement node to generate the corresponding animations, effectively controlling the number of nodes in the second animation state machine. And because the node information of the corresponding animation replacement node is added to the animation playing information of each action, the node information of the target animation replacement node can be obtained quickly once the target action is determined.
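For illustration, the clustering and allocation just described can be sketched in a few lines of Python. This is a minimal sketch, not the patented implementation; the dictionary layout and the names (type_id, play_info, cluster_actions_by_type, and so on) are assumptions made for the example.

```python
from collections import defaultdict

def cluster_actions_by_type(actions):
    """Group actions sharing the same action type identifier into clusters."""
    clusters = defaultdict(list)
    for action in actions:
        clusters[action["type_id"]].append(action)
    return clusters

def allocate_replacement_nodes(clusters, candidate_nodes):
    """Assign one pre-created candidate replacement node to each cluster.

    Nodes may be reused across clusters whose actions can never be
    triggered simultaneously; here we simply cycle through the pool.
    """
    allocation = {}
    for i, type_id in enumerate(clusters):
        allocation[type_id] = candidate_nodes[i % len(candidate_nodes)]
    return allocation

def configure_node_info(actions, allocation):
    """Write the allocated node's information into each action's playing info."""
    for action in actions:
        action["play_info"]["node_info"] = allocation[action["type_id"]]

actions = [
    {"type_id": "gesture", "name": "wave", "play_info": {}},
    {"type_id": "gesture", "name": "bow",  "play_info": {}},
    {"type_id": "mount",   "name": "ride", "play_info": {}},
]
candidate_nodes = [{"path": "core|interaction|SimpleAnimation1"},
                   {"path": "core|interaction|SimpleAnimation2"}]

clusters = cluster_actions_by_type(actions)
configure_node_info(actions, allocate_replacement_nodes(clusters, candidate_nodes))
```

With this layout, "wave" and "bow" end up configured onto the same replacement node, which is exactly the sharing the embodiment describes.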
In one embodiment, as shown in fig. 4, the node information includes a node path, and determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information includes:
step S402, obtaining node information of animation replacement nodes corresponding to the actions respectively.
Step S404, determining an upper animation node corresponding to the animation replacement node based on the node path, and obtaining the node state of the upper animation node corresponding to the animation replacement node.
Wherein, in the second animation state machine, the animation nodes are expanded one level at a time. The node path records the chain of animation levels through which an animation replacement node is located. For example, if the node path of an animation replacement node is core|interaction|SimpleAnimation1, then "|" is the delimiter between animation node levels, SimpleAnimation1 denotes the animation replacement node, and core|interaction denotes the upper animation node of the animation replacement node.
The node states include an active state and a standby state. When an animation node is in the active state, the node has been activated and is in use, and its internal data participates in computation. When an animation node is in the standby state, the node has not been activated and is on standby, and its internal data does not participate in computation.
Specifically, to reduce the amount of information matching, the first animation state machine determines the target action through information matching only for actions whose animation replacement node has an upper animation node in the active state, avoiding blind information matching across all actions. Only when the upper animation node of an animation replacement node is active can that animation replacement node be successfully activated and used.
After the target interaction information is obtained, the first animation state machine can determine the animation replacement node corresponding to each action based on the action type identifier of each action of the target animation object, thereby obtaining the node information of the animation replacement node corresponding to each action. The first animation state machine may also obtain this node information from the animation playing information of each action in the target animation configuration information. The node information comprises a node path; since the node path records the chain of animation levels through which the animation replacement node is located, the first animation state machine can determine the upper animation node corresponding to the animation replacement node based on the node path, and then obtain the node state of that upper animation node. To do so, the first animation state machine can send a node state query request to the second animation state machine, the request carrying the node identifier of the animation node whose state is to be queried; the second animation state machine queries the node state of the upper animation node indicated by the request and returns that node state to the first animation state machine.
Step S406, taking, as candidate actions, the actions corresponding to the animation replacement nodes whose upper animation nodes are in the active state.
Step S408, trigger action detection is performed on the target animation object based on the target interaction information and the animation playing information corresponding to each candidate action, so as to obtain the target action corresponding to the target animation object.
Wherein, the trigger action detection is used for detecting which specific action of the target animation object is triggered by the target interaction information.
Specifically, after the first animation state machine obtains the node states of the upper animation nodes corresponding to the animation replacement nodes, it may determine whether any upper animation node is in the active state. If so, the first animation state machine may take the actions corresponding to the animation replacement nodes whose upper animation nodes are active as candidate actions. Further, the first animation state machine may perform trigger action detection on the target animation object based on the target interaction information and the animation playing information corresponding to each candidate action, and determine the target action corresponding to the target animation object from the detection result.
Overall, a first screening is performed based on the node states of the upper animation nodes of the animation replacement nodes, filtering the candidate actions out of all actions; a second screening is then performed by matching the target interaction information against the animation playing information corresponding to each candidate action, finally determining the target action from among the candidate actions.
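A compact sketch of this two-stage screening follows, assuming the node path convention above and a hypothetical query_node_state callback standing in for the node state query between the two state machines; the dictionary fields are illustrative.

```python
def parent_path(node_path: str) -> str:
    """Derive the upper animation node from a node path, e.g.
    'core|interaction|SimpleAnimation1' -> 'core|interaction'."""
    return node_path.rsplit("|", 1)[0]

def determine_target_action(actions, query_node_state, interaction_info):
    # First screening: keep actions whose replacement node's upper
    # animation node is currently in the active state.
    candidates = [
        a for a in actions
        if query_node_state(parent_path(a["play_info"]["node_info"]["path"])) == "active"
    ]
    # Second screening: match the interaction info against each
    # candidate's action trigger information.
    for a in candidates:
        if a["play_info"]["trigger_info"] == interaction_info:
            return a
    return None
```

Only the candidates survive to the matching step, which is where the reduction in traversal described below comes from.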
In one embodiment, the animation playing information includes action triggering information, and triggering action detection is performed on the target animation object based on the target interaction information and the animation playing information corresponding to each candidate action, so as to obtain a target action corresponding to the target animation object, including:
matching the target interaction information with the action triggering information corresponding to each candidate action, and taking the candidate action corresponding to the successfully matched action triggering information as the target action.
Specifically, the action trigger information refers to start information of a trigger state of one action, that is, a trigger condition of one action. The action triggering information may include at least one candidate interaction information for triggering an action, and may also include characteristics and conditions that need to be satisfied by the target interaction information for triggering an action. When trigger action detection is performed, the first animation state machine can match the target interaction information with action trigger information corresponding to each candidate action, and the candidate action corresponding to the action trigger information which is successfully matched is taken as the target action. For example, if the target interaction information is a, the action trigger information of the action one corresponding to the target animation object is a, the action trigger information of the action two is B, and the action trigger information of the action three is C, then the action one can be determined to be the target action according to the information matching. Thus, the target action can be quickly and accurately determined from the candidate actions through the matching of the target interaction information and the action triggering information in the animation playing information of the candidate actions.
In one embodiment, the target interaction information carries a trigger action identifier of the action to be triggered, and the animation playing information includes candidate action identifiers corresponding to the actions. When the trigger action detection is performed, the first animation state machine can match the trigger action identifier in the target interaction information with the candidate action identifiers corresponding to the candidate actions, and takes the candidate action corresponding to the candidate action identifier which is successfully matched as the target action.
In this embodiment, when the target action is determined based on the target interaction information and the target animation configuration information, the target interaction information is matched only against the animation playing information of the candidate actions, that is, the actions whose corresponding animation replacement nodes have upper animation nodes in the active state. There is therefore no need to match the target interaction information against the animation playing information of every action; only the candidate actions are considered, which greatly reduces the number of matching operations and the amount of information traversed, improves matching efficiency, and allows the target action to be determined quickly.
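As a hedged illustration of the matching itself, the action trigger information can be modeled as a set of candidate interaction inputs plus optional condition predicates, as described above; the field names here are assumptions.

```python
def matches(trigger_info: dict, interaction: dict) -> bool:
    """Trigger info may list several candidate interaction inputs and
    extra conditions the interaction must satisfy to trigger the action."""
    return (interaction["input"] in trigger_info["candidate_inputs"]
            and all(cond(interaction) for cond in trigger_info.get("conditions", [])))

# Example: action one triggers on input "A" with no extra conditions.
trigger_one = {"candidate_inputs": {"A"}}
print(matches(trigger_one, {"input": "A"}))  # True -> action one is the target
```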
In one embodiment, the animation playing information includes a storage path of motion data corresponding to a motion, and the obtaining the target motion data corresponding to the target motion includes:
Obtaining target animation playing information corresponding to a target action; and acquiring target action data based on a target storage path in the target animation playing information.
Specifically, when configuring the animation configuration information of an animation object, the action data of each action need not be added directly to the corresponding animation playing information; instead, the storage path of each action's data is added, which effectively reduces the data volume of the animation configuration information. Since the animation playing information includes the storage path of the action data corresponding to the action, the action data can be fetched quickly from that path. Thus, when acquiring the target action data corresponding to the target action, the first animation state machine can obtain the target animation playing information corresponding to the target action, read the target storage path of the target action data from it, and acquire the target action data from the target storage path. The storage path may specifically be the storage address of the action data on the hard disk.
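A minimal sketch of path-based loading, assuming for illustration that the action data is serialized as a JSON file at the configured path (the patent does not specify a format):

```python
import json
from pathlib import Path

def load_action_data(play_info: dict) -> dict:
    """Fetch action data lazily via the storage path kept in the animation
    playing information, instead of embedding the data in the config."""
    path = Path(play_info["storage_path"])
    with path.open("r", encoding="utf-8") as f:
        return json.load(f)
```

Keeping only the path in the configuration is what keeps the animation configuration information small.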
In one embodiment, the target node information includes a target node path and target node trigger information corresponding to the target animation replacement node. The target motion data and the target node information are sent to a second animation state machine, so that the second animation state machine activates a target animation replacement node based on the target node information, target object data corresponding to a target animation object is obtained, the target motion data and the target object data are loaded through the target animation replacement node, and a target motion animation corresponding to the target animation object is obtained, and the method comprises the following steps:
And sending the target action data, the target node path and the target node trigger information to a second animation state machine, so that the second animation state machine sends the target node trigger information to a target animation replacement node based on the target node path, the target action data and the target object data are loaded through the target animation replacement node to obtain target action animation corresponding to the target animation object, and the target node trigger information is used for activating the target animation replacement node.
The target node path refers to a node path corresponding to the target animation replacement node. The target node trigger information refers to node trigger information corresponding to the target animation replacement node. The node trigger information is used to activate the corresponding animation replacement node. The node trigger information may cause the node state of the animation replacement node to transition from the standby state to the active state. The target node trigger information is used to activate the target animation replacement node. Different animation replacement nodes may correspond to different node trigger information.
It is understood that an animation replacement node may correspond to at least one piece of node trigger information. Depending on which target action is triggered, the node trigger information sent by the first animation state machine to the second animation state machine may be the same or different. For example, suppose action one and action two belong to the same action type, and both correspond to animation replacement node one. When action one is triggered, the first animation state machine may send node trigger information one to the second animation state machine, so that the second animation state machine activates animation replacement node one based on node trigger information one. When action two is triggered, the first animation state machine may send node trigger information two to the second animation state machine, so that the second animation state machine activates animation replacement node one based on node trigger information two.
Specifically, node trigger information for activating the animation replacement node may be agreed between the first animation state machine and the second animation state machine, so that when the target animation replacement node needs to be activated after determining the target action, the first animation state machine may send the target node trigger information to the second animation state machine to activate the target animation replacement node. The first animation state machine may send the target action data and the target node information to the second animation state machine, and may specifically send the target action data, the target node path, and the target node trigger information to the second animation state machine. After the second animation state machine receives the data, the second animation state machine can find the target animation replacement node based on the target node path, send the target node trigger information to the target animation replacement node, and activate the target animation replacement node through the target node trigger information. After the target animation replacement node is activated, the second animation state machine can load target action data and target object data through the target animation replacement node to obtain target action animation corresponding to the target animation object.
In this embodiment, the second animation state machine may quickly find the target animation replacement node through the target node path, and activate the target animation replacement node through the target node trigger information, so that the target action animation may be quickly generated through the activated target animation replacement node.
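The activation flow might be sketched as follows; the message layout, the class names, and the AnimationID-style trigger string (borrowed from the fig. 6B description later in this document) are illustrative assumptions, not the patent's wire format.

```python
class ReplacementNode:
    def __init__(self):
        self.state = "standby"

    def activate(self, node_trigger):
        self.state = "active"      # trigger info flips standby -> active

    def load(self, action_data, object_data):
        # Placeholder for generating the action animation from the two inputs.
        return {"animation": (object_data, action_data)}

class SecondStateMachine:
    def __init__(self, nodes, object_data):
        self.nodes = nodes          # node path -> ReplacementNode
        self.object_data = object_data

    def receive(self, message):
        node = self.nodes[message["node_path"]]     # locate node via its path
        node.activate(message["node_trigger"])      # activate it
        return node.load(message["action_data"], self.object_data)

sm2 = SecondStateMachine({"core|interaction|SimpleAnimation1": ReplacementNode()},
                         object_data={"skeleton": "horse"})
sm2.receive({"action_data": {"clip": "wave"},
             "node_path": "core|interaction|SimpleAnimation1",
             "node_trigger": "AnimationID=999980"})
```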
In one embodiment, the animation playing information includes action interrupt information, the target node information includes a target node path and target node interrupt information corresponding to the target animation replacement node, and the method further includes:
in the playing process of the target action animation, when the interrupt interaction information matched with the action interrupt information corresponding to the target action is obtained, the target node path and the target node interrupt information are sent to a second animation state machine, so that the second animation state machine sends the target node interrupt information to a target animation replacement node based on the target node path, and the target node interrupt information is used for interrupting the playing of the target action animation.
The action interrupt information refers to the starting information of the interrupt state of an action, namely the interrupt condition of the action. The action interrupt information may include at least one piece of candidate interaction information for interrupting the action, and may also include characteristics and conditions that the target interaction information needs to satisfy to interrupt the action. The node interrupt information is used to interrupt the playing of the action animation generated on the corresponding animation replacement node. The target node interrupt information is used to interrupt the playing of the target action animation generated on the target animation replacement node.
It is understood that different animation replacement nodes may correspond to different node interrupt information, and one animation replacement node may correspond to at least one piece of node interrupt information. Depending on which target action is interrupted, the node interrupt information sent by the first animation state machine to the second animation state machine may be the same or different. For example, suppose action one and action two belong to the same action type, and both correspond to animation replacement node A. When interrupting action one, the first animation state machine may send node interrupt information one to the second animation state machine, so that the second animation state machine interrupts, based on node interrupt information one, the playing of the action animation of action one generated on animation replacement node A. When interrupting action two, the first animation state machine may send node interrupt information two to the second animation state machine, so that the second animation state machine interrupts, based on node interrupt information two, the playing of the action animation of action two generated on animation replacement node A.
Specifically, during the playing of the target action animation, the first animation state machine may continuously determine whether interrupt interaction information matching the action interrupt information corresponding to the target action is received, that is, whether interaction information satisfying the interrupt condition is received. If it is, the first animation state machine can send the target node path and the target node interrupt information to the second animation state machine. Upon receiving this data, the second animation state machine can determine the target animation replacement node based on the target node path, send the target node interrupt information to the target animation replacement node, and interrupt, through the target node interrupt information, the playing of the target action animation generated on that node. In doing so, the second animation state machine interrupts only the action animation corresponding to the target action of the target animation object, while the action animations of other animation objects that have not been interrupted or finished continue to play.
In this embodiment, if interrupt interaction information matching the action interrupt information of the target action is obtained during the playing of the target action animation, the first animation state machine can send the node interrupt information to the second animation state machine to stop the playing of the target action animation.
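A hedged sketch of this interrupt path, with hypothetical names and a first_sm_match callback standing in for the matching step:

```python
class ReplacementNode:
    def __init__(self):
        self.playing = None   # currently playing action animation, if any

    def interrupt(self, node_interrupt_info):
        # Only the animation generated on this node is stopped; nodes
        # serving other animation objects keep playing.
        self.playing = None

def handle_interaction(first_sm_match, node_table, target_action, interaction):
    """First state machine side: if the interaction satisfies the target
    action's interrupt condition, forward the node path + interrupt info."""
    info = target_action["play_info"]
    if first_sm_match(interaction, info["interrupt_info"]):
        node = node_table[info["node_info"]["path"]]
        node.interrupt(info["node_info"]["interrupt"])
```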
In one embodiment, the animation playing information includes action jump information, and the method further includes:
when jump interaction information matching the action jump information corresponding to the target action is obtained during the playing of the target action animation, obtaining jump action data of the jump action corresponding to the target action and obtaining first transition time information; and sending the jump action data, the first transition time information, and the target node information to the second animation state machine, so that the second animation state machine loads the jump action data and the target object data through the target animation replacement node corresponding to the target node information to obtain a jump action animation corresponding to the target animation object, and transitions from the target action animation to the jump action animation based on the first transition time information to obtain a fused action animation.
The action jump information refers to starting information of a jump state of an action, namely a jump condition of the action. The action jump information may include at least one candidate interaction information for jumping from the target action to the other action, and may also include characteristics, conditions, which need to be satisfied by the target interaction information for jumping from the target action to the other action. The jump action refers to the action that the target animation object needs to jump and display after starting action jump. The skip action data is action data corresponding to the skip action.
The transition time information is a mixed time of animation transition when a jump occurs between an action animation of a target action and an action animation of a jump action. It will be appreciated that the longer the mixing time, the slower and smoother the action jump, and the shorter the mixing time, the faster and harder the action jump. The transition time information between different motion animations can set a proper mixing time according to the requirements of the motion itself. The first transition time information refers to transition time information for transitioning from the target action to the jump action.
Specifically, during the playing of the target action animation, the first animation state machine may continuously determine whether jump interaction information matching the action jump information corresponding to the target action is received, that is, whether interaction information satisfying the jump condition is received. If the first animation state machine receives such jump interaction information, it can acquire the jump action data and the first transition time information of the jump action corresponding to the target action, and send the jump action data, the first transition time information, and the target node information to the second animation state machine. In one embodiment, the target animation playing information corresponding to the target action may further include the action identifier of the jump action and the first transition time information; the first animation state machine may then read the jump action's identifier and the first transition time information from the target animation playing information, obtain the animation playing information of the jump action based on that action identifier, and acquire the jump action data based on that animation playing information.
After the second animation state machine receives the jump action data, the first transition time information, and the target node information, it can determine the target animation replacement node based on the target node information, send the jump action data to the target animation replacement node, and load the jump action data and the target object data through that node to obtain the jump action animation corresponding to the target animation object. The second animation state machine then transitions, on the target animation replacement node, from the target action animation to the jump action animation based on the first transition time information, obtaining the fused action animation.
In one embodiment, transitioning from the target action animation to the jump action animation based on the first transition time information to obtain the fused action animation comprises: transitioning from the target action animation to the jump action animation based on the first transition time information to obtain the fused action animation, wherein the first transition time information is used to control the state parameter corresponding to the target action to gradually decrease from a first preset value to a second preset value, and to control the state parameter of the jump action to gradually increase from the second preset value to the first preset value, the first preset value being larger than the second preset value.
Wherein the state parameter is a parameter describing the presentation state of an action; it may also be referred to as a state weight. When the state parameter of an action equals the first preset value, the action is fully presented according to its original action logic; when it equals the second preset value, the action is hidden and not presented at all, and the animation object shows its base posture, for example a default standing posture. When the state parameter lies between the first and second preset values, the animation object is partway between the base posture and the fully unfolded action. The first preset value may be 1 and the second preset value may be 0.
The fused action animation shows the dynamic process of the target animation object jumping from the target action to the jump action; it contains the posture information the target animation object exhibits while jumping from the target action to the jump action.
Specifically, after the target motion animation and the skip motion animation are obtained, the second animation state machine may transition from the target motion animation of the target animation object to the skip motion animation based on the first transition time information, thereby obtaining the fusion motion animation. When the motion is transited, the second animation state machine can control the state parameters corresponding to the target motion to gradually decrease from the first preset value to the second preset value based on the first transition time information so as to gradually fade out the target motion, and control the state parameters corresponding to the jump motion to gradually increase from the second preset value to the first preset value based on the first transition time information so as to gradually present the jump motion and gradually fade in the jump motion. The fused action animation may eventually exhibit an animation effect that smoothly fades from the target action to the jump action.
In one embodiment, the second animation state machine may adopt a frozen transition mode: it freezes the timeline of the target action animation, that is, pauses that animation's local clock, and controls the state weight of the target action animation to decrease gradually from 1 to 0 while the state weight of the jump action animation increases gradually from 0 to 1. In other words, the current state of the target action is frozen at the target animation object's current position and mixed over time with the initial state of the jump action, transitioning between the two states; during the mixing, the incoming jump state advances forward in time.
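The weight ramp and the frozen-transition variant can be illustrated numerically. This sketch assumes poses are plain channel lists and that blending is linear, neither of which the patent specifies.

```python
def blend_weights(t, transition_time):
    """Outgoing state parameter falls from 1 to 0 while the incoming
    jump action's rises from 0 to 1 over the blend window."""
    alpha = min(max(t / transition_time, 0.0), 1.0)
    return 1.0 - alpha, alpha   # (outgoing weight, incoming weight)

def sample_blend(outgoing_pose, incoming_pose, t, transition_time, freeze=True):
    w_out, w_in = blend_weights(t, transition_time)
    # In a frozen transition the outgoing animation's local clock is
    # paused, so its pose is sampled once and held; the incoming
    # animation advances normally during the mix.
    pose_out = outgoing_pose(0.0) if freeze else outgoing_pose(t)
    pose_in = incoming_pose(t)
    return [w_out * a + w_in * b for a, b in zip(pose_out, pose_in)]

# Example: blend two 3-channel poses halfway through a 0.3 s window.
out_pose = lambda t: [1.0, 2.0, 3.0]
in_pose = lambda t: [0.0, 0.0, 0.0]
print(sample_blend(out_pose, in_pose, t=0.15, transition_time=0.3))  # [0.5, 1.0, 1.5]
```

A longer transition_time gives the slower, smoother jump described above; a shorter one gives a faster, harder cut.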
In one embodiment, the first animation state machine may determine, at each played frame, whether interrupt interaction information matching the action interrupt information corresponding to the target action has been received, and whether jump interaction information matching the action jump information corresponding to the target action has been received.
In this embodiment, if jump interaction information matching the action jump information of the target action is obtained during the playing of the target action animation, the first animation state machine can send the jump action data and the first transition time information to the second animation state machine, so that the action animation of the target action transitions smoothly into the action animation of the jump action.
In one embodiment, the animation playing information includes the storage path of the action data corresponding to the action. Acquiring the target action data corresponding to the target action and the target node information of the target animation replacement node corresponding to the target action type identifier, and sending the target action data and the target node information to the second animation state machine, includes:
generating a target animation instance corresponding to the target action of the target animation object based on the target animation playing information corresponding to the target action and the object attribute information corresponding to the animation object identifier; running a target animation instance, and acquiring target action data based on a target storage path in target animation playing information; and sending the target action data and the target node information to a second animation state machine through the target animation instance.
The object attribute information describes the attributes of an animation object, and may specifically include attribute information such as the animation object's name, gender, race, occupation, strength value, health value, appearance, and dress-up. The object attribute information of animation objects of the same type may share some information, for example the body shape of the animation object, and differ in other information, for example its dress-up. For instance, different players may all use a horse as their game character, yet each player can dress up their own character, producing horses with different appearances.
An animation instance occupies a memory space allocated to monitor the action state changes of an action of an animation object. Each animation instance is independent, and an old animation instance can be interrupted by a new one; that is, animation instances are stateless. When an action state change is detected, the first animation state machine can send the corresponding information to the second animation state machine through the animation instance, thereby notifying it.
Specifically, the object attribute information corresponding to the animation object and the animation object identifier are stored in association, so that the object attribute information corresponding to the target animation object can be obtained based on the animation object identifier of the target animation object. After determining the target action, the first animation state machine can acquire target animation playing information corresponding to the target action. The first animation state machine may create a target animation instance corresponding to a target action of the target animation object based on the target animation play information and the object attribute information corresponding to the target animation object. It will be appreciated that since the object attribute information is different for different animation objects, the resulting animation instances are different even if the same action is triggered by different animation objects. The first animation state machine runs a target animation instance, obtains target action data based on a target storage path in target animation playing information through the target animation instance, and sends the target action data and target node information to the second animation state machine through the target animation instance.
Subsequently, in the running process of the target animation instance, when the first animation state machine obtains the interrupt interaction information matched with the action interrupt information corresponding to the target action, the first animation state machine can send the target node path and the target node interrupt information to the second animation state machine through the target animation instance. That is, when the target animation instance monitors that the target animation object changes from the trigger state of the target action to the interrupt state, the first animation state machine may send the target node path and the target node interrupt information to the second animation state machine through the target animation instance to notify the second animation state machine to interrupt the playing of the target action animation.
In the running process of the target animation instance, when the first animation state machine obtains jump interaction information matched with action jump information corresponding to the target action, the first animation state machine can send jump action data, first transition time information and target node information to the second animation state machine through the target animation instance. That is, when the target animation instance monitors that the target animation object changes from the trigger state of the target action to the skip state, the first animation state machine may send skip action data, first transition time information and target node information to the second animation state machine through the target animation instance to inform the second animation state machine to perform action skip, so as to generate the fusion action animation.
In this embodiment, the efficiency of information interaction may be improved by creating a target animation instance to monitor the motion state change of the target motion and timely notifying the second animation state machine. When the target action is triggered, the first animation state machine can quickly send the target action data and the target node information to the second animation state machine through the target animation instance.
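A minimal sketch of such an instance, with the action-data loader and the notification callback injected so the example stays self-contained; all names are hypothetical.

```python
class AnimationInstance:
    """Stateless monitor for one (animation object, action) pair: it watches
    for trigger / interrupt / jump state changes and notifies the second
    animation state machine accordingly."""

    def __init__(self, play_info, object_attrs, load_action_data, notify):
        self.play_info = play_info        # target animation playing info
        self.object_attrs = object_attrs  # differs per animation object
        self.load = load_action_data      # e.g. the storage-path loader above
        self.notify = notify              # callback toward the second state machine

    def run(self):
        data = self.load(self.play_info)  # fetch action data via the storage path
        self.notify("trigger", data, self.play_info["node_info"])

    def on_interrupt(self):
        info = self.play_info["node_info"]
        self.notify("interrupt", info["path"], info["interrupt"])

    def on_jump(self, jump_data, transition_time):
        self.notify("jump", jump_data, transition_time, self.play_info["node_info"])
```

Because the object attribute information is an input, two objects triggering the same action still yield distinct instances, as the embodiment notes.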
In one embodiment, to reduce animation playing latency, the first animation state machine may pre-generate key animation instances corresponding to the key actions of the target animation object. When the target animation object is generated, the first animation state machine can start preloading the key animation instance corresponding to each key action, creating it before any interaction information matching the key action's trigger information is acquired. Then, when the target action determined from the target interaction information is a key action, the first animation state machine can directly fetch the corresponding key animation instance and, by running it, send the target action data and the target node information to the second animation state machine.
In one embodiment, the first animation state machine can create an additional new thread alongside the existing threads and run the target animation instance in the new thread, improving animation loading speed and reducing frame stuttering. For example, in a game scenario, the first animation state machine may create a new thread outside the main thread that controls the game logic and run the target animation instance there, avoiding any impact on the main thread.
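Under the assumption that the host runtime permits it, pushing the instance onto a worker thread could look roughly like this sketch:

```python
import threading

def run_instance_off_main_thread(instance):
    """Run the animation instance outside the main (game logic) thread
    so loading the action data does not stall the frame loop."""
    worker = threading.Thread(target=instance.run, daemon=True)
    worker.start()
    return worker
```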
In one embodiment, after the playing of the target action animation is finished, storing the target animation instance into the object pool; when the target interaction information is acquired next time, acquiring a target animation instance from the object pool, and multiplexing the target animation instance to monitor the action state change of the target action of the target animation object.
The object pool is a storage space opened up in memory to hold animation instances that need to be reused. Playing of the target action animation ends when the playing duration of the target action of the target animation object reaches the preset playing duration, or when the target action is interrupted or jumped away from.
Specifically, after the target action animation is played, the first animation state machine may store the target animation instance in the object pool. Thus, when the same target interaction information is acquired next time, the first animation state machine can directly acquire the target animation instance from the object pool, and the target animation instance is multiplexed to monitor the action state change of the target action of the target animation object. For example, upon initiation of a trigger state of a target action, sending target action data and target node information to a second animation state machine; when the interrupt state of the target action is started, the target node path and the target node interrupt information are sent to a second animation state machine; and when the jump state of the target action is started, the jump action data, the first transition time information and the target node information are sent to a second animation state machine.
Therefore, by storing the target animation instance in the object pool after the target action animation finishes playing, and by fetching it from the object pool and multiplexing it to monitor the action state changes of the target action the next time the same target interaction information is acquired, repeated memory allocation and release can be avoided, further reducing resource consumption.
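A minimal object-pool sketch under these assumptions (keying the pool by object and action identifiers is an illustrative choice, not mandated by the text):

```python
class AnimationInstancePool:
    """Keep finished animation instances for reuse, avoiding repeated
    memory allocation and release."""

    def __init__(self):
        self._pool = {}   # (object_id, action_id) -> instance

    def release(self, key, instance):
        self._pool[key] = instance   # store after playback ends

    def acquire(self, key, factory):
        # Reuse a pooled instance when the same interaction recurs,
        # otherwise build a fresh one.
        return self._pool.pop(key, None) or factory()
```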
In one embodiment, as shown in fig. 5, there is provided an animation data processing method, which is exemplified by the application of the method to the second animation state machine in fig. 1, the method comprising the steps of:
step S502, receiving target action data and target node information sent by a first animation state machine; the target motion data is motion data corresponding to a target motion of a target animation object, the target motion is determined by a first animation state machine based on target interaction information carrying an animation object identifier corresponding to the target animation object and target animation configuration information corresponding to the animation object identifier, the target animation configuration information is used for configuring animation play information of at least one motion corresponding to the target animation object, the target node information is node information of a target animation replacement node, and the target animation replacement node is determined based on a target motion type identifier corresponding to the target motion.
Step S504, activating a target animation replacement node based on the target node information.
Step S506, obtaining target object data corresponding to the target animation object.
And step S508, loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
Specifically, the first animation state machine acquires target interaction information carrying the animation object identifier corresponding to the target animation object, and obtains the target animation configuration information corresponding to that identifier; the target animation configuration information configures the animation playing information of at least one action corresponding to the target animation object. The first animation state machine determines the target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, the target action having a corresponding target action type identifier. Since each action type identifier corresponds to one animation replacement node, the action animations of actions belonging to the same action type can all be generated on that one node, multiplexing nodes and reducing resource consumption. The first animation state machine acquires the target action data corresponding to the target action and the target node information of the target animation replacement node corresponding to the target action type identifier, and sends the target action data and the target node information to the second animation state machine.
And after the second animation state machine receives the target action data and the target node information, the second animation state machine activates a target animation replacement node based on the target node information to acquire target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to acquire the target action animation corresponding to the target animation object. The second animation state machine can output the target action animation through the target animation replacement node, and the target action animation is displayed on the animation playing interface.
It can be appreciated that the specific processes of step S502 to step S508 may refer to the methods described in the foregoing related embodiments, and will not be repeated here.
According to the above animation data processing method, the target action data and target node information sent by the first animation state machine are received. The target action data is the action data corresponding to the target action of the target animation object; the target action is determined by the first animation state machine based on target interaction information carrying the animation object identifier corresponding to the target animation object and on the target animation configuration information corresponding to that identifier, where the target animation configuration information configures the animation playing information of at least one action corresponding to the target animation object; the target node information is the node information of the target animation replacement node, which is determined based on the target action type identifier corresponding to the target action. The target animation replacement node is activated based on the target node information; the target object data corresponding to the target animation object is acquired; and the target action data and the target object data are loaded through the target animation replacement node to obtain the target action animation corresponding to the target animation object. In this way, a correspondence exists between the animation replacement nodes in the second animation state machine and the action type identifiers, so different actions sharing the same action type identifier can share the same animation replacement node, effectively controlling the number of nodes in the second animation state machine and preventing it from ballooning. After the first animation state machine determines the triggered target action of the target animation object, it sends the corresponding action data to the corresponding target animation replacement node on the second animation state machine to generate the animation. Through the cooperation of the two state machines, the number of nodes needed in the second animation state machine when generating animation is reduced, thereby reducing resource consumption.
In one embodiment, activating a target animation replacement node based on target node information includes:
acquiring a forward node list corresponding to a target animation replacement node; when the forward animation node in the active state exists in the forward node list, the target animation replacement node is activated based on the target node information.
The forward node list comprises the forward animation nodes that have a direct connection relationship with the target animation replacement node and belong to the same animation hierarchy as it. The forward node list may include at least one forward animation node. The forward action animation corresponding to a forward animation node in the forward node list is generated earlier than the target action animation corresponding to the target animation replacement node; a forward animation node is connected in front of the target animation replacement node.
Specifically, when the second animation state machine activates the target animation replacement node based on the target node information, it may first obtain the forward node list corresponding to the target animation replacement node and determine whether any forward animation node in that list is in the active state. If there is such a node, the second animation state machine activates the target animation replacement node based on the target node information. This is because, in the second animation state machine, connection relationships are established between animation nodes according to a logical order; for example, connection lines exist between connected animation nodes, and data is transferred between animation nodes sequentially through those lines. Activating the target animation replacement node only when its forward node list contains an active forward animation node therefore effectively guarantees the ordering of the animation logic. When no forward animation node in the list is in the active state, the second animation state machine does not activate the target animation replacement node, even if it has received the target node information.
It will be appreciated that if the forward node list does not exist for the target animation replacement node, then the second animation state machine may activate the target animation replacement node directly based on the target node information. The forward node list corresponding to the different animation replacement nodes may include the same forward animation node or may include different forward animation nodes.
For example, suppose the currently activated animation node is the node that generates the horse-riding action animation, and the currently triggered target action is a dancing action. Animation logic does not allow an animation object to dance while riding a horse, so no connection relationship exists between the target animation replacement node and the currently activated animation node. The second animation state machine then does not activate the target animation replacement node, even after receiving the target node information.
If instead the currently activated animation node is the node that generates the standing-in-place action animation, and the currently triggered target action is the dancing action, then animation logic allows the animation object to dance, so a connection relationship exists between the target animation replacement node and the currently activated animation node. After receiving the target node information, the second animation state machine may activate the target animation replacement node.
In this embodiment, when a forward animation node in an activated state exists in a forward node list corresponding to a target animation replacement node, the target animation replacement node is activated based on target node information, so that the ordering of animation data processing can be effectively ensured, and the accuracy of a finally generated target action animation can be ensured.
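A hedged sketch of this activation gate, assuming nodes expose a state field as in the earlier sketches:

```python
def try_activate(node, forward_nodes, trigger_info):
    """Activate the replacement node only when some directly connected
    forward node is already active, or when it has no forward list at
    all, preserving the ordering of the animation logic."""
    if forward_nodes and not any(n.state == "active" for n in forward_nodes):
        # e.g. the dancing node's forward list contains the standing
        # node but not the riding node, so dancing cannot start from riding.
        return False
    node.activate(trigger_info)
    return True
```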
In one embodiment, loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object includes:
loading target action data and target object data through a target animation replacement node to obtain trigger action animations corresponding to target actions of target animation objects; acquiring a forward motion animation corresponding to a forward animation node in an activated state; acquiring second transition time information; and transitioning from the forward motion animation to the trigger motion animation based on the second transition time information to obtain the target motion animation.
The forward action animation is the animation generated on the active forward animation node corresponding to the target animation replacement node. Connection relationships and an ordering exist between animation nodes, so the forward action animation of the active forward animation node is plainly generated earlier than the trigger action animation of the target animation replacement node. The second transition time information refers to the transition time information for transitioning from the forward action to the target action.
Specifically, after the target animation replacement node is activated, the second animation state machine can load target action data and target object data through the target animation replacement node to obtain a trigger action animation corresponding to a target action of the target animation object, and then generate the target action animation based on the trigger action animation. When the target action animation is generated based on the trigger action animation, if a forward node list exists in the target animation replacement node and a forward animation node in an activated state exists in the forward node list, the second animation state machine can acquire second transition time information, and transition from the forward action animation to the trigger action animation based on the second transition time information to obtain the target action animation. It can be appreciated that the animation transition based on the second transition time information may refer to the specific content of the related embodiment of the animation transition based on the first transition time information, which is not described herein.
Of course, when the target motion animation is generated based on the trigger motion animation, if the forward node list does not exist in the target motion animation replacement node, the second animation state machine may directly generate the target motion animation based on the trigger motion animation.
In one embodiment, the second transition time information may be a default mixing time or a mixing time that matches the target action.
In this embodiment, if the target animation replacement node has a forward animation node in the active state, the second animation state machine transitions from the forward action animation to the trigger action animation based on the second transition time information to obtain the target action animation, so that the forward action animation blends smoothly into the target action animation, ensuring the smoothness of the animation.
In one embodiment, the target node information includes target node trigger information, and acquiring the second transition time information includes:
acquiring target transition time information corresponding to the target node triggering information; and taking the target transition time information as second transition time information.
Different actions may require different transition time information when transitioning from the forward action, so different transition time information can be configured for different node trigger information; each piece of node trigger information then corresponds to its own transition time. The target transition time information refers to the transition time information corresponding to the target node trigger information in the target node information.

Specifically, different actions can be configured with different node trigger information, and the action animations of different actions can be configured with different fade-in times. An association between the node trigger information of an action and its mixing time can therefore be established, which lets the second animation state machine quickly select the mixing time used to fade in the action animation of the target action based on the target node trigger information. After the second animation state machine obtains the target node information, it can determine the target node trigger information from the target node information, obtain the target transition time information corresponding to that trigger information, use the target transition time information as the second transition time information, and finally transition from the forward action animation to the trigger action animation based on the second transition time information to obtain the target action animation.
Referring to fig. 6A, the second animation state machine contains animation replacement node 1, animation replacement node 2, and animation replacement node 3. Each animation replacement node has a corresponding forward node list; a forward node list can include at least one forward animation node, and each forward animation node in the list has a direct connection relationship with the corresponding animation replacement node. The forward node list corresponding to animation replacement node 1 is forward node list 1, the forward node list corresponding to animation replacement node 2 is forward node list 2, and the forward node list corresponding to animation replacement node 3 is forward node list 3.
In addition, different incoming connections can be configured for an animation replacement node as required, and different incoming connections can correspond to different node trigger information and different mixing times. For example, some action animations require 0.2 seconds of mixing with the forward action animation on entry, and some do not. The animation transition effect can thus be controlled by configuring different incoming connections. Referring to fig. 6B, there are two connections between forward node list 1 and animation replacement node 1, connection (1) and connection (2). The node trigger information corresponding to connection (1) is Animation ID = 999980; that is, when the first animation state machine sends node trigger information such as "Animation ID = 999980" to the second animation state machine, the second animation state machine can activate animation replacement node 1 based on that node trigger information. The mixing time corresponding to connection (1) is 0.2s, i.e., the second transition time information between the forward action animation and the trigger action animation is 0.2s. The node trigger information corresponding to connection (2) is Animation ID = 999990, with a mixing time of 0.3s. That is, when the first animation state machine sends node trigger information such as "Animation ID = 999990" to the second animation state machine, the second animation state machine can likewise activate animation replacement node 1, but in this case the second transition time information between the forward action animation and the trigger action animation is 0.3s. In the animation node display interface corresponding to the second animation state machine, when a trigger operation acting on connection (1) is received, an information popup corresponding to connection (1) can be displayed, showing the corresponding node trigger information and mixing time. Similarly, when a trigger operation acting on connection (2) is received, an information popup corresponding to connection (2) can be displayed, showing the corresponding node trigger information and mixing time.
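A minimal sketch of this incoming-connection configuration follows, assuming a per-node table that maps node trigger information to its mixing time; the struct and class names are invented for illustration.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

// One incoming connection of an animation replacement node.
struct IncomingConnection {
    float blendSeconds;  // mixing time used as the second transition time
};

class IncomingConnectionTable {
public:
    void Configure(uint32_t animationId, float blendSeconds) {
        table_[animationId] = IncomingConnection{blendSeconds};
    }

    // Returns the transition time for the given node trigger information,
    // or nothing if that trigger ID was never configured.
    std::optional<float> TransitionTimeFor(uint32_t animationId) const {
        auto it = table_.find(animationId);
        if (it == table_.end()) return std::nullopt;
        return it->second.blendSeconds;
    }

private:
    std::unordered_map<uint32_t, IncomingConnection> table_;
};

// Usage mirroring Fig. 6B:
//   IncomingConnectionTable t;
//   t.Configure(999980, 0.2f);  // connection (1)
//   t.Configure(999990, 0.3f);  // connection (2)
```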
Of course, different outgoing connections can also be configured for an animation replacement node as required, and different outgoing connections can correspond to different node interrupt information. Furthermore, in addition to having different fade-in times on entry, some animations may have different fade-out times on exit. Referring to fig. 6C, animation replacement node 1 has two outgoing connections, connection (3) and connection (4). The node interrupt information corresponding to connection (3) is Interrupt Animation ID = 999910; that is, when the first animation state machine sends node interrupt information such as "Interrupt Animation ID = 999910" to the second animation state machine, the second animation state machine can stop playing the action animation generated on animation replacement node 1 based on that node interrupt information. No mixing time is configured for connection (3), meaning playback of the action animation generated on animation replacement node 1 is stopped immediately. The node interrupt information corresponding to connection (4) is Interrupt Animation ID = 999920, with a mixing time of 0.1s; that is, when the first animation state machine sends the node interrupt information "Interrupt Animation ID = 999920" to the second animation state machine, the second animation state machine can stop playing the action animation generated on animation replacement node 1, and when playback is stopped, the target action animation fades out gradually over 0.1s. In the animation node display interface corresponding to the second animation state machine, when a trigger operation acting on connection (3) is received, an information popup corresponding to connection (3) can be displayed, showing the corresponding node interrupt information. Similarly, when a trigger operation acting on connection (4) is received, an information popup corresponding to connection (4) can be displayed, showing the corresponding node interrupt information.
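A companion sketch for outgoing connections, under the same naming assumptions: each connection carries node interrupt information and an optional fade-out time, where a zero fade-out means playback stops immediately, as with connection (3).

```cpp
#include <cstdint>
#include <unordered_map>

// One outgoing connection of an animation replacement node.
struct OutgoingConnection {
    float fadeOutSeconds;  // 0.0f => stop playback immediately
};

class OutgoingConnectionTable {
public:
    void Configure(uint32_t interruptAnimationId, float fadeOutSeconds) {
        table_[interruptAnimationId] = OutgoingConnection{fadeOutSeconds};
    }

    // How long the current action animation should fade out when the given
    // node interrupt information arrives (0 means an immediate stop).
    float FadeOutFor(uint32_t interruptAnimationId) const {
        auto it = table_.find(interruptAnimationId);
        return it == table_.end() ? 0.0f : it->second.fadeOutSeconds;
    }

private:
    std::unordered_map<uint32_t, OutgoingConnection> table_;
};

// Usage mirroring Fig. 6C:
//   OutgoingConnectionTable t;
//   t.Configure(999910, 0.0f);  // connection (3): cut immediately
//   t.Configure(999920, 0.1f);  // connection (4): fade out over 0.1s
```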
In this embodiment, different node trigger information may correspond to different transition time information. When the forward action animation and the trigger action animation of the target action undergo an animation transition, the second animation state machine may acquire the target transition time information corresponding to the target node trigger information as the second transition time information, and transition from the forward action animation to the trigger action animation based on it, finally presenting a more appropriate and accurate animation transition effect.
In one embodiment, the method further comprises:
when the backward animation node exists in the target animation replacement node, determining action fine granularity parameters of the target animation object through the backward animation node, and adjusting the target action animation based on the action fine granularity parameters to obtain the updated action animation.
A backward animation node is an animation node for controlling detailed information of how the target animation object carries out an action, for example, a footstep IK (Inverse Kinematics) function node or a gaze (Look At) function node. The footstep IK function node is used to keep the feet of an animation object in contact with the ground when the object stands on uneven ground, e.g., a sloped roof or rocks. The gaze function node is used to control the eyes or head of an animation object to always stare at the camera and move along with it. A backward animation node may have a direct connection relationship with the target animation replacement node, e.g., the backward animation node is connected directly behind the target animation replacement node. The action fine-granularity parameters refer to the detail requirements of the target animation object when performing an action, for example, that the head of the target animation object must always follow the camera, or that its feet must always stay against the ground. Compared with the target action animation, the updated action animation carries more detail information, and the picture it displays is more reasonable, more accurate, and more vivid.
Specifically, if the target animation replacement node has a backward animation node, the target animation replacement node may output the target action animation to the backward animation node. The second animation state machine can determine the action fine granularity parameters of the target animation object through the backward animation node, and adjust the target action animation based on the action fine granularity parameters to obtain the updated action animation.
For example, referring to fig. 7, the backward animation node of the target animation replacement node is the animation node corresponding to a head-gaze (Head Look) function, which controls the head of the target animation object to always stare at the camera and follow its rotation. If the target animation replacement node is connected to the head-gaze function node, then in the finally generated updated action animation, the head of the target animation object always stares at the camera and rotates with it while the object performs the target action. In the target action animation alone, the target animation object would simply perform the target action without this level of detail. Finally, the updated action animation with richer detail can be displayed on the animation playing interface.
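The following sketch shows, under invented names, how a backward animation node might post-process the target action animation with action fine-granularity parameters; the head-look adjustment is reduced to overwriting a per-frame orientation, whereas a real node would run a proper look-at or IK solve on the skeleton.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Detail requirements of the target animation object while acting.
struct ActionFineGrainParams {
    bool headFollowsCamera = false;  // head always stares at the camera
    bool feetPinnedToGround = false; // footstep IK on uneven ground (unused here)
};

// Simplified per-frame sample of the target action animation.
struct AnimFrame {
    Vec3 headForward;
};

class BackwardAnimationNode {
public:
    explicit BackwardAnimationNode(ActionFineGrainParams params) : params_(params) {}

    // Adjusts each frame of the target action animation in place, yielding
    // the updated action animation with richer detail.
    void Adjust(std::vector<AnimFrame>& frames, const Vec3& cameraDir) const {
        if (!params_.headFollowsCamera) return;
        for (AnimFrame& f : frames)
            f.headForward = cameraDir;  // stand-in for a real look-at solve
    }

private:
    ActionFineGrainParams params_;
};
```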
In this embodiment, when the backward animation node exists in the target animation replacement node, the backward animation node controls the action fine granularity parameter of the target animation object, and adjusts the target action animation, so that the updated action animation with richer detail content can be obtained.
In one embodiment, after loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object, the method further includes:
acquiring an activation suspension instruction; switching the node state of the target animation replacement node from the active state to the standby state based on the activation suspension instruction; and sending instance interrupt information to the first animation state machine, wherein the instance interrupt information is used for stopping running a target animation instance corresponding to a target action of the target animation object, and the target animation instance is used for monitoring action state change of the target action and notifying the second animation state machine.
The activation suspension instruction is used for suspending the activated state of the target animation replacement node, converting its node state from the activated state to the standby state. The activation suspension instruction may be generated automatically when the node state of the upper-layer animation node corresponding to the target animation replacement node changes from the activated state to the standby state.

Specifically, when the node state of the upper-layer animation replacement node of the target animation replacement node changes from the activated state to the standby state, the second animation state machine may automatically generate an activation suspension instruction and, according to it, change the node state of the target animation replacement node from the activated state to the standby state. At this time, the second animation state machine may generate instance interrupt information and send it to the first animation state machine. After receiving the instance interrupt information, the first animation state machine can stop running the target animation instance corresponding to the target action of the target animation object based on that information. The reason is that if the node state of the upper-layer animation replacement node of the target animation replacement node changes from the activated state to the standby state, the target animation object has undergone a drastic action change, for example, a transition from a performance-type action to a combat-type action, and the target animation instance no longer needs to monitor the action state changes of the target action. The second animation state machine therefore actively sends the instance interrupt information to the first animation state machine; the instance interrupt information can carry the target node information corresponding to the target animation replacement node, so that the first animation state machine can quickly find the corresponding target animation instance based on the target node information and stop running it. In this way the second animation state machine reversely interrupts the running of the target animation instance on the first animation state machine, preventing a useless target animation instance from continuing to occupy computing resources.

For example, suppose the target action is a dance action, the target animation replacement node is currently in the activated state, and the upper-layer animation replacement node of the target animation replacement node is also in the activated state, where that upper-layer node represents the various performance-type actions the target animation object can currently perform. When the target animation object undergoes a drastic action change, switching from performance-type actions to combat-type actions, the node state of the upper-layer animation replacement node of the target animation replacement node changes from the activated state to the standby state, while the upper-layer animation node representing the various combat-type actions the object can now perform changes from the standby state to the activated state. Once the node state of the upper-layer animation replacement node of the target animation replacement node changes from the activated state to the standby state, the node state of the target animation replacement node should likewise change from the activated state to the standby state. At this point, the target animation instance corresponding to the dance action of the target animation object no longer needs to run, and need not continue to occupy computing resources.
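A minimal sketch of this reverse interrupt follows; the class layout, registration scheme, and callback style are all assumptions made for illustration, not the patent's actual interfaces.

```cpp
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <utility>

enum class NodeState { Activated, Standby };

// Carried by the instance interrupt information so the first animation state
// machine can locate the matching target animation instance.
struct InstanceInterruptInfo {
    uint64_t targetNodeId;
};

class FirstAnimationStateMachine {
public:
    void RegisterInstance(uint64_t nodeId, std::function<void()> stopInstance) {
        instances_[nodeId] = std::move(stopInstance);
    }

    void OnInstanceInterrupt(const InstanceInterruptInfo& info) {
        auto it = instances_.find(info.targetNodeId);
        if (it != instances_.end()) {
            it->second();          // stop running the target animation instance
            instances_.erase(it);  // it no longer monitors action state changes
        }
    }

private:
    std::unordered_map<uint64_t, std::function<void()>> instances_;
};

class SecondAnimationStateMachine {
public:
    explicit SecondAnimationStateMachine(FirstAnimationStateMachine& fsm) : fsm_(fsm) {}

    // Invoked when the upper-layer node of `nodeId` goes Activated -> Standby:
    // the replacement node is suspended and the reverse interrupt is sent.
    void OnUpperNodeStandby(uint64_t nodeId) {
        nodeStates_[nodeId] = NodeState::Standby;
        fsm_.OnInstanceInterrupt(InstanceInterruptInfo{nodeId});
    }

private:
    FirstAnimationStateMachine& fsm_;
    std::unordered_map<uint64_t, NodeState> nodeStates_;
};
```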
It will be appreciated that the relevant content of the target animation instance may be referred to the content described in the foregoing related embodiments, and will not be described herein.
In this embodiment, if the node state of the target animation replacement node is changed from the active state to the standby state, the second animation state machine interrupts the operation of the target animation instance in the first animation state machine in a reverse direction, so that the useless target animation instance can be prevented from continuously occupying computing resources, thereby saving computing resources.
The present application further provides an application scenario to which the above animation data processing method is applied. Specifically, the method can be applied in a game scenario, as follows:

the animation data processing method can be applied to a heavyweight MMO (Massively Multiplayer Online) game, simplifying the animation state machine of a complex character, reducing memory and CPU consumption at run time, and simplifying the data configuration flow. In an MMO game, the main game characters have rich actions and the animation state machine is extremely complex and large; because of the game genre there are many application scenarios such as combat and social interaction, and the animation logic of some of those actions is simple and similar. In the conventional method using a Morpheme Network, the number of animation nodes in the Network animation state machine grows linearly with the number of actions required by planning and cannot be restrained. With the present animation data processing method, the nodes in the Network animation state machine can be reused to the maximum extent and the scale of a game character's animation state machine can be effectively controlled, reducing the memory and CPU consumption of the animation system and making the game playable on more low-end devices. The Network animation state machine refers to the animation state machine in the Morpheme animation engine; Morpheme is a commercial animation engine supporting cross-platform animation resource compression, playback, rigid-body simulation, animation state machines, and other functions. In a game application scenario, the animation state machine interfaces upward with the game play logic and controls downward the computation of animation playback, blending, IK, and the like.
1. Information configuration
1. Establishing animation replacement nodes
A plurality of animation replacement nodes (which may also be referred to as simple animation nodes) are created in a Network animation state machine.
In addition, different incoming connections can be configured for the animation replacement nodes according to animation requirements; for example, some animations need 0.2 seconds of mixing on entry while others need none, and the flow of data is controlled by configuring different node trigger information. Furthermore, different outgoing connections can be configured for the animation replacement nodes according to animation requirements, and animation interruption can be controlled by configuring different node interrupt information. Animation nodes for controlling action detail information can also be connected after the animation replacement nodes as required.

2. Editing the animation plan table in an animation editor

The terminal is provided with an animation editor; a game developer can open the animation editor and edit the animation plan table (i.e., the animation configuration information) corresponding to each animation object in the game. An animation object in the game includes at least one of a player character (Player), a non-player character (NPC), and an item (Entity). The game developer can configure the animation configuration information corresponding to a game object in the animation editor; the animation configuration information includes the animation play information of at least one action, that is, the animation plan table includes at least one animation entry, and one animation entry represents the animation play information of one action. The game developer can add, in the animation plan table, the node path of an animation replacement node in the Network animation state machine (i.e., the second animation state machine), along with node information such as the node trigger information and node interrupt information corresponding to that animation replacement node. Multiple actions corresponding to the same action type identifier may correspond to the same animation replacement node.
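An illustrative shape for one such animation entry is sketched below. The field names are assumptions inferred from the surrounding text, not the actual table schema (Table 1 below is published only as an image in the original document).

```cpp
#include <cstdint>
#include <string>

// Hypothetical layout of one animation entry in the animation plan table.
struct AnimationEntry {
    std::string actionName;        // e.g. "dance_01"
    std::string actionTypeId;      // actions of one type share a replacement node
    std::string actionDataPath;    // storage path of the action data
    std::string nodePath;          // e.g. "core|interaction|simpleanimation1"
    uint32_t    nodeTriggerId;     // node trigger information, e.g. 999980
    uint32_t    nodeInterruptId;   // node interrupt information, e.g. 999910
    float       presetPlaySeconds; // preset playing duration of the animation
};
```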
The animation playing information may include information of various fields as shown in table 1.
TABLE 1 (field listing of the animation play information; published as an image in the original document)
Referring to fig. 8A, fig. 8A is an interface schematic of an animation plan table configuring a male game character in an animation editor. The terminal may send the animation plan table corresponding to each animation object to the first animation state machine for storage.
In one embodiment, the animation plan table supports hot updates, which can significantly improve production efficiency. A hot update synchronizes changes to the animation plan table to the client without closing the game client, so a change takes effect and can be viewed immediately. In addition, when adding a new animation, the game developer only needs to edit the animation plan table; there is no need to reconfigure the Network animation state machine or go through a complicated export flow. Here, the game client includes the first animation state machine and the second animation state machine.
2. Simple animation state machine (i.e., first animation state machine) and Network animation state machine (i.e., second animation state machine)
A large amount of simple and repetitive logic inside the Network animation state machine is extracted into an independently built simple animation state machine (SimpleAnimation), and the action data output by the simple animation state machine is loaded into a few fixed animation replacement nodes in the Network. In this way, node multiplexing is achieved while all the characteristics and effects of the Network animation state machine are retained, so the scale of the Network animation state machine is greatly reduced, and future growth of the animation nodes in the Network animation state machine becomes independent of the number of new requirements, depending only on the types of new requirements.
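The node-multiplexing idea can be made concrete with a small sketch: many actions resolve, by action type, onto a fixed set of replacement nodes, so the Network's node count tracks the number of action types rather than the number of actions. The registry below is an invented illustration, not part of Morpheme.

```cpp
#include <string>
#include <unordered_map>

class ReplacementNodeRegistry {
public:
    // Bind an action type to one fixed replacement node in the Network.
    void Bind(const std::string& actionTypeId, const std::string& nodePath) {
        byType_[actionTypeId] = nodePath;
    }

    // Every action of the same type resolves to the same fixed node, so
    // adding new actions of an existing type adds no nodes to the Network.
    // Throws std::out_of_range if the type was never bound.
    const std::string& Resolve(const std::string& actionTypeId) const {
        return byType_.at(actionTypeId);
    }

private:
    std::unordered_map<std::string, std::string> byType_;
};
```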
Referring to fig. 8B, the animation data processing method includes the steps of:
1. the first animation state machine acquires target interaction information, wherein the target interaction information carries an animation object identifier of a target animation object.
2. The first animation state machine determines whether an upper animation node of the animation replacement node is activated.
The first animation state machine may obtain a target animation plan table corresponding to the target animation object based on the animation object identification. The node paths of the animation replacement nodes corresponding to the actions are recorded in the target animation planning table, and the upper animation nodes of the animation replacement nodes can be determined according to the node paths. The first animation state machine sends a node state query request to the second animation state machine to query whether an upper animation node of the animation replacement node is in an active state.
3. If the upper animation node of the animation replacement node is activated, the first animation state machine judges whether the target interaction information is matched with the triggering condition of the action, so that the currently triggered target action is determined.
For an animation entry configured with node information of an animation replacement node, whether the target interaction information matches the entry's trigger condition is judged only when the upper-layer animation node of that replacement node is activated, which reduces the amount of information traversed. For example, if the node path of an animation replacement node is core|interaction|simpleanimation1, its upper-layer animation node is core|interaction; only when the node state of core|interaction is the activated state is it judged whether the action configured on this replacement node is triggered, that is, whether the target interaction information matches the action trigger information of the action.
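Deriving the upper-layer node from a node path reduces to trimming the last path segment. The helper below assumes '|' as the separator, as in the example above; the function name is illustrative.

```cpp
#include <string>

// Returns the upper-layer animation node of a '|'-separated node path,
// or an empty string if the path has no parent.
std::string UpperNodePath(const std::string& nodePath) {
    const auto pos = nodePath.rfind('|');
    return pos == std::string::npos ? std::string{} : nodePath.substr(0, pos);
}

// UpperNodePath("core|interaction|simpleanimation1") == "core|interaction",
// so trigger matching for this entry runs only while "core|interaction"
// is in the activated state.
```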
4. If the upper animation node of the animation replacement node is not activated, the target interaction information cannot trigger the generation and playing of the action animation of any action of the target animation object. If the upper animation node of the animation replacement node is activated, but the target interaction information is not matched with the triggering condition of any action of the target animation object, the target interaction information cannot trigger the generation and playing of the action animation of any action of the target animation object at the moment.
5. The first animation state machine creates a target animation instance corresponding to a target action of the target animation object, monitors action state change of the target action based on the target animation instance, and sends corresponding data to the second animation state machine when the action state changes.
6. After the target action is triggered, the first animation state machine sends target action data corresponding to the target action to a target animation replacement node in the second animation state machine through the target animation instance.
The first animation state machine can send the target node information corresponding to the target animation replacement node to the second animation state machine through the target animation instance, so that the second animation state machine activates the target animation replacement node based on the target node information.
7. And the second animation state machine loads the target action data and the target object data corresponding to the target animation object through the target animation replacement node, generates the target action animation corresponding to the target animation object, and plays the target action animation.
8. The second animation state machine detects whether the target animation replacement node has been deactivated.
When the upper-layer animation node of the target animation replacement node in the second animation state machine jumps, the node state of the target animation replacement node is converted from an active state to a standby state. At this time, the second animation state machine needs to reverse interrupt the operation of the target animation instance in the first animation state machine.
If the target animation replacement node is in the standby state, the second animation state machine can stop playing the target action animation.
9. The first animation state machine judges whether interrupt interaction information matched with the interrupt condition of the target action is acquired or not.
If the first animation state machine acquires the interrupt interaction information matched with the interrupt condition of the target action, the first animation state machine sends the target node interrupt information to a target animation replacement node in the second animation state machine, so that the second animation state machine can stop playing of the target action animation.
10. Judging whether the playing time of the target action animation reaches the preset playing time.
The first animation state machine may determine whether the playing time of the target motion animation reaches a preset playing time. If the playing time of the target action animation reaches the preset playing time, the first animation state machine sends notification information to the second animation state machine so that the second animation state machine finishes playing the target action animation.
Or the second animation state machine autonomously judges whether the playing time of the target action animation reaches the preset playing time. And if the playing time of the target action animation reaches the preset playing time, the second animation state machine finishes playing the target action animation.
On each video-frame update, it can be judged whether the target animation replacement node has been deactivated, whether interrupt interaction information matching the interrupt condition of the target action has been acquired, and whether the playing time of the target action animation has reached the preset playing time.
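These three per-frame checks can be summarized in a short sketch; the struct and its fields are invented names for the quantities described above.

```cpp
// State consulted on every video-frame update.
struct PlaybackContext {
    bool  nodeDeactivated;      // target replacement node left the activated state
    bool  interruptMatched;     // interrupt interaction info matched the action
    float playedSeconds;        // how long the target action animation has played
    float presetPlaySeconds;    // preset playing time from the animation entry
};

// True when playback of the target action animation should end this frame.
bool ShouldStopTargetActionAnimation(const PlaybackContext& ctx) {
    return ctx.nodeDeactivated ||
           ctx.interruptMatched ||
           ctx.playedSeconds >= ctx.presetPlaySeconds;
}
```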
In this embodiment, a lightweight simple animation state machine is realized. In the animation plan table stored in the simple animation state machine, the animation entries that need it are configured with the node information of animation replacement nodes in the Network animation state machine, including the node path, node trigger information, node interrupt information, and the like. When one animation entry in the animation plan table is triggered, that is, when one action of the animation object is triggered, the simple animation state machine sends the node information of the corresponding animation replacement node to the Network animation state machine, so that the designated animation replacement node is triggered through the node trigger information. The simple animation state machine then sends the corresponding action data to the designated animation replacement node, and the Network animation state machine uses its original logic as usual to realize animation blending, IK computation, and all its other functions. The method thus effectively controls the node count of the Morpheme Network while fully retaining its original functions, thereby reducing run-time memory consumption.
In one embodiment, as shown in FIG. 8C, the software architecture in a computer device may be divided into an animation interface layer, an animation logic layer, and an animation processing layer. The animation processing layer comprises a first animation state machine, a second animation state machine, an animation resource manager and an animation pipeline. The animation interface layer is used for receiving triggering operation of a user, generating initial interaction information and sending the initial interaction information to the animation logic layer. The animation logic layer is used for converting the initial interaction information and converting the information sent by the upper layer into information identifiable by the animation state machine. The animation pipeline is used for controlling the rendering and playing of the animation, and rendering the action animation into video for playing. The animation resource manager is used for managing animation resources.
The animation interface layer receives the interaction operation acted on the target animation object, generates initial interaction information based on the interaction operation, and sends the initial interaction information to the animation logic layer. The animation logic layer converts the initial interaction information to generate target interaction information, and sends the target interaction information to the first animation state machine. For example, an animation playing interface may be displayed on the terminal, where a target animation object is displayed on the animation playing interface. The user may trigger a target animated object on the animated playing interface or trigger related controls on the animated playing interface for controlling the target animated object to generate initial interaction information, e.g., the user clicks a "smart fox" control in the game interface to generate an interaction with the game character. An animation interface layer on the computer device obtains an interactive operation of a user on a target animation object, and generates initial interactive information based on the interactive operation. The animation logic layer and the first animation state machine agree on an information mode of the interactive information so that the first animation state machine can quickly recognize the interactive information. After receiving the initial interaction information, the animation logic layer can perform information mode conversion on the initial interaction information to obtain target interaction information, and then sends the target interaction information to the first animation state machine.
The first animation state machine acquires target animation configuration information corresponding to the target animation object, determines target actions corresponding to the target animation object based on the target interaction information and the target animation configuration information, and determines target animation replacement nodes based on target action type identifiers corresponding to the target actions. The first animation state machine acquires target motion data corresponding to the target motion and target node information of a target animation replacement node corresponding to the target motion type identifier, and sends the target motion data and the target node information to the second animation state machine.
And the second animation state machine activates a target animation replacement node based on the target node information, acquires target object data corresponding to the target animation object, loads the target action data and the target object data through the target animation replacement node, and acquires the target action animation corresponding to the target animation object. And the second animation state machine plays the target action animation at the animation interface layer through the animation pipeline.
The animation resource manager may count the number of concurrent uses of each animation resource; when the use count of an animation resource drops to 0, the resource is released from memory. For example, different player and non-player characters in a game may use the same type of animation object, with the same skeleton but different appearances, and may trigger the same action; in that case they can share the same action data. When the concurrent-use count of a piece of action data reaches 0, no one is using it, and the computer device can release that action data from memory.
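A minimal reference-counting sketch of this behavior follows, using shared_ptr so the concurrent-use count is implicit; the manager's API and caching scheme are assumptions for illustration.

```cpp
#include <memory>
#include <string>
#include <unordered_map>

struct ActionData {
    // compressed animation curves elided
};

class AnimationResourceManager {
public:
    // Concurrent users of the same path share one ActionData instance.
    std::shared_ptr<ActionData> Acquire(const std::string& path) {
        if (auto existing = cache_[path].lock())
            return existing;
        auto fresh = std::make_shared<ActionData>();  // stand-in for loading `path`
        cache_[path] = fresh;
        return fresh;
    }
    // When the last shared_ptr is dropped, the use count reaches zero and the
    // ActionData is freed automatically; the cached weak_ptr simply expires.

private:
    std::unordered_map<std::string, std::weak_ptr<ActionData>> cache_;
};
```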
It will be appreciated that the specific data processing procedures of the first animation state machine and the second animation state machine may refer to the methods described in the foregoing related embodiments, and will not be repeated herein.
It should be understood that, although the steps in the flowcharts of fig. 2, 4, and 5 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 2, 4, and 5 may include multiple sub-steps or stages; these sub-steps or stages are not necessarily performed at the same moment, but may be performed at different moments, and their order of execution is not necessarily sequential, as they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9A, there is provided an animation data processing apparatus, which may employ a software module or a hardware module, or a combination of both, as a part of a computer device, the apparatus specifically including: an interaction information acquisition module 902, a configuration information acquisition module 904, a target action determination module 906, and an action animation generation module 908, wherein:
The interaction information obtaining module 902 is configured to obtain target interaction information, where the target interaction information carries an animation object identifier corresponding to a target animation object.
The configuration information obtaining module 904 is configured to obtain target animation configuration information corresponding to the animation object identifier, where the target animation configuration information is used to configure animation playing information of at least one action corresponding to the target animation object.
The target action determining module 906 is configured to determine, based on the target interaction information and the target animation configuration information, a target action corresponding to the target animation object, where the target action has a corresponding target action type identifier.
The action animation generation module 908 is configured to obtain target action data corresponding to the target action and target node information of the target animation replacement node corresponding to the target action type identifier, and send the target action data and the target node information to the second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, obtains target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
In one embodiment, the action carries an action type identification. As shown in fig. 9B, the animation data processing device further includes:
the node information configuration module 901 is configured to perform action clustering on the actions corresponding to the same action type identifier to obtain the action cluster corresponding to each action type identifier, perform node allocation for each action cluster based on at least two candidate replacement nodes established in advance in the second animation state machine to obtain the animation replacement node corresponding to each action cluster, and configure the node information of the corresponding animation replacement node in the animation play information of each action. The action animation generation module is also used for acquiring the target node information from the target animation play information corresponding to the target action.
In one embodiment, the node information includes a node path and the animation playback information includes action trigger information. The target action determining module is further configured to obtain node information of animation replacement nodes corresponding to the respective actions, determine an upper animation node corresponding to the animation replacement node based on the node path, obtain a node state of the upper animation node corresponding to the animation replacement node, use an action corresponding to the animation replacement node whose node state is an active state as a candidate action, and perform trigger action detection on the target animation object based on the target interaction information and animation play information corresponding to the candidate actions, so as to obtain a target action corresponding to the target animation object.
In one embodiment, the animation playing information includes action triggering information, and the target action determining module is further configured to match the target interaction information with action triggering information corresponding to each candidate action, and use the candidate action corresponding to the action triggering information that is successfully matched as the target action.
In one embodiment, the animation playing information includes a storage path of action data corresponding to the action, and the action animation generating module is further configured to obtain target animation playing information corresponding to the target action, and obtain target action data based on the target storage path in the target animation playing information.
In one embodiment, the target node information includes a target node path and target node trigger information corresponding to the target animation replacement node. The action animation generation module is further used for sending the target action data, the target node path and the target node trigger information to the second animation state machine, so that the second animation state machine sends the target node trigger information to the target animation replacement node based on the target node path, the target action data and the target object data are loaded through the target animation replacement node to obtain target action animation corresponding to the target animation object, and the target node trigger information is used for activating the target animation replacement node.
In one embodiment, the animation playing information comprises action breaking information, and the target node information comprises a target node path and target node breaking information corresponding to the target animation replacing node. The action animation generation module is further used for sending the target node path and the target node breaking information to the second animation state machine when the breaking interaction information matched with the action breaking information corresponding to the target action is obtained in the playing process of the target action animation, so that the second animation state machine sends the target node breaking information to the target animation replacement node based on the target node path, and the target node breaking information is used for breaking the playing of the target action animation.
In one embodiment, the animation playback information includes action jump information. The action animation generation module is further used for acquiring skip action data of skip actions corresponding to the target actions when skip interaction information matched with the action skip information corresponding to the target actions is acquired in the playing process of the target action animation, acquiring first transition time information, and sending the skip action data, the first transition time information and the target node information to the second animation state machine so that the second animation state machine loads the skip action data and the target object data through the target animation replacement nodes corresponding to the target node information, and obtaining skip action animation corresponding to the target animation objects, and transitioning from the target action animation to the skip action animation based on the first transition time information to obtain the fusion action animation.
In one embodiment, the animation playback information includes a storage path corresponding to the action data of the action. The action animation generation module is further used for generating a target animation instance corresponding to the target action of the target animation object based on the target animation playing information corresponding to the target action and the object attribute information corresponding to the animation object identifier, running the target animation instance, acquiring target action data based on a target storage path in the target animation playing information, and sending the target action data and the target node information to the second animation state machine through the target animation instance.
In one embodiment, as shown in fig. 10A, there is provided an animation data processing apparatus, which may employ a software module or a hardware module, or a combination of both, as a part of a computer device, the apparatus specifically including: an information receiving module 1002, a node activating module 1004, an information obtaining module 1006, and an animation generating module 1008, wherein:
an information receiving module 1002, configured to receive target action data and target node information sent by the first animation state machine; the target motion data is motion data corresponding to a target motion of a target animation object, the target motion is determined by a first animation state machine based on target interaction information carrying an animation object identifier corresponding to the target animation object and target animation configuration information corresponding to the animation object identifier, the target animation configuration information is used for configuring animation play information of at least one motion corresponding to the target animation object, the target node information is node information of a target animation replacement node, and the target animation replacement node is determined based on a target motion type identifier corresponding to the target motion.
The node activating module 1004 is configured to activate a target animation replacement node based on target node information.
The information obtaining module 1006 is configured to obtain target object data corresponding to the target animation object.
And the animation generation module 1008 is used for loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
In one embodiment, the node activation module is further configured to obtain a forward node list corresponding to the target animation replacement node, and activate the target animation replacement node based on the target node information when the forward animation node in the active state exists in the forward node list.
In one embodiment, the animation generating module is further configured to load the target motion data and the target object data through the target animation replacing node, obtain a trigger motion animation corresponding to the target motion of the target animation object, obtain a forward motion animation corresponding to the forward motion animation node in the activated state, obtain second transition time information, and transition from the forward motion animation to the trigger motion animation based on the second transition time information, so as to obtain the target motion animation.
In one embodiment, the target node information includes target node trigger information, and the animation generating module is further configured to obtain target transition time information corresponding to the target node trigger information, and use the target transition time information as second transition time information.
In one embodiment, the animation generation module is further configured to determine, when the backward animation node exists in the target animation replacement node, a motion fine-granularity parameter of the target animation object through the backward animation node, and adjust the target motion animation based on the motion fine-granularity parameter, to obtain the updated motion animation.
In one embodiment, as shown in fig. 10B, the animation data processing device further includes:
the instance breaking module 1009 is configured to obtain an activation suspension instruction, convert the node state of the target animation replacement node from the activation state to the standby state based on the activation suspension instruction, send instance breaking information to the first animation state machine, where the instance breaking information is used to stop running a target animation instance corresponding to a target action of the target animation object, and the target animation instance is used to monitor an action state change of the target action and notify the second animation state machine.
The specific limitation regarding the animation data processing apparatus may be referred to as limitation of the animation data processing method hereinabove, and will not be described herein. The respective modules in the above-described animation data processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 11. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing various animation configuration information, node information of each animation replacement node and other data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an animation data processing method.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure thereof may be as shown in fig. 12. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an animation data processing method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structures shown in fig. 11 and 12 are block diagrams of only some of the structures associated with the present application and are not intended to limit the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than those shown, or may combine certain components, or may have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the steps in the above-described method embodiments.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the flows of the embodiments of the methods above. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, and the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this description.
The above embodiments merely represent several implementations of the present application, and their descriptions are relatively specific and detailed, but they are not to be construed as limiting the scope of the invention. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (15)

1. A method of processing animation data for application to a first animation state machine, the method comprising:
acquiring target interaction information, wherein the target interaction information carries an animation object identifier corresponding to a target animation object;
obtaining target animation configuration information corresponding to the animation object identifier, wherein the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object;
determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, wherein the target action has a corresponding target action type identifier;
and acquiring target action data corresponding to the target action and target node information of a target animation replacement node corresponding to the target action type identifier, transmitting the target action data and the target node information to a second animation state machine, enabling the second animation state machine to activate the target animation replacement node based on the target node information, acquiring target object data corresponding to the target animation object, and loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
2. The method of claim 1, wherein the action carries an action type identifier, and wherein prior to the obtaining the target interaction information, the method further comprises:
performing action clustering on the actions corresponding to the same action type identifier to obtain action clusters respectively corresponding to each action type identifier;
based on at least two candidate replacement nodes established in the second animation state machine in advance, performing node allocation on each action cluster to obtain animation replacement nodes respectively corresponding to each action cluster;
configuring node information of corresponding animation replacement nodes in animation playing information of each action;
the obtaining the target node information of the target animation replacement node corresponding to the target action type identifier comprises the following steps:
and acquiring the target node information from the target animation playing information corresponding to the target action.
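A hedged sketch of this claim-2 preparation step, assuming a simple round-robin policy for allocating the pre-created candidate nodes (the claim does not specify the allocation policy, and all names here are illustrative):

from collections import defaultdict

def assign_replacement_nodes(actions, candidate_nodes):
    """Cluster actions by action type identifier, then give each cluster
    one of the candidate replacement nodes created in advance."""
    clusters = defaultdict(list)
    for action in actions:
        clusters[action["type_id"]].append(action)
    node_of_type = {}
    for i, (type_id, members) in enumerate(sorted(clusters.items())):
        node = candidate_nodes[i % len(candidate_nodes)]   # round-robin
        node_of_type[type_id] = node
        for action in members:
            # Write the node info into each action's playing information.
            action["play_info"]["node"] = node
    return node_of_type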
3. The method of claim 1, wherein the node information comprises a node path and the animation playback information comprises action trigger information;
the determining the target action corresponding to the target animation object based on the target interaction information and the target animation configuration information comprises the following steps:
acquiring node information of animation replacement nodes corresponding to all the actions respectively;
determining an upper animation node corresponding to the animation replacement node based on the node path, and acquiring the node state of the upper animation node corresponding to the animation replacement node;
taking, as candidate actions, the actions corresponding to animation replacement nodes whose upper animation node is in the activated state;
and detecting the trigger action of the target animation object based on the target interaction information and the animation playing information corresponding to each candidate action to obtain the target action corresponding to the target animation object.
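One way to read claim 3 in code, assuming node paths are slash-separated strings and node states live in a dictionary (both assumptions made here for illustration):

def find_target_action(actions, node_states, interaction):
    """Keep only actions whose replacement node has an active parent,
    then run trigger detection on the surviving candidates."""
    candidates = []
    for action in actions:
        path = action["play_info"]["node_path"]  # e.g. "root/locomotion/slot0"
        upper = path.rsplit("/", 1)[0]           # the upper animation node
        if node_states.get(upper) == "active":
            candidates.append(action)
    for action in candidates:                    # trigger detection
        if action["play_info"]["trigger"] == interaction["trigger"]:
            return action
    return None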
4. The method of claim 1, wherein the target node information includes a target node path and target node trigger information corresponding to the target animation replacement node;
the step of sending the target action data and the target node information to a second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, obtains target object data corresponding to the target animation object, loads the target action data and the target object data through the target animation replacement node, and obtains target action animation corresponding to the target animation object, comprising:
and sending the target action data, the target node path and the target node trigger information to the second animation state machine, so that the second animation state machine sends the target node trigger information to the target animation replacement node based on the target node path, and the target action data and the target object data are loaded through the target animation replacement node to obtain the target action animation corresponding to the target animation object, wherein the target node trigger information is used for activating the target animation replacement node.
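The routing described here could look like the following sketch, with the node tree modelled as nested dictionaries as a stand-in for whatever structure the engine actually uses:

def deliver_trigger(root, node_path, trigger_info):
    """Walk the target node path and hand the activation trigger to the
    replacement node at its end."""
    node = root
    for name in node_path.strip("/").split("/")[1:]:  # skip the root segment
        node = node["children"][name]
    node["pending_trigger"] = trigger_info  # the node activates on this trigger
    return node

tree = {"children": {"locomotion": {"children": {"slot0": {"children": {}}}}}}
deliver_trigger(tree, "root/locomotion/slot0", {"activate": True})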
5. The method of claim 1, wherein the animation playback information comprises action jump information, the method further comprising:
when, during playback of the target action animation, jump interaction information matching the action jump information corresponding to the target action is acquired, acquiring jump action data of a jump action corresponding to the target action, and acquiring first transition time information;
and transmitting the jump action data, the first transition time information and the target node information to the second animation state machine, so that the second animation state machine loads the jump action data and the target object data through the target animation replacement node corresponding to the target node information to obtain a jump action animation corresponding to the target animation object, and transitions from the target action animation to the jump action animation based on the first transition time information to obtain a fused action animation.
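The transition in claim 5 amounts to a timed blend between two playing clips. A minimal linear cross-fade, with joint poses reduced to plain float lists purely for illustration:

def blend(from_pose, to_pose, elapsed, transition_time):
    """Cross-fade from the target action animation to the jump action
    animation over the first transition time."""
    w = min(elapsed / transition_time, 1.0)
    return [(1.0 - w) * a + w * b for a, b in zip(from_pose, to_pose)]

# Halfway through a 0.2 s transition the two poses are weighted equally:
print(blend([0.0, 1.0], [1.0, 3.0], elapsed=0.1, transition_time=0.2))
# -> [0.5, 2.0]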
6. The method according to any one of claims 1 to 5, wherein the animation playing information includes a storage path corresponding to action data of an action;
the acquiring the target action data corresponding to the target action and the target node information of the target animation replacement node corresponding to the target action type identifier, and sending the target action data and the target node information to a second animation state machine, comprises:
generating a target animation instance corresponding to the target action of the target animation object based on the target animation playing information corresponding to the target action and the object attribute information corresponding to the animation object identifier;
running the target animation instance, and acquiring the target action data based on a target storage path in the target animation playing information;
and sending the target action data and the target node information to the second animation state machine through the target animation instance.
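Claim 6's per-action instance might be sketched as below; the file-based storage path and the receive call are assumptions introduced for illustration:

class AnimationInstance:
    """Instance generated from the target animation playing information and
    the object attribute information; running it loads the clip from its
    storage path and forwards it, with the node info, to the second machine."""

    def __init__(self, play_info, object_attrs):
        self.play_info = play_info
        self.object_attrs = object_attrs

    def run(self, second_machine):
        with open(self.play_info["storage_path"], "rb") as f:
            action_data = f.read()               # the target action data
        second_machine.receive(action_data,
                               self.play_info["node"],
                               self.object_attrs["object_id"])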
7. An animation data processing method applied to a second animation state machine, the method comprising:
receiving target action data and target node information sent by a first animation state machine; wherein the target action data is action data corresponding to a target action of a target animation object, the target action is determined by the first animation state machine based on target interaction information carrying an animation object identifier corresponding to the target animation object and target animation configuration information corresponding to the animation object identifier, the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object, the target node information is node information of a target animation replacement node, and the target animation replacement node is determined based on a target action type identifier corresponding to the target action;
activating the target animation replacement node based on the target node information;
acquiring target object data corresponding to the target animation object;
and loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
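On the receiving side, the four steps of claim 7 map onto a small class; in this sketch ReplacementNode.load merely pairs clip and skeleton as a stand-in for real curve binding, and every name is illustrative:

class ReplacementNode:
    def __init__(self, name):
        self.name, self.active = name, False

    def load(self, action_data, object_data):
        # Stand-in for binding the clip's curves to the object's skeleton.
        return {"clip": action_data, "skeleton": object_data}


class PlaybackStateMachine:
    """Second state machine: owns the replacement nodes and object data."""

    def __init__(self, nodes, object_store):
        self.nodes = nodes                # node name -> ReplacementNode
        self.object_store = object_store  # object id -> skeleton/mesh data

    def receive(self, action_data, node_info, object_id):
        node = self.nodes[node_info["name"]]
        node.active = True                          # activate on node info
        object_data = self.object_store[object_id]  # target object data
        return node.load(action_data, object_data)  # target action animation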
8. The method of claim 7, wherein the activating the target animation replacement node based on the target node information comprises:
acquiring a forward node list corresponding to the target animation replacement node;
and when the forward animation nodes in the activated state exist in the forward node list, activating the target animation replacement node based on the target node information.
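The guard in claim 8, sketched against the ReplacementNode class above (the forward node list is assumed to be a plain Python list):

def try_activate(node, forward_nodes):
    """Activate the replacement node only when at least one node in its
    forward node list is already in the activated state."""
    if any(n.active for n in forward_nodes):
        node.active = True
        return True
    return False   # otherwise the node stays inactive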
9. The method according to claim 8, wherein loading the target action data and the target object data by the target animation replacement node to obtain a target action animation corresponding to the target animation object comprises:
loading the target action data and the target object data through the target animation replacement node to obtain a trigger action animation corresponding to the target action of the target animation object;
acquiring a forward motion animation corresponding to the forward animation node in an activated state;
acquiring second transition time information;
and transitioning from the forward motion animation to the trigger motion animation based on the second transition time information to obtain the target motion animation.
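Claim 9 extends the load with a fade from the forward-node animation into the freshly triggered one; a frame-stepped version of the earlier blend, again with poses as float lists:

def transition_frames(forward_clip, trigger_clip, fps, transition_s):
    """Yield blended poses while fading from the forward motion animation
    to the trigger action animation over the second transition time."""
    steps = max(int(fps * transition_s), 1)
    for i in range(steps + 1):
        w = i / steps                                  # 0.0 -> 1.0
        f = forward_clip[min(i, len(forward_clip) - 1)]
        t = trigger_clip[min(i, len(trigger_clip) - 1)]
        yield [(1 - w) * a + w * b for a, b in zip(f, t)]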
10. The method of claim 7, wherein the method further comprises:
when a backward animation node exists for the target animation replacement node, determining an action fine granularity parameter of the target animation object through the backward animation node, and adjusting the target action animation based on the action fine granularity parameter to obtain an updated action animation.
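A toy reading of claim 10's backward pass: per-joint fine granularity offsets applied on top of the already generated pose (the additive form is an assumption; the claim says only that the animation is adjusted):

def refine(pose, fine_params):
    """Backward-node pass: adjust the target action animation with
    per-joint fine granularity parameters."""
    return [v + fine_params.get(i, 0.0) for i, v in enumerate(pose)]

print(refine([0.0, 1.0, 2.0], {1: 0.25}))   # -> [0.0, 1.25, 2.0]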
11. The method according to any one of claims 7 to 10, wherein after the target action data and the target object data are loaded through the target animation replacement node to obtain the target action animation corresponding to the target animation object, the method further comprises:
acquiring an activation suspension instruction;
switching the node state of the target animation replacement node from an active state to a standby state based on the activation suspension instruction;
and sending instance interrupt information to the first animation state machine, wherein the instance interrupt information is used for stopping running a target animation instance corresponding to a target action of the target animation object, and the target animation instance is used for monitoring action state change of the target action and notifying the second animation state machine.
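Finally, claim 11's teardown in sketch form; interrupt_instance is a hypothetical callback standing in for the instance interrupt information sent back to the first state machine:

def suspend(node, first_machine, object_id, action_id):
    """Handle an activation suspension instruction: switch the replacement
    node from the active state to a standby state and stop the monitoring
    animation instance on the first machine."""
    node.active = False   # active -> standby
    first_machine.interrupt_instance(object_id, action_id)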
12. An animation data processing device, characterized in that the device comprises:
the interactive information acquisition module is used for acquiring target interactive information, wherein the target interactive information carries an animation object identifier corresponding to a target animation object;
the configuration information acquisition module is used for acquiring target animation configuration information corresponding to the animation object identifier, wherein the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object;
the target action determining module is used for determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, wherein the target action has a corresponding target action type identifier;
the action animation generation module is used for acquiring target action data corresponding to the target action and target node information of a target animation replacement node corresponding to the target action type identifier, sending the target action data and the target node information to a second animation state machine, enabling the second animation state machine to activate the target animation replacement node based on the target node information, acquiring target object data corresponding to the target animation object, and loading the target action data and the target object data through the target animation replacement node to obtain target action animation corresponding to the target animation object.
13. An animation data processing device, characterized in that the device comprises:
the information receiving module is used for receiving target action data and target node information sent by a first animation state machine; wherein the target action data is action data corresponding to a target action of a target animation object, the target action is determined by the first animation state machine based on target interaction information carrying an animation object identifier corresponding to the target animation object and target animation configuration information corresponding to the animation object identifier, the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object, the target node information is node information of a target animation replacement node, and the target animation replacement node is determined based on a target action type identifier corresponding to the target action;
a node activation module for activating the target animation replacement node based on the target node information;
the information acquisition module is used for acquiring target object data corresponding to the target animation object;
and the animation generation module is used for loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
14. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 6 or 7 to 11 when the computer program is executed.
15. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method of any one of claims 1 to 6 or 7 to 11.
CN202110631858.8A 2021-06-07 2021-06-07 Animation data processing method, device, computer equipment and storage medium Active CN113379590B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110631858.8A CN113379590B (en) 2021-06-07 2021-06-07 Animation data processing method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113379590A CN113379590A (en) 2021-09-10
CN113379590B (en) 2023-06-30

Family

ID=77575986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110631858.8A Active CN113379590B (en) 2021-06-07 2021-06-07 Animation data processing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113379590B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9463386B1 (en) * 2011-11-08 2016-10-11 Zynga Inc. State machine scripting in computer-implemented games
CN103927777A (en) * 2014-04-03 2014-07-16 北京星航机电装备有限公司 Organization and control method of three-dimensional animation process based on Mealy finite state automatas
CN105656688A (en) * 2016-03-03 2016-06-08 腾讯科技(深圳)有限公司 State control method and device
CN107180444A (en) * 2017-05-11 2017-09-19 腾讯科技(深圳)有限公司 A kind of animation producing method, device, terminal and system
CN108650217A (en) * 2018-03-21 2018-10-12 腾讯科技(深圳)有限公司 Synchronous method, device, storage medium and the electronic device of action state
CN109731334A (en) * 2018-11-22 2019-05-10 腾讯科技(深圳)有限公司 Switching method and apparatus, storage medium, the electronic device of state
CN110413758A (en) * 2019-07-30 2019-11-05 中国工商银行股份有限公司 Dialog box framework construction method and device based on machine learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"分而治之",一种AI和动画系统的架构;yptianma;《https://blog.csdn.net/yptianma/article/details/103268517》;第1-8页 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40052793)
GR01 Patent grant