CN113379590A - Animation data processing method, animation data processing device, computer equipment and storage medium


Info

Publication number: CN113379590A
Application number: CN202110631858.8A
Authority: CN (China)
Prior art keywords: animation, target, action, node, information
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN113379590B (en)
Inventors: 陈广宇, 杨双才, 肖瑞焜
Current Assignee: Tencent Technology Shanghai Co Ltd
Original Assignee: Tencent Technology Shanghai Co Ltd
Legal events: application CN202110631858.8A filed by Tencent Technology Shanghai Co Ltd; publication of CN113379590A; application granted; publication of CN113379590B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/60: Memory management
    • G06T13/00: Animation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to an animation data processing method, an animation data processing device, computer equipment and a storage medium. The method comprises: acquiring target interaction information carrying an animation object identifier corresponding to a target animation object; acquiring target animation configuration information corresponding to the animation object identifier; determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information; acquiring target action data corresponding to the target action and target node information of a target animation replacement node corresponding to the target action type identifier of the target action; and sending the target action data and the target node information to a second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, acquires target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object. By adopting the method, the resource consumption during animation processing can be reduced.

Description

Animation data processing method, animation data processing device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for processing animation data, a computer device, and a storage medium.
Background
With the development of computer technology, information is no longer displayed only in static forms such as text; dynamic forms such as animation have also emerged.
In the conventional technology, generating an animation for an action requires creating a corresponding animation node on an animation state machine in the animation engine, and different actions must be created on different animation nodes of the state machine. As animation requirements increase, however, the number of animation nodes on the animation state machine grows linearly; the accumulating nodes occupy ever more memory, resulting in a large amount of resource consumption.
Disclosure of Invention
In view of the above technical problems, it is desirable to provide an animation data processing method, apparatus, computer device, and storage medium that can reduce resource consumption.
A method of animation data processing, the method comprising:
acquiring target interaction information, wherein the target interaction information carries an animation object identifier corresponding to a target animation object;
acquiring target animation configuration information corresponding to the animation object identifier, wherein the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object;
determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, wherein the target action has a corresponding target action type identifier;
acquiring target action data corresponding to the target action and target node information of a target animation replacement node corresponding to the target action type identifier, and sending the target action data and the target node information to a second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, acquires target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain a target action animation corresponding to the target animation object.
In one embodiment, the node information includes a node path, the animation playing information includes action triggering information, and determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information includes:
acquiring node information of animation replacement nodes corresponding to each action;
determining an upper-layer animation node corresponding to the animation replacement node based on the node path, and acquiring a node state of the upper-layer animation node corresponding to the animation replacement node;
taking the action corresponding to the animation replacement node with the node state of the upper animation node being the activated state as a candidate action;
and performing trigger action detection on the target animation object based on the target interaction information and the animation playing information corresponding to each candidate action to obtain a target action corresponding to the target animation object.
In one embodiment, the animation playing information includes action triggering information, and the triggering action detection is performed on the target animation object based on the target interaction information and the animation playing information corresponding to each candidate action, so as to obtain a target action corresponding to the target animation object, including:
and matching the target interaction information with the action trigger information corresponding to each candidate action, and taking the candidate action corresponding to the action trigger information which is successfully matched as the target action.
In one embodiment, the animation playing information includes a storage path of the action data corresponding to each action, and the acquiring of target action data corresponding to the target action includes:
acquiring target animation playing information corresponding to the target action;
and acquiring target action data based on the target storage path in the target animation playing information.
In one embodiment, the animation playing information includes action interruption information, the target node information includes a target node path corresponding to the target animation replacement node and target node interruption information, and the method further includes:
in the playing process of the target action animation, when interruption interaction information matching the action interruption information corresponding to the target action is acquired, the target node path and the target node interruption information are sent to the second animation state machine, so that the second animation state machine sends the target node interruption information to the target animation replacement node based on the target node path, the target node interruption information being used to interrupt the playing of the target action animation.
An animation data processing apparatus, the apparatus comprising:
the interactive information acquisition module is used for acquiring target interactive information, and the target interactive information carries an animation object identifier corresponding to a target animation object;
the configuration information acquisition module is used for acquiring target animation configuration information corresponding to the animation object identifier, and the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object;
the target action determining module is used for determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, and the target action has a corresponding target action type identifier;
and the action animation generation module is used for acquiring target action data corresponding to the target action and target node information of a target animation replacement node corresponding to the target action type identifier, sending the target action data and the target node information to the second animation state machine, enabling the second animation state machine to activate the target animation replacement node based on the target node information, acquiring target object data corresponding to the target animation object, and loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring target interaction information, wherein the target interaction information carries an animation object identifier corresponding to a target animation object;
acquiring target animation configuration information corresponding to the animation object identifier, wherein the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object;
determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, wherein the target action has a corresponding target action type identifier;
acquiring target action data corresponding to the target action and target node information of a target animation replacement node corresponding to the target action type identifier, and sending the target action data and the target node information to a second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, acquires target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain a target action animation corresponding to the target animation object.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring target interaction information, wherein the target interaction information carries an animation object identifier corresponding to a target animation object;
acquiring target animation configuration information corresponding to the animation object identifier, wherein the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object;
determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, wherein the target action has a corresponding target action type identifier;
acquiring target action data corresponding to the target action and target node information of a target animation replacement node corresponding to the target action type identifier, and sending the target action data and the target node information to a second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, acquires target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain a target action animation corresponding to the target animation object.
A method of animation data processing, the method comprising:
receiving target action data and target node information sent by a first animation state machine; the target action data is action data corresponding to a target action of a target animation object, the target action is determined by a first animation state machine based on target interaction information carrying an animation object identifier corresponding to the target animation object and target animation configuration information corresponding to the animation object identifier, the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object, the target node information is node information of a target animation replacement node, and the target animation replacement node is determined based on a target action type identifier corresponding to the target action;
activating a target animation replacement node based on the target node information;
acquiring target object data corresponding to a target animation object;
and loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
In one embodiment, activating a target animation replacement node based on target node information includes:
acquiring a forward node list corresponding to a target animation replacement node;
and when the forward animation nodes in the activated state exist in the forward node list, activating the target animation replacement nodes based on the target node information.
In one embodiment, the step of loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object includes:
loading the target action data and the target object data through the target animation replacement node to obtain a trigger action animation corresponding to the target action of the target animation object;
acquiring a forward motion animation corresponding to a forward animation node in an activated state;
acquiring second transition time information;
and transitioning from the forward action animation to the trigger action animation based on the second transition time information to obtain the target action animation.
In one embodiment, the target node information includes target node trigger information, and the obtaining of the second transition time information includes:
acquiring target transition time information corresponding to target node trigger information;
and taking the target transition time information as second transition time information.
An animation data processing apparatus, the apparatus comprising:
the information receiving module is used for receiving target action data and target node information sent by the first animation state machine; the target action data is action data corresponding to a target action of a target animation object, the target action is determined by a first animation state machine based on target interaction information carrying an animation object identifier corresponding to the target animation object and target animation configuration information corresponding to the animation object identifier, the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object, the target node information is node information of a target animation replacement node, and the target animation replacement node is determined based on a target action type identifier corresponding to the target action;
the node activation module is used for activating the target animation replacement node based on the target node information;
the information acquisition module is used for acquiring target object data corresponding to the target animation object;
and the animation generation module is used for loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
receiving target action data and target node information sent by a first animation state machine; the target action data is action data corresponding to a target action of a target animation object, the target action is determined by a first animation state machine based on target interaction information carrying an animation object identifier corresponding to the target animation object and target animation configuration information corresponding to the animation object identifier, the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object, the target node information is node information of a target animation replacement node, and the target animation replacement node is determined based on a target action type identifier corresponding to the target action;
activating a target animation replacement node based on the target node information;
acquiring target object data corresponding to a target animation object;
and loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
receiving target action data and target node information sent by a first animation state machine; the target action data is action data corresponding to a target action of a target animation object, the target action is determined by a first animation state machine based on target interaction information carrying an animation object identifier corresponding to the target animation object and target animation configuration information corresponding to the animation object identifier, the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object, the target node information is node information of a target animation replacement node, and the target animation replacement node is determined based on a target action type identifier corresponding to the target action;
activating a target animation replacement node based on the target node information;
acquiring target object data corresponding to a target animation object;
and loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
According to the animation data processing method, apparatus, computer device, and storage medium, target interaction information carrying an animation object identifier corresponding to a target animation object is acquired, and target animation configuration information corresponding to the animation object identifier is acquired, the target animation configuration information being used for configuring animation playing information of at least one action corresponding to the target animation object. A target action corresponding to the target animation object is determined based on the target interaction information and the target animation configuration information, the target action having a corresponding target action type identifier. Target action data corresponding to the target action and target node information of a target animation replacement node corresponding to the target action type identifier are acquired and sent to a second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, acquires target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object. In this way, because an animation replacement node in the second animation state machine corresponds to the action type identifier of an action, different actions corresponding to the same action type identifier can share the same animation replacement node, thereby effectively controlling the number of nodes in the second animation state machine and preventing that number from expanding. After the first animation state machine determines the target action triggered by the target animation object, it sends the target action data corresponding to the target action to the corresponding target animation replacement node on the second animation state machine for animation generation. Through the cooperation of the first animation state machine and the second animation state machine, the number of nodes in the second animation state machine can be reduced when generating animation, thereby achieving the purpose of reducing resource consumption.
Drawings
FIG. 1 is a diagram showing an application environment of an animation data processing method according to an embodiment;
FIG. 2 is a flowchart illustrating a method of processing animation data according to an embodiment;
FIG. 3 is a schematic view of a kicking action in one embodiment;
FIG. 4 is a flow diagram illustrating a determination of a target action in another embodiment;
FIG. 5 is a flowchart showing a method of processing animation data according to another embodiment;
FIG. 6A is a diagram of a forward node list and animation replacement nodes, in one embodiment;
FIG. 6B is a schematic diagram of an interface for an animation node display interface, according to one embodiment;
FIG. 6C is an interface diagram of an animation node presentation interface, according to another embodiment;
FIG. 7 is a diagram of a target animation replacement node and a backward animation node in one embodiment;
FIG. 8A is a diagram illustrating an interface for configuring animation configuration information, according to one embodiment;
FIG. 8B is a flowchart showing a method of processing animation data according to still another embodiment;
FIG. 8C is a diagram illustrating a software architecture of a computer device in one embodiment;
FIG. 9A is a block diagram showing the construction of an animation data processing apparatus according to an embodiment;
FIG. 9B is a block diagram showing the construction of an animation data processing apparatus according to another embodiment;
FIG. 10A is a block diagram showing the construction of an animation data processing apparatus according to an embodiment;
FIG. 10B is a block diagram showing the construction of an animation data processing apparatus according to another embodiment;
FIG. 11 is a diagram of the internal structure of a computer device in one embodiment;
FIG. 12 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms unless otherwise specified. These terms are only used to distinguish one element from another.
The animation data processing method provided by the present application can be applied in the application environment shown in FIG. 1, where the first animation state machine 102 communicates with the second animation state machine 104 via a network or an interface. The first animation state machine and the second animation state machine can be deployed on a terminal or a server; it can be understood that they can be located on the same terminal or server, or on different terminals or servers. The terminal can be, but is not limited to, a personal computer, laptop, smartphone, tablet computer, or portable wearable device, and the server can be implemented as an independent server, a server cluster formed by a plurality of servers, or a cloud server.
Specifically, the first animation state machine obtains target interaction information, which carries an animation object identifier corresponding to the target animation object, and obtains target animation configuration information corresponding to that identifier; the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object. The first animation state machine then determines a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, the target action having a corresponding target action type identifier. Finally, the first animation state machine obtains target action data corresponding to the target action and target node information of the target animation replacement node corresponding to the target action type identifier, and sends the target action data and the target node information to the second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, obtains target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
The animation data processing method further relates to blockchain technology. For example, the first animation state machine and the second animation state machine that perform the animation data processing method may be deployed on nodes of a blockchain. Animation configuration information corresponding to different animation object identifiers and the node information of each animation replacement node can be stored on the blockchain.
In one embodiment, as shown in FIG. 2, there is provided an animation data processing method, exemplified by applying the method to the first animation state machine in FIG. 1, the method comprising the steps of:
step S202, target interaction information is obtained, and the target interaction information carries an animation object identifier corresponding to the target animation object.
An animation state machine is used for monitoring and managing the action changes of an animation object and for controlling the generation of the animation object's action animations. In the animation engine, the animation state machine interfaces with the animation logic layer above it and controls the animation pipeline below it.
The first animation state machine is a state machine mainly used to monitor and manage the action changes of an animation object. The second animation state machine is a state machine mainly used to control the generation of the action animations of an animation object. If the first animation state machine detects that the animation object has an action change such as a trigger, a jump, or an interruption, it promptly notifies the second animation state machine and sends the related data to it, so that the second animation state machine generates the corresponding action animation from the received data. For example, the first animation state machine stores the animation configuration information of an animation object, which can be used to determine what action change has occurred for that object. The first animation state machine may determine which action was triggered by the animation object based on the message sent by the animation logic layer and the locally stored animation configuration information. It may then send the action data of the triggered action to the second animation state machine, notifying it that the action animation of the triggered action needs to be generated for the animation object, and the second animation state machine may generate the corresponding action animation based on that action data. The second animation state machine contains the animation nodes used to generate action animations, and generates the corresponding action animation by loading the action data onto an animation node.
The second animation state machine may be a conventional animation state machine. It can be understood that the second animation state machine has all the functions of the conventional animation state machine, that is, the second animation state machine retains the relevant characteristics and functions of the conventional animation state machine, but based on the animation data processing method of the present application, through the cooperation of the first animation state machine and the second animation state machine, the number of animation nodes on the second animation state machine is reduced compared with the number of animation nodes on the animation state machine using the conventional method. The first animation state machine may be an animation state machine established outside of the second animation state machine, the first animation state machine being grafted to the second animation state machine. Data interaction based on the first animation state machine and the second animation state machine can control resource consumption on the basis of effectively generating target action animation of the target animation object.
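To make the cooperation concrete, the following is a minimal Python sketch of the message flow between the two state machines, assuming hypothetical class and field names (FirstStateMachine, SecondStateMachine, node_info, and so on); the patent describes this cooperation abstractly and does not prescribe any API.

```python
# Minimal sketch of the cooperation described above; all class, method, and
# field names are hypothetical illustrations, not identifiers from the patent.

class SecondStateMachine:
    """Controls generation of action animations; owns the animation nodes."""
    def __init__(self, nodes):
        self.nodes = nodes  # node path -> mutable node state

    def receive(self, node_info, action_data, object_data):
        node = self.nodes[node_info["path"]]  # locate the replacement node
        node["active"] = True                 # activate it from the node info
        node["action_data"] = action_data     # replace previously loaded data
        node["object_data"] = object_data     # basic pose of the object
        # Loading action data + object data here would produce the target
        # action animation; the generation step is sketched further below.
        return node

class FirstStateMachine:
    """Monitors action changes and notifies the second state machine."""
    def __init__(self, configs, second):
        self.configs = configs  # animation object identifier -> configuration
        self.second = second

    def on_interaction(self, interaction, object_data):
        config = self.configs[interaction["object_id"]]
        # Determine the target action from the interaction information and
        # the configuration (the matching step is sketched further below).
        action = config["actions"][interaction["action"]]
        return self.second.receive(action["node_info"],
                                   action["action_data"], object_data)
```

The point of the split is visible in the sketch: the first machine holds the configuration and decides what happened, while the second machine only loads data into nodes and generates animation.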
The animation object may be a virtual object, such as a flag in a virtual scene, or a virtual character, such as a player character or a non-player character in a game. The target animation object refers to the animation object corresponding to the target interaction information. The animation object identifier is an identifier that uniquely identifies an animation object, and may specifically be a character string containing at least one of letters, numbers, and symbols. The target interaction information is information generated from interaction between the user and the terminal. For example, a treasure chest is displayed on the animation playing interface of the terminal; when the user clicks the treasure chest, corresponding target interaction information is generated, carrying the animation object identifier corresponding to the treasure chest. The animation playing interface is an interface for playing animation, for example, a game interface playing game animation, an interactive interface playing interactive animation, or a product interface playing a product display animation.
It can be understood that the target interaction information may carry the animation object identifiers corresponding to one or more target animation objects. For example, if a user clicks an attack control on a game interface and thereby triggers several characters to attack simultaneously, the generated interaction information may carry the animation object identifiers respectively corresponding to the several target animation objects.
Specifically, the first animation state machine acquires the target interaction information and determines, based on it, which action the target animation object has triggered. The target interaction information may be generated based on a trigger operation the user performs on a corresponding interface; for example, when the user clicks the trigger control corresponding to action one on the animation playing interface, corresponding target interaction information may be generated, indicating that the animation object has triggered action one. The target interaction information may also be generated based on scene change information; for example, in a game interface, when the user controls a game character to enter scene B from scene A, corresponding target interaction information is generated, indicating that the initial actions of the animation objects belonging to the environment elements of scene B are triggered. Animation objects belonging to environment elements may be flags in the scene, schools of fish in the water, small wild animals, and the like.
Step S204, acquiring target animation configuration information corresponding to the animation object identifier, where the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object.
The animation configuration information is used for configuring the animation playing information of at least one action corresponding to an animation object, and it includes that animation playing information. The animation configuration information may be configured and stored in the form of a graph. The target animation configuration information is the animation configuration information corresponding to the target animation object and is used for configuring the animation playing information of at least one action corresponding to the target animation object. The animation playing information of an action comprises start information for at least one action state of that action and is used to control the starting of the action's various states for the animation object, so that generation of the corresponding action animation can be triggered. The action states may include a trigger state, an interruption state, and a jump state. It is to be understood that an animation object may exhibit at least one action; for example, a virtual character may have a jumping action, an attacking action, a conversational action, and so on.
Specifically, animation configuration information corresponding to each animation object may be generated in advance and stored in the first animation state machine in association with the animation object identifier of the corresponding animation object. After the first animation state machine obtains the target interaction information, it may obtain the corresponding animation configuration information as the target animation configuration information according to the animation object identifier carried by the target interaction information. It is understood that animation objects of the same type may correspond to the same animation configuration information or to different animation configuration information.
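For illustration only, the animation configuration information stored in the first animation state machine, keyed by animation object identifier, might be laid out as below; every field name and value is an assumption made for this sketch rather than a format defined by the patent.

```python
# Hypothetical layout of the animation configuration information held by the
# first animation state machine, keyed by animation object identifier.
animation_configs = {
    "obj_chest_01": {                               # animation object identifier
        "actions": {
            "open": {
                "trigger_info": {"event": "click"},     # action trigger information
                "action_type_id": "chest_anim",         # action type identifier
                "data_path": "anim/chest/open.dat",     # storage path of action data
                "node_info": {"path": "Base/Replace0"}, # replacement node information
            },
            "shake": {
                "trigger_info": {"event": "hover"},
                "action_type_id": "chest_anim",         # same type: same node
                "data_path": "anim/chest/shake.dat",
                "node_info": {"path": "Base/Replace0"},
            },
        },
    },
}
```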
Step S206, determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, where the target action has a corresponding target action type identifier.
The target action refers to the action triggered by the target animation object. The action type identifier is an identifier for uniquely identifying the type of the action, and may specifically include a character string of at least one character of letters, numbers and symbols. For example, the attack action type identifier may correspond to a plurality of different attack actions. The dance action type identifier may correspond to a plurality of different dance actions, such as a "street dance" action, a "modern dance" action, a "classical dance" action, and the like. The target action type identifier refers to an action type identifier of the target action.
Specifically, since the target animation configuration information includes animation playing information of at least one action corresponding to the target animation object, the first animation state machine may perform information matching on the acquired target interaction information and the target animation configuration information, and determine the target action corresponding to the target animation object based on a matching result.
In one embodiment, the animation playing information includes action trigger information. The action trigger information is the start information of an action's trigger state, that is, the trigger condition of the action. The action trigger information may include at least one piece of candidate interaction information for triggering the action, and may also include the characteristics and conditions that target interaction information must satisfy to trigger the action. The first animation state machine matches the target interaction information against each piece of action trigger information in the target animation configuration information and takes the action corresponding to the successfully matched action trigger information as the target action. For example, when a player triggers the attack corresponding to attack one, the animation logic layer may generate target interaction information ("attack 1" + "1"); based on this, the first animation state machine matches the action trigger information ("attack 1" + "1") of the action attack one in the target animation configuration information, so that the first animation state machine determines that the target action is attack one. It can be understood that an action may be determined to be the target action only when the target interaction information acquired by the first animation state machine satisfies all of the action's trigger conditions; alternatively, an action may be determined to be the target action when the target interaction information satisfies at least one of its trigger conditions. The action trigger information of each action within the same animation configuration information can be made distinct, so that confused action triggering can be avoided.
In one embodiment, the target interaction information may carry a trigger action identifier of the action to be triggered, and the first animation state machine may match the trigger action identifier in the target interaction information against the candidate action identifiers corresponding to the actions in the target animation configuration information, taking the action corresponding to the successfully matched candidate action identifier as the target action.
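A matching step consistent with the description above could look like the following sketch, which applies the all-conditions-satisfied policy of the first variant; the function name and data layout are illustrative assumptions.

```python
def find_target_action(interaction, config):
    """Return the name of the action whose trigger information is fully
    matched by the target interaction information, or None if no match."""
    for name, info in config["actions"].items():
        trigger = info["trigger_info"]
        # First variant above: an action becomes the target action only when
        # the interaction satisfies all of its trigger conditions.
        if all(interaction.get(k) == v for k, v in trigger.items()):
            return name
    return None

cfg = {"actions": {"open":  {"trigger_info": {"event": "click"}},
                   "shake": {"trigger_info": {"event": "hover"}}}}
assert find_target_action({"event": "click"}, cfg) == "open"
```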
Step S208, target action data corresponding to the target action and target node information of the target animation replacement node corresponding to the target action type identifier are obtained, and the target action data and the target node information are sent to the second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, obtains target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
The action data refers to the animation information of an action and is used for determining the posture at each stage from the beginning to the completion of the action. For example, if the animation object is created based on a skeleton model, the action data may be the animation information of the target skeleton nodes involved when the animation object triggers a certain action. The action data may specifically include data such as the number of target skeleton nodes, the skeleton identifiers corresponding to the target skeleton nodes, connection relationships, start positions, end positions, movement routes, and movement speeds. Loading the action data and the object data means loading them into memory and generating the corresponding action animation in the memory space based on them. The target action data is the action data corresponding to the target action.
The object data refers to the animation information of an animation object and is used for determining the basic pose of the animation object. For example, if the animation object is built based on a skeleton model, the object data may be the skeleton information of the initial skeleton nodes that make up the animation object. The object data may specifically include data such as the bone identifier, bone shape information, connection relationships, initial position, and bone dressing information corresponding to each initial skeleton node constituting the animation object. It is understood that the number of initial skeleton nodes is greater than or equal to the number of target skeleton nodes. For example, when the animation object is a human, the initial skeleton nodes include those corresponding to body components such as the limbs, trunk, head, and neck, and when the target action is a punch, the target skeleton nodes may be those corresponding to the part of the body involved in punching, for example the upper limbs. Further, the object data of different animation objects differ; even animation objects of the same type may have different object data. However, for animation objects of the same type, the action data corresponding to the same action is the same. For example, different players in the same game may use the same type of game character, with the same skeleton but different appearances, that is, different bone dressing information; they can still trigger the same action, and only one piece of action data for that action is needed. The target object data is the object data corresponding to the target animation object.
One action type identifier corresponds to one animation replacement node. It will be appreciated that the multiple actions corresponding to one action type identifier share the same or similar animation logic and differ only in their specific action data. Therefore, the action data of the multiple actions corresponding to one action type identifier can all be sent to the same animation replacement node for loading and animation generation. For example, the overall logic controlling an animation object to perform various dance actions is relatively stable and fixed; specific dance actions such as "street dance" and "modern dance" differ only in their action data. There is therefore no need to configure a large number of similar animation nodes in the second animation state machine so that each dance action has its own animation node. Instead, only one animation replacement node needs to be configured in the second animation state machine, with a correspondence between that node and the dance action type identifier. Whenever a dance action is triggered, the action data of that dance action is loaded through the animation replacement node, so that the action animations of different dance actions can all be generated through one animation replacement node, effectively controlling the number of animation nodes in the second animation state machine.
An animation replacement node may correspond to at least one action type identifier. For example, when the actions corresponding to two action type identifiers cannot be triggered simultaneously, those two action type identifiers may correspond to the same animation replacement node. This is because, even if two action type identifiers correspond to the same animation replacement node, only one action is triggered at any point in time, and the action data of only one action is loaded on the node; thus no action confusion or animation confusion can occur.
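The node-sharing idea can be illustrated as follows: one replacement node serves every action of a given type, and a second action type may be mapped onto the same node when its actions can never be triggered at the same moment. The mapping and all names below are illustrative assumptions.

```python
# One animation replacement node per action type identifier; a second type
# may share a node if its actions can never be triggered concurrently.
node_for_type = {
    "dance":  "Base/Replace0",  # "street dance", "modern dance", ... all load here
    "sit":    "Base/Replace0",  # assumed never triggerable while dancing
    "attack": "Base/Replace1",
}

def route(action_type_id, action_data, nodes):
    """Send action data to the replacement node of its action type;
    new data replaces whatever the node loaded before."""
    path = node_for_type[action_type_id]
    nodes[path] = action_data  # the node now generates the new action's animation
    return path

nodes = {}
route("dance", "street_dance.dat", nodes)
route("dance", "modern_dance.dat", nodes)  # same node, data replaced
```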
The animation replacement node is an animation node for receiving the action data sent by the first animation state machine. Each time new action data is received, the animation replacement node can replace the old action data by loading the new action data, and generate and play a new action animation. The node information includes data such as the node identifier, node path, node trigger information, and node interruption information. The position of an animation replacement node can be quickly located based on its node path. An animation replacement node can be activated based on the node trigger information so as to load action data and play the generated action animation. Playing of the action animation generated on an animation replacement node can be interrupted based on the node interruption information.
The target animation replacement node is the animation replacement node corresponding to the target action type identifier. An action animation is an animation in which an animation object shows the picture of a triggered action. The target action animation is the animation in which the target animation object shows the target action; it includes the pose information with which the target animation object exhibits the target action. For example, the target action animation may be composed of a series of decomposed poses in which target animation object A individually exhibits a kicking action. When the target animation object is built based on a skeleton model, the target action animation may be a three-dimensional skeletal animation composed of a series of skeleton poses of the target animation object that change over time, where a pose is composed of the displacement, rotation, and scaling of a set of bones.
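Gathering the preceding definitions, the node information, action data, and object data might be carried in structures like the following; all field names are assumptions chosen to mirror the description above, not structures defined by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class NodeInfo:
    """Node information of an animation replacement node."""
    node_id: str
    path: str                 # used to quickly locate the node
    trigger_info: dict = field(default_factory=dict)    # activates the node
    interrupt_info: dict = field(default_factory=dict)  # interrupts playback

@dataclass
class ActionData:
    """Animation information of one action (shared by same-type objects)."""
    target_bones: list        # identifiers of the skeleton nodes the action moves
    keyframes: dict           # per-bone start/end positions, routes, speeds

@dataclass
class ObjectData:
    """Animation information determining an object's basic pose."""
    bones: list               # initial skeleton nodes composing the object
    dressing: dict            # per-object appearance (bone dressing information)
```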
Specifically, after determining the target action type identifier corresponding to the target action, the first animation state machine may determine the target animation replacement node based on that identifier. After determining the target action and the target animation replacement node, the first animation state machine can acquire the target action data corresponding to the target action and the target node information of the target animation replacement node, and send the target action data and the target node information to the second animation state machine. After receiving them, the second animation state machine can find the target animation replacement node based on the target node information and activate it. After the target animation replacement node is activated, the second animation state machine can obtain the target object data corresponding to the target animation object, and load the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object. Finally, the second animation state machine may output the generated target action animation through the target animation replacement node. The target action animation is then rendered into a video and played on the animation playing interface.
In one embodiment, the second animation state machine may activate the target animation replacement node based on the target node trigger information in the target node information. The second animation state machine may also locate the target animation replacement node based on the target node information and send a node activation instruction to it, so that the target animation replacement node enters the activated state based on that instruction.
In one embodiment, the acquiring, by the second animation state machine, of the target object data corresponding to the target animation object and the loading of the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object include: fusing the target action data and the target object data through the target animation replacement node to generate a plurality of decomposition poses corresponding to the target action, and obtaining the target action animation based on the decomposition poses.
A decomposition pose is the action pose corresponding to one decomposed step of the target action. For example, a kicking action is formed by a series of decomposition poses corresponding to the process of the animation object raising its leg and kicking to the target position. The decomposition poses carry timestamps, and combining the decomposition poses according to their timestamps forms the complete action animation of one action. The target action animation can finally be rendered into a video for playing. For example, referring to FIG. 3, the target action animation corresponding to a kicking action of the target animation object may be rendered as an action video including video frames respectively corresponding to three decomposition poses.
Specifically, when loading the target action data and the target object data to generate the target action animation corresponding to the target animation object, the second animation state machine can fuse the target action data and the target object data through the target animation replacement node to form the fused data corresponding to each decomposed step of the action, and obtain from the fused data the action pose corresponding to each decomposed step, thereby obtaining the plurality of decomposition poses of the target animation object performing the target action. The second animation state machine may then generate the target action animation from these decomposition poses through the target animation replacement node. Subsequently, when the animation is rendered, the target action animation can be rendered into a video consisting of a series of video frames in which the target animation object triggers the target action.
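A toy version of this fuse-and-assemble step might look like the sketch below. Representing a pose as a per-bone transform follows the skeleton description above; every name and the data layout are illustrative assumptions.

```python
def fuse(action_data, object_data):
    """Fuse action data with object data into timestamped decomposition
    poses; ordered by timestamp, the poses form the action animation."""
    poses = []
    for t, frame in sorted(action_data["keyframes"].items()):
        pose = {}
        for bone in object_data["bones"]:
            # Bones the action moves take the frame's transform; the others
            # keep the basic pose supplied by the object data.
            pose[bone] = frame.get(bone, object_data["rest_pose"][bone])
        poses.append((t, pose))  # one decomposition pose at time t
    return poses                 # later rendered into video frames

kick = {"keyframes": {0.0: {"leg": ("raise", 30)},
                      0.5: {"leg": ("kick", 90)}}}
body = {"bones": ["leg", "torso"],
        "rest_pose": {"leg": ("rest", 0), "torso": ("rest", 0)}}
animation = fuse(kick, body)  # [(0.0, {...}), (0.5, {...})]
```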
In one embodiment, in the first animation state machine, animation configuration information and node information may be stored separately, i.e., in isolation. Of course, in order to improve the information acquisition efficiency, the node information of the animation replacement node may also be recorded in the animation configuration information.
In the above animation data processing method, target interaction information carrying the animation object identifier corresponding to the target animation object is acquired, and target animation configuration information corresponding to the animation object identifier is acquired, the target animation configuration information being used for configuring the animation playing information of at least one action corresponding to the target animation object. The target action corresponding to the target animation object is determined based on the target interaction information and the target animation configuration information, the target action having a corresponding target action type identifier. Target action data corresponding to the target action and target node information of the target animation replacement node corresponding to the target action type identifier are acquired and sent to the second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, acquires target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object. In this way, because the animation replacement nodes in the second animation state machine correspond to the action type identifiers of actions, different actions corresponding to the same action type identifier can share the same animation replacement node, effectively controlling the number of nodes in the second animation state machine and preventing that number from expanding. After the first animation state machine determines the target action triggered by the target animation object, it sends the target action data corresponding to the target action to the corresponding target animation replacement node on the second animation state machine for animation generation. Through the cooperation of the first animation state machine and the second animation state machine, the number of nodes in the second animation state machine can be reduced when generating animation, thereby achieving the purpose of reducing resource consumption.
In one embodiment, the action carries an action type identifier, and before the target interaction information is acquired, the method further includes:
performing action clustering on the actions corresponding to the same action type identifier to obtain the action cluster corresponding to each action type identifier; performing node allocation on each action cluster based on at least two candidate replacement nodes established in advance in the second animation state machine to obtain the animation replacement node corresponding to each action cluster; and configuring, in the animation playing information of each action, the node information of the corresponding animation replacement node.
Obtaining the target node information of the target animation replacement node corresponding to the target action type identifier then includes: obtaining the target node information from the target animation playing information corresponding to the target action.
Action clustering groups the actions corresponding to the same action type identifier into the same cluster, that is, it clusters actions belonging to the same action type. Node allocation assigns an animation replacement node to the action cluster of each action type. The target animation playing information refers to the animation playing information corresponding to the target action.
Specifically, each action has a corresponding action type identifier, so once the animation replacement node corresponding to each action type identifier is determined, the animation replacement node corresponding to each action is determined as well. To improve information acquisition efficiency, the node information of the animation replacement node corresponding to each action may be stored in advance in that action's animation playing information. After determining the target action, the first animation state machine can then obtain the target node information of the target animation replacement node directly from the target animation playing information corresponding to the target action.
Before the target interaction information is obtained, the first animation state machine may perform action clustering on the actions corresponding to each action type identifier, obtaining one action cluster per action type identifier; each cluster contains the actions corresponding to one identifier. Meanwhile, in the second animation state machine, a plurality of free candidate replacement nodes may be created in advance, and the second animation state machine may send the node information of each candidate replacement node to the first animation state machine. The first animation state machine then performs node allocation for each action cluster based on these at least two candidate replacement nodes, determining the candidate replacement node corresponding to each action type identifier and thereby obtaining the animation replacement node for each cluster. The allocation may be performed randomly; each action type identifier corresponds to one animation replacement node, and different action type identifiers may map to the same or different animation replacement nodes. When different action type identifiers share the same animation replacement node, it must be guaranteed that their actions are never triggered at the same time. Finally, the first animation state machine configures the node information of the allocated animation replacement node in the animation playing information of each action.
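For illustration, the following is a minimal Python sketch of the clustering and allocation step described above. All names (Action, cluster_by_type, allocate_nodes, the node paths) are illustrative assumptions rather than identifiers from this disclosure, and the round-robin allocation is only one possible policy.

```python
from collections import defaultdict

class Action:
    def __init__(self, action_id, action_type_id):
        self.action_id = action_id
        self.action_type_id = action_type_id
        self.play_info = {}  # animation playing information, filled below

def cluster_by_type(actions):
    """Group actions sharing an action type identifier into one cluster."""
    clusters = defaultdict(list)
    for action in actions:
        clusters[action.action_type_id].append(action)
    return clusters

def allocate_nodes(clusters, candidate_nodes):
    """Assign one pre-created candidate replacement node to each cluster.

    Nodes may be shared across clusters; the text above only requires that
    actions sharing a node are never triggered at the same time."""
    allocation = {}
    for i, type_id in enumerate(sorted(clusters)):
        node = candidate_nodes[i % len(candidate_nodes)]
        allocation[type_id] = node
        for action in clusters[type_id]:
            # configure the node information in the action's playing information
            action.play_info["node_info"] = node
    return allocation

actions = [Action("attack_1", "attack"), Action("attack_2", "attack"),
           Action("dance_1", "dance")]
candidates = [{"path": "Core|Interaction|SimpleAnimation1"},
              {"path": "Core|Interaction|SimpleAnimation2"}]
print(allocate_nodes(cluster_by_type(actions), candidates))
```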
In one embodiment, node creation and node information configuration may instead be performed by a separate computer device. That device performs action clustering on the actions sharing each action type identifier to obtain the corresponding action clusters, performs node allocation based on at least two candidate replacement nodes created in advance in the second animation state machine to obtain the animation replacement node for each cluster, and configures the node information of the corresponding animation replacement node in the animation playing information of each action. The device then sends the animation configuration information, composed of the animation playing information of all actions, to the first animation state machine for storage. Subsequently, after determining the target action, the first animation state machine can obtain the target node information of the target animation replacement node directly from the target animation playing information.
In this embodiment, action clustering and node allocation establish a correspondence between each action type identifier and one animation replacement node. The action data of all actions sharing an action type identifier can thus be sent to the same animation replacement node for animation generation, effectively controlling the number of nodes in the second animation state machine. In addition, because the node information of the corresponding animation replacement node is added to each action's animation playing information, the node information of the target animation replacement node can be obtained quickly once the target action is determined.
In one embodiment, as shown in fig. 4, the node information includes a node path, and determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information includes:
Step S402, obtaining the node information of the animation replacement node corresponding to each action.
Step S404, determining the upper-layer animation node corresponding to each animation replacement node based on the node path, and obtaining the node state of that upper-layer animation node.
In the second animation state machine, the animation nodes are organized hierarchically, level by level. The node path is the chain of animation hierarchy levels through which an animation replacement node is located. For example, if the node path of an animation replacement node is Core|Interaction|SimpleAnimation1, then "|" separates the levels of animation nodes, SimpleAnimation1 denotes the animation replacement node itself, and Core|Interaction identifies its upper-layer animation node.
The node states include an active state and a standby state. An animation node in the active state is activated and in use, and its internal data participates in computation. An animation node in the standby state is not activated, and its internal data does not participate in computation.
Specifically, to reduce the amount of information matching, the first animation state machine may determine the target action through information matching only when the upper-layer animation node of an animation replacement node is in the active state, avoiding needless matching work. An animation replacement node can be successfully activated and used only when its upper-layer animation node is active.
After obtaining the target interaction information, the first animation state machine may determine the animation replacement node corresponding to each action of the target animation object based on that action's action type identifier, thereby obtaining the node information of each animation replacement node; alternatively, it may obtain that node information from the animation playing information of each action in the target animation configuration information. Because the node information includes the node path, which encodes the hierarchy through which the animation replacement node is located, the first animation state machine can determine the upper-layer animation node of each animation replacement node from the node path and then obtain that upper-layer node's state. To do so, the first animation state machine may send a node state query request, carrying the node identifier of the animation node whose state is to be queried, to the second animation state machine; the second animation state machine queries the state of the requested upper-layer animation node and returns it to the first animation state machine.
Step S406, taking the actions corresponding to the animation replacement nodes whose upper-layer animation nodes are in the active state as candidate actions.
Step S408, performing trigger action detection on the target animation object based on the target interaction information and the animation playing information of each candidate action, to obtain the target action corresponding to the target animation object.
The trigger action detection is used for detecting which specific action of the target animation object is triggered by the target interaction information.
Specifically, after obtaining the node states of the upper-layer animation nodes of the animation replacement nodes, the first animation state machine may determine whether any upper-layer animation node is in the active state. If so, the first animation state machine takes the actions corresponding to the animation replacement nodes whose upper-layer animation nodes are active as candidate actions. It then performs trigger action detection on the target animation object based on the target interaction information and the animation playing information of each candidate action, and determines the target action from the detection result.
Overall, the actions are first screened by the node state of each animation replacement node's upper-layer animation node to obtain the candidate actions, and a second screening then matches the target interaction information against the animation playing information of each candidate action, finally determining the target action from among the candidates.
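A minimal sketch of the node-path handling in steps S402 to S408, assuming the Core|Interaction|SimpleAnimation1 path convention above; the node_states table is a stand-in for the second animation state machine's reply to a node state query, and all names are illustrative.

```python
def upper_node_path(node_path: str, separator: str = "|") -> str:
    """Derive the upper-layer animation node from a node path, e.g.
    'Core|Interaction|SimpleAnimation1' -> 'Core|Interaction'."""
    return separator.join(node_path.split(separator)[:-1])

# stands in for the node states returned by the second animation state machine
node_states = {"Core|Interaction": "active", "Core|Locomotion": "standby"}

def upper_node_is_active(node_path: str) -> bool:
    return node_states.get(upper_node_path(node_path)) == "active"

# only actions whose replacement node passes this check become candidates
print(upper_node_is_active("Core|Interaction|SimpleAnimation1"))  # True
```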
In one embodiment, the animation playing information includes action trigger information, and performing trigger action detection on the target animation object based on the target interaction information and the animation playing information of each candidate action to obtain the target action includes:
matching the target interaction information against the action trigger information of each candidate action, and taking the candidate action whose action trigger information matches successfully as the target action.
Specifically, the action trigger information is the start information of an action's trigger state, that is, the action's trigger condition. It may include at least one piece of candidate interaction information that triggers the action, or the characteristics and conditions that target interaction information must satisfy to trigger it. During trigger action detection, the first animation state machine matches the target interaction information against the action trigger information of each candidate action and takes the candidate action whose trigger information matches successfully as the target action. For example, if the target interaction information is A, and the trigger information of the target animation object's action one is A, of action two is B, and of action three is C, then information matching determines action one as the target action. In this way, the target action can be determined quickly and accurately from the candidate actions by matching the target interaction information against the action trigger information in each candidate action's animation playing information.
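The matching step can be sketched as follows (Python, illustrative names only); trigger information is modeled here simply as a set of candidate interaction tokens.

```python
def detect_target_action(target_interaction, candidate_actions):
    """Return the first candidate action whose trigger information
    matches the incoming target interaction information."""
    for action in candidate_actions:
        if target_interaction in action["trigger_info"]:
            return action
    return None  # no action triggered

candidates = [
    {"name": "action_one", "trigger_info": {"A"}},
    {"name": "action_two", "trigger_info": {"B"}},
    {"name": "action_three", "trigger_info": {"C"}},
]
print(detect_target_action("A", candidates))  # -> action_one
```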
In one embodiment, the target interaction information carries a trigger action identifier of the action to be triggered, and the animation playing information includes a candidate action identifier for each action. During trigger action detection, the first animation state machine may match the trigger action identifier in the target interaction information against the candidate action identifiers of the candidate actions, taking the candidate action whose identifier matches successfully as the target action.
In this embodiment, when the target action is determined based on the target interaction information and the target animation configuration information, the target interaction information is matched only against the animation playing information of candidate actions, that is, actions whose animation replacement node has an active upper-layer animation node. Because matching is restricted to the candidate actions rather than all actions, the number of matching operations and the amount of information traversed are greatly reduced, improving matching efficiency and allowing the target action to be determined quickly.
In one embodiment, the animation playing information includes the storage path of the action data corresponding to each action, and obtaining the target action data corresponding to the target action includes:
acquiring target animation playing information corresponding to the target action; and acquiring target action data based on the target storage path in the target animation playing information.
Specifically, when the animation configuration information of an animation object is configured, the storage path of each action's action data is added to the corresponding animation playing information instead of the action data itself, which effectively reduces the data volume of the animation configuration information; the action data can then be obtained quickly from the storage path. Accordingly, when obtaining the target action data for the target action, the first animation state machine obtains the target animation playing information corresponding to the target action, reads the target storage path from it, and obtains the target action data from that path. The storage path may specifically be the storage address of the action data on a hard disk.
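A minimal sketch of this indirect storage, assuming a small JSON action file on disk; storage_path and the file layout are illustrative assumptions.

```python
import json
import os
import tempfile

def load_action_data(play_info: dict) -> bytes:
    """Resolve the storage path recorded in the animation playing
    information and read the action data from disk, so the configuration
    itself never has to embed the data."""
    with open(play_info["storage_path"], "rb") as f:
        return f.read()

# demo: write a small action-data file, then load it via the playing information
path = os.path.join(tempfile.gettempdir(), "attack_1.anim")
with open(path, "wb") as f:
    f.write(json.dumps({"keyframes": [0, 1, 2]}).encode())
print(load_action_data({"storage_path": path}))
```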
In one embodiment, the target node information includes a target node path and target node trigger information corresponding to the target animation replacement node. Sending the target action data and the target node information to the second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, obtains the target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain the target action animation, includes:
sending the target action data, the target node path, and the target node trigger information to the second animation state machine, so that the second animation state machine delivers the target node trigger information to the target animation replacement node located via the target node path, and loads the target action data and the target object data through that node to obtain the target action animation corresponding to the target animation object; the target node trigger information is used to activate the target animation replacement node.
The target node path is the node path of the target animation replacement node, and the target node trigger information is the node trigger information of that node. Node trigger information activates the corresponding animation replacement node, causing its node state to transition from the standby state to the active state; the target node trigger information thus activates the target animation replacement node. Different animation replacement nodes may correspond to different node trigger information.
It should be understood that one animation replacement node may correspond to more than one piece of node trigger information: for different triggered target actions, the node trigger information sent by the first animation state machine to the second animation state machine may be the same or different. For example, suppose action one and action two belong to the same action type and both correspond to animation replacement node one. When action one is triggered, the first animation state machine may send node trigger information one to the second animation state machine, which activates animation replacement node one based on it. When action two is triggered, the first animation state machine may send node trigger information two, and the second animation state machine again activates animation replacement node one, this time based on node trigger information two.
Specifically, the node trigger information that activates each animation replacement node may be agreed upon between the first and second animation state machines in advance, so that when the target animation replacement node must be activated after the target action is determined, the first animation state machine simply sends the target node trigger information to the second animation state machine. Sending the target action data and the target node information thus concretely means sending the target action data, the target node path, and the target node trigger information. Upon receiving them, the second animation state machine finds the target animation replacement node via the target node path, delivers the target node trigger information to it, and thereby activates it. Once the target animation replacement node is activated, the second animation state machine loads the target action data and the target object data through it to obtain the target action animation corresponding to the target animation object.
In this embodiment, the target node path lets the second animation state machine find the target animation replacement node quickly, and the target node trigger information activates it, so the target action animation can be generated promptly through the activated node.
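A sketch of this activation handshake, with a queue standing in for the channel between the two state machines; all field names and the trigger value are assumptions for illustration.

```python
import queue

channel = queue.Queue()  # stands in for the link between the two state machines

def first_sm_send(action_data, node_path, node_trigger_info):
    """First animation state machine: send data, path and trigger info."""
    channel.put({"action_data": action_data,
                 "node_path": node_path,
                 "trigger_info": node_trigger_info})

def second_sm_receive(nodes):
    """Second animation state machine: locate the node via its path,
    activate it with the trigger info, and load the action data."""
    msg = channel.get()
    node = nodes[msg["node_path"]]
    node["state"] = "active"             # trigger info activates the node
    node["loaded"] = msg["action_data"]  # loaded together with object data
    return node

nodes = {"Core|Interaction|SimpleAnimation1": {"state": "standby"}}
first_sm_send({"clip": "attack_1"}, "Core|Interaction|SimpleAnimation1", 999980)
print(second_sm_receive(nodes))
```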
In one embodiment, the animation playing information includes action interruption information, the target node information includes the target node path and target node interruption information corresponding to the target animation replacement node, and the method further includes:
during playback of the target action animation, when interruption interaction information matching the action interruption information of the target action is obtained, sending the target node path and the target node interruption information to the second animation state machine, so that the second animation state machine delivers the target node interruption information to the target animation replacement node located via the target node path; the target node interruption information is used to interrupt playback of the target action animation.
The action interruption information is the start information of an action's interruption state, that is, the action's interruption condition. It may include at least one piece of candidate interaction information that interrupts the action, or the characteristics and conditions that target interaction information must satisfy to interrupt it. Node interruption information interrupts playback of the action animation generated on an animation replacement node; the target node interruption information interrupts playback of the target action animation generated on the target animation replacement node.
It should be understood that different animation replacement nodes may correspond to different node interruption information, and one animation replacement node may correspond to more than one piece of it: for different interrupted target actions, the node interruption information sent by the first animation state machine may be the same or different. For example, suppose action one and action two belong to the same action type and both correspond to animation replacement node A. When action one is interrupted, the first animation state machine may send node interruption information one, so that the second animation state machine interrupts playback of the action animation of action one generated on animation replacement node A; when action two is interrupted, it may send node interruption information two, so that the second animation state machine interrupts playback of the action animation of action two generated on node A.
Specifically, during playback of the target action animation, the first animation state machine continuously checks whether interruption interaction information matching the action interruption information of the target action has been received, that is, whether interaction information satisfying the interruption condition has arrived. If so, the first animation state machine sends the target node path and the target node interruption information to the second animation state machine, which locates the target animation replacement node via the target node path, delivers the target node interruption information to it, and interrupts playback of the target action animation generated on that node. Note that the second animation state machine interrupts only the action animation of the target action of the target animation object; the uninterrupted or still-playing action animations of other animation objects continue playing.
In this embodiment, if interruption interaction information matching the target action's interruption information is obtained during playback of the target action animation, the first animation state machine sends the node interruption information to the second animation state machine to stop playback of the target action.
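The per-frame interruption check might look like the following sketch; interrupt_info maps assumed interaction tokens to assumed node interruption information, and all names are illustrative.

```python
class SecondStateMachine:
    def __init__(self, nodes):
        self.nodes = nodes

    def interrupt(self, node_path, node_interrupt_info):
        # only the animation on this replacement node stops; animations
        # of other animation objects keep playing
        self.nodes[node_path]["playing"] = False
        self.nodes[node_path]["last_interrupt"] = node_interrupt_info

second = SecondStateMachine(
    {"Core|Interaction|SimpleAnimation1": {"playing": True}})

def check_interrupt(interaction, play_info):
    """Called by the first state machine while the target action plays."""
    interrupt_info = play_info["interrupt_info"]
    if interaction in interrupt_info:  # interruption condition satisfied
        second.interrupt(play_info["node_path"], interrupt_info[interaction])

play_info = {"node_path": "Core|Interaction|SimpleAnimation1",
             "interrupt_info": {"stop_pressed": 888880}}
check_interrupt("stop_pressed", play_info)
print(second.nodes)  # the node's animation is no longer playing
```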
In one embodiment, the animation playing information includes action jump information, and the method further includes:
during playback of the target action animation, when jump interaction information matching the action jump information of the target action is obtained, obtaining the jump action data of the jump action corresponding to the target action together with first transition time information; and sending the jump action data, the first transition time information, and the target node information to the second animation state machine, so that the second animation state machine loads the jump action data and the target object data through the target animation replacement node identified by the target node information to obtain the jump action animation of the target animation object, and transitions from the target action animation to the jump action animation based on the first transition time information to obtain a fused action animation.
The action jump information is the start information of an action's jump state, that is, the action's jump condition. It may include at least one piece of candidate interaction information for jumping from the target action to another action, or the characteristics and conditions that target interaction information must satisfy to cause such a jump. The jump action is the action that the target animation object must jump to and display once the jump starts, and the jump action data is the action data of that jump action.
The transition time information is the blending time of the animation transition when jumping from the action animation of the target action to that of the jump action. The longer the blending time, the slower and smoother the action jump; the shorter the blending time, the faster and more abrupt the jump. The transition time information between different action animations can be set to an appropriate blending time according to the requirements of the actions. The first transition time information is the transition time information for transitioning from the target action to the jump action.
Specifically, during playback of the target action animation, the first animation state machine continuously checks whether jump interaction information matching the action jump information of the target action has been received, that is, whether interaction information satisfying the jump condition has arrived. If so, the first animation state machine obtains the jump action data and the first transition time information for the jump action corresponding to the target action, and sends the jump action data, the first transition time information, and the target node information to the second animation state machine. In one embodiment, the target animation playing information of the target action may further include the action identifier of the jump action and the first transition time information, so that the first animation state machine can obtain both from the target animation playing information, look up the jump action's animation playing information via its action identifier, and obtain the jump action data from that playing information.
After receiving the jump action data, the first transition time information, and the target node information, the second animation state machine determines the target animation replacement node from the target node information, sends the jump action data to it, and loads the jump action data and the target object data through it to obtain the jump action animation of the target animation object. The second animation state machine then transitions, through the target animation replacement node, from the target action animation to the jump action animation based on the first transition time information, obtaining the fused action animation.
In one embodiment, transitioning from the target action animation to the jump action animation based on the first transition time information to obtain the fused action animation includes: transitioning from the target action animation to the jump action animation based on the first transition time information, where the first transition time information controls the state parameter of the target action to decrease gradually from a first preset value to a second preset value, and the state parameter of the jump action to increase gradually from the second preset value to the first preset value, the first preset value being larger than the second preset value.
A state parameter, also called a state weight, describes the presentation state of an action. When an action's state parameter equals the first preset value, the action is presented fully according to its original action logic; when it equals the second preset value, the action is hidden and not presented, and the animation object shows its base posture, for example a spread-eagle standing posture resembling the Chinese character 大. When the state parameter lies between the two preset values, the animation object is partway between the base posture and the fully developed action. The first preset value may be 1 and the second preset value may be 0.
The fused action animation presents the dynamic process of the target animation object jumping from the target action to the jump action, including the posture information the target animation object exhibits while doing so.
Specifically, after the target action animation and the jump action animation are obtained, the second animation state machine transitions from the target action animation of the target animation object to the jump action animation based on the first transition time information, obtaining the fused action animation. During the transition, the second animation state machine controls the target action's state parameter to decrease gradually from the first preset value to the second preset value so that the target action fades out, and controls the jump action's state parameter to increase gradually from the second preset value to the first preset value so that the jump action gradually appears and fades in. The fused action animation ultimately presents a smooth animation effect transitioning from the target action to the jump action.
In one embodiment, the second animation state machine may use a freeze transition: it freezes the timeline of the target action animation, that is, pauses that animation's local clock, controls its state weight to decrease gradually from 1 to 0, and controls the jump action animation's state weight to increase gradually from 0 to 1. In other words, the current state of the target action is frozen at the target animation object's current pose, and that frozen state is blended over time with the state of the jump action, which continues to advance forward as the blend proceeds.
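The weight ramp between the two preset values (1 and 0 here) can be sketched as a linear cross-fade; in the freeze variant, the outgoing clip's local clock would simply stop advancing while these weights are applied. Function and variable names are illustrative.

```python
def blend_weights(elapsed: float, blend_time: float):
    """Linear cross-fade over the configured blending time: the target
    action's state weight falls 1 -> 0 while the jump action's rises 0 -> 1."""
    t = min(max(elapsed / blend_time, 0.0), 1.0)
    return 1.0 - t, t  # (target-action weight, jump-action weight)

for elapsed in (0.0, 0.1, 0.2, 0.3):
    target_w, jump_w = blend_weights(elapsed, blend_time=0.3)
    print(f"t={elapsed:.1f}s  target={target_w:.2f}  jump={jump_w:.2f}")
```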
In one embodiment, the first animation state machine may check, as each video frame is played, whether interruption interaction information matching the target action's interruption information or jump interaction information matching its jump information has been received.
In this embodiment, if jump interaction information matching the target action's jump information is obtained during playback of the target action animation, the first animation state machine sends the jump action data and the first transition time information to the second animation state machine, so that playback can jump smoothly from the target action's animation to the jump action's animation.
In one embodiment, the animation playing information includes the storage path of each action's action data. Obtaining the target action data corresponding to the target action and the target node information of the target animation replacement node corresponding to the target action type identifier, and sending them to the second animation state machine, includes:
generating a target animation instance for the target action of the target animation object, based on the target animation playing information of the target action and the object attribute information corresponding to the animation object identifier; running the target animation instance and obtaining the target action data from the target storage path in the target animation playing information; and sending the target action data and the target node information to the second animation state machine through the target animation instance.
The object attribute information describes the attributes of an animation object and may include its name, gender, race, occupation, strength value, health value, form, decoration, and so on. For animation objects of the same type, some attribute information may be shared, for example the object's form, while other attribute information differs, for example its decoration. For instance, different players may each use a horse as a game character, and each player may dress up his or her own horse, producing horses with different appearances.
An animation instance is allocated memory space to monitor the action state changes of an action of an animation object. Each animation instance is independent, and an old instance can be interrupted by a new one; that is, animation instances are stateless with respect to one another. When an action state change is detected, the first animation state machine sends the corresponding information to the second animation state machine through the animation instance, thereby notifying it.
Specifically, because the object attribute information of an animation object is stored in association with its animation object identifier, the object attribute information of the target animation object can be obtained from the identifier carried in the target interaction information. After determining the target action, the first animation state machine obtains the target animation playing information of the target action and creates the target animation instance from that playing information and the target animation object's attribute information. Because different animation objects have different attribute information, the generated animation instances differ even when different objects trigger the same action. The first animation state machine then runs the target animation instance, through which it obtains the target action data from the target storage path in the target animation playing information and sends the target action data and the target node information to the second animation state machine.
Subsequently, while the target animation instance is running, if the first animation state machine obtains interruption interaction information matching the target action's interruption information, it sends the target node path and the target node interruption information to the second animation state machine through the instance. That is, when the target animation instance detects that the target animation object has changed from the target action's trigger state to its interruption state, the first animation state machine notifies the second animation state machine, through the instance, to interrupt playback of the target action animation.
Likewise, while the target animation instance is running, if the first animation state machine obtains jump interaction information matching the target action's jump information, it sends the jump action data, the first transition time information, and the target node information to the second animation state machine through the instance. That is, when the instance detects that the target animation object has changed from the trigger state to the jump state, the first animation state machine notifies the second animation state machine, through the instance, to perform the action jump and generate the fused action animation.
In this embodiment, creating the target animation instance to monitor the action state changes of the target action and to notify the second animation state machine promptly improves the efficiency of information exchange: when the target action is triggered, the first animation state machine can quickly send the target action data and the target node information to the second animation state machine through the instance.
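A minimal sketch of such an instance; ACTION_STORE and the send callable are stand-ins for the storage path resolution and the inter-machine link, and every name here is an illustrative assumption.

```python
ACTION_STORE = {"disk://attack_1": b"<keyframe bytes>"}  # stand-in storage

class AnimationInstance:
    """Monitors the action state of one action of one animation object and
    forwards the corresponding messages to the second state machine."""
    def __init__(self, play_info, object_attributes, send):
        self.play_info = play_info              # target animation playing info
        self.object_attributes = object_attributes
        self.send = send                        # link to the second machine

    def run(self):
        # resolve the storage path and send action data plus node info
        action_data = ACTION_STORE[self.play_info["storage_path"]]
        self.send(action_data, self.play_info["node_info"])

instance = AnimationInstance(
    {"storage_path": "disk://attack_1",
     "node_info": {"path": "Core|Interaction|SimpleAnimation1"}},
    {"name": "player_1", "decoration": "red saddle"},
    send=lambda data, node: print("send to second state machine:", data, node))
instance.run()
```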
In one embodiment, to reduce animation playback latency, the first animation state machine may generate key animation instances for the target animation object's key actions in advance: when the target animation object is generated, the first animation state machine starts preloading the key animation instance of each key action, creating it before any interaction information matching that action's trigger information arrives. Later, when a target action determined from target interaction information is a key action, the first animation state machine can fetch the corresponding key animation instance directly and, by running it, send the target action data and the target node information to the second animation state machine.
In one embodiment, the first animation state machine may create an additional thread alongside its existing threads and run the target animation instance in the new thread, speeding up animation loading and reducing frame stutter. For example, in a game scenario, the first animation state machine may create a new thread outside the main thread that controls the game logic and run the target animation instance there, avoiding any impact on the main thread.
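A sketch of running the instance off the main thread using Python's standard threading module; TargetAnimationInstance is a hypothetical stand-in for the instance created above.

```python
import threading

class TargetAnimationInstance:
    """Hypothetical stand-in for a target animation instance."""
    def run(self):
        print("loading action data and notifying the second state machine")

def run_off_main_thread(instance):
    # run the instance outside the main (game logic) thread so that
    # loading the action data does not stall the frame loop
    worker = threading.Thread(target=instance.run, daemon=True)
    worker.start()
    return worker

worker = run_off_main_thread(TargetAnimationInstance())
worker.join()  # the main thread would normally continue with game logic
```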
In one embodiment, after playback of the target action animation ends, the target animation instance is stored in an object pool; when the same target interaction information is next obtained, the target animation instance is fetched from the object pool and reused to monitor the action state changes of the target action of the target animation object.
The object pool is a storage space created in memory for animation instances that need to be used repeatedly. Playback of the target action animation may end when the target action's playback duration reaches the preset duration, or when the target action is interrupted or jumped away from.
Specifically, after playback of the target action animation ends, the first animation state machine may store the target animation instance in the object pool. When the same target interaction information is obtained again, the first animation state machine fetches the instance directly from the pool and reuses it to monitor the target action's state changes: when the trigger state starts, it sends the target action data and the target node information to the second animation state machine; when the interruption state starts, it sends the target node path and the target node interruption information; and when the jump state starts, it sends the jump action data, the first transition time information, and the target node information.
In this way, storing the target animation instance in the object pool after playback ends, then fetching and reusing it the next time the same target interaction information arrives, avoids repeated memory allocation and release, further reducing resource consumption.
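A minimal object-pool sketch; keying the pool by an (object, action) pair is an illustrative choice, not part of the disclosure.

```python
class AnimationInstancePool:
    """Keeps finished animation instances for reuse instead of repeatedly
    allocating and releasing memory for them."""
    def __init__(self):
        self._pool = {}

    def release(self, key, instance):
        self._pool.setdefault(key, []).append(instance)

    def acquire(self, key, factory):
        bucket = self._pool.get(key)
        return bucket.pop() if bucket else factory()

pool = AnimationInstancePool()
instance = object()  # stands in for a finished target animation instance
pool.release(("player_1", "attack"), instance)
reused = pool.acquire(("player_1", "attack"), factory=object)
print(reused is instance)  # True: the same instance is multiplexed
```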
In one embodiment, as shown in fig. 5, there is provided an animation data processing method, illustrated by applying the method to the second animation state machine in fig. 1; the method includes the following steps:
step S502, receiving target action data and target node information sent by a first animation state machine; the target action data is action data corresponding to a target action of a target animation object, the target action is determined by a first animation state machine based on target interaction information carrying an animation object identifier corresponding to the target animation object and target animation configuration information corresponding to the animation object identifier, the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object, the target node information is node information of a target animation replacement node, and the target animation replacement node is determined based on a target action type identifier corresponding to the target action.
Step S504, activating the target animation replacement node based on the target node information.
Step S506, obtaining the target object data corresponding to the target animation object.
Step S508, loading the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object.
Specifically, the first animation state machine obtains the target interaction information, which carries the animation object identifier of the target animation object, and obtains the target animation configuration information corresponding to that identifier; this configuration information configures the animation playing information of at least one action of the target animation object. The first animation state machine then determines the target action based on the target interaction information and the target animation configuration information; the target action has a corresponding target action type identifier. Because each action type identifier corresponds to one animation replacement node, the action animations of all actions of the same type can be generated on one node, reusing nodes and reducing resource consumption. The first animation state machine obtains the target action data of the target action and the target node information of the target animation replacement node corresponding to the target action type identifier, and sends both to the second animation state machine.
After receiving the target action data and the target node information, the second animation state machine activates the target animation replacement node based on the target node information, obtains the target object data of the target animation object, and loads the target action data and the target object data through the node to obtain the target action animation. The second animation state machine can output the target action animation through the target animation replacement node and display it on the animation playing interface.
It is to be understood that the specific process from step S502 to step S508 may refer to the method described in the foregoing related embodiments, and will not be described herein again.
In this animation data processing method, the second animation state machine receives the target action data and the target node information sent by the first animation state machine, where the target action data corresponds to a target action of a target animation object, the target action was determined by the first animation state machine from target interaction information carrying the target animation object's identifier and from the corresponding target animation configuration information, that configuration information configures the animation playing information of at least one action of the object, and the target node information identifies the target animation replacement node determined from the target action type identifier. The second animation state machine activates the target animation replacement node based on the target node information, obtains the target object data of the target animation object, and loads the target action data and the target object data through the node to obtain the target action animation. Because each animation replacement node corresponds to an action type identifier, different actions sharing an identifier can share the same node, effectively controlling the number of nodes in the second animation state machine and preventing that number from growing without bound. After the first animation state machine determines the triggered target action, it sends the corresponding action data to the matching target animation replacement node in the second animation state machine for animation generation. Through this cooperation, the number of nodes needed in the second animation state machine is reduced, and resource consumption is reduced accordingly.
In one embodiment, activating a target animation replacement node based on target node information includes:
obtaining a forward node list corresponding to the target animation replacement node; and activating the target animation replacement node based on the target node information when a forward animation node in the active state exists in the forward node list.
The forward node list contains the forward animation nodes that have a direct connection to the target animation replacement node and belong to the same animation level; it may include at least one such node. The forward action animation generated on a forward animation node in the list precedes, in generation time, the target action animation generated on the target animation replacement node; the forward animation node is connected in front of the target animation replacement node.
Specifically, when activating the target animation replacement node based on the target node information, the second animation state machine may first obtain the node's forward node list and check whether any forward animation node in it is in the active state; only if so does it activate the target animation replacement node. In the second animation state machine, animation nodes establish connections according to logical order: connected nodes are joined by connection lines, and data passes between them along those lines in sequence. Activating the target animation replacement node only when an active forward animation node exists in its forward node list therefore effectively preserves the ordering of the animation logic. Conversely, if no forward animation node in the list is active, the second animation state machine does not activate the target animation replacement node even after receiving the target node information.
It should be understood that if the target animation replacement node has no forward node list, the second animation state machine may activate it directly based on the target node information. The forward node lists of different animation replacement nodes may contain the same or different forward animation nodes.
For example, suppose the currently active animation node generates a horse-riding action animation and the currently triggered target action is a dance action. The animation logic does not allow an animation object to dance on horseback, so the target animation replacement node has no connection to the currently active node, and the second animation state machine does not activate the target animation replacement node even after receiving the target node information.
If, instead, the currently active animation node generates a standing-in-place action animation and the triggered target action is a dance action, the animation logic does allow the object to dance while standing, so a connection exists between the target animation replacement node and the currently active node; upon receiving the target node information, the second animation state machine can activate the target animation replacement node.
In this embodiment, activating the target animation replacement node only when its forward node list contains a forward animation node in the active state effectively preserves the ordering of animation data processing and ensures the correctness of the resulting target action animation.
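The forward-node check can be sketched as follows, reusing the standing/dancing example above; all node names are illustrative.

```python
def try_activate(node, nodes):
    """Activate a replacement node only if it has no forward node list or
    at least one of its forward animation nodes is already active."""
    forward = node.get("forward_nodes")
    if forward and not any(nodes[n]["state"] == "active" for n in forward):
        return False  # animation logic forbids it (e.g. dancing on horseback)
    node["state"] = "active"
    return True

nodes = {
    "StandIdle": {"state": "active"},
    "RideHorse": {"state": "standby"},
    "Dance":     {"state": "standby", "forward_nodes": ["StandIdle"]},
}
print(try_activate(nodes["Dance"], nodes))  # True: standing allows dancing
```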
In one embodiment, loading the target action data and the target object data through the target animation replacement node to obtain the target action animation includes:
loading the target action data and the target object data through the target animation replacement node to obtain a trigger action animation corresponding to the target action of the target animation object; obtaining the forward action animation corresponding to the forward animation node in the active state; obtaining second transition time information; and transitioning from the forward action animation to the trigger action animation based on the second transition time information to obtain the target action animation.
The forward action animation is the action animation generated on the active forward animation node corresponding to the target animation replacement node. Since the animation nodes are connected in order, the forward action animation is necessarily generated earlier than the trigger action animation on the target animation replacement node. The second transition time information is the transition time information for transitioning from the preceding action to the target action.
Specifically, after the target animation replacement node is activated, the second animation state machine loads the target action data and the target object data through it to obtain the trigger action animation of the target action, and then generates the target action animation from the trigger action animation. If the target animation replacement node has a forward node list containing an active forward animation node, the second animation state machine obtains the second transition time information and transitions from the forward action animation to the trigger action animation based on it, yielding the target action animation. The animation transition based on the second transition time information proceeds in the same way as the transition based on the first transition time information described in the foregoing embodiments and is not repeated here.
Of course, when generating the target action animation based on the trigger action animation, if the target animation replacement node does not have a forward node list, the second animation state machine may generate the target action animation directly based on the trigger action animation.
In one embodiment, the second transition time information may be a default blending time or a blending time matched to the target action.
In this embodiment, if the target animation replacement node has an active forward animation node, the second animation state machine transitions from the forward action animation to the trigger action animation based on the second transition time information to obtain the target action animation, smoothly transitioning from the preceding action to the target action and ensuring the fluency of the action animation.
In one embodiment, the target node information includes target node trigger information, and the obtaining of the second transition time information includes:
acquiring target transition time information corresponding to target node trigger information; and taking the target transition time information as second transition time information.
Different actions may correspond to different transition time information when an animation transition is performed with the forward action. Therefore, different transition time information can be configured for different node trigger information; that is, different node trigger information may correspond to different transition time information. The target transition time information refers to the transition time information corresponding to the target node trigger information in the target node information.
Specifically, different actions may be configured with different node trigger information, and the action animations of different actions may be configured with different fade-in times. Therefore, the node trigger information corresponding to an action can be associated with a mixing time, so that the second animation state machine can quickly select the corresponding mixing time for fading in the action animation of the target action based on the target node trigger information. After receiving the target node information, the second animation state machine may determine the target node trigger information from it, acquire the target transition time information corresponding to that trigger information, use it as the second transition time information, and finally transition from the forward action animation to the trigger action animation based on the second transition time information to obtain the target action animation.
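As one possible realization of this lookup, the second animation state machine could keep a table from node trigger information to transition time; a sketch follows. The map contents and the 0.2s default are assumptions for illustration (consistent with the default mixing time mentioned above).

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical association of node trigger information with a mixing time (seconds),
// filled in from configuration.
std::unordered_map<uint32_t, float> transitionTimeByTrigger;

// Returns the second transition time for the given target node trigger
// information; falls back to a default mixing time if none is configured.
float secondTransitionTime(uint32_t targetTriggerId, float defaultMixingTime = 0.2f) {
    auto it = transitionTimeByTrigger.find(targetTriggerId);
    return it != transitionTimeByTrigger.end() ? it->second : defaultMixingTime;
}
```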
Referring to fig. 6A, an animation replacement node 1, an animation replacement node 2, and an animation replacement node 3 exist in the second animation state machine. Each animation replacement node has a corresponding forward node list, which may include at least one forward animation node, and each forward animation node in a forward node list is directly connected to the corresponding animation replacement node. The forward node list corresponding to animation replacement node 1 is forward node list 1, that corresponding to animation replacement node 2 is forward node list 2, and that corresponding to animation replacement node 3 is forward node list 3.
In addition, different entry connection lines can be configured for an animation replacement node as required, and different entry lines can correspond to different node trigger information and different mixing times. For example, some action animations require 0.2 seconds of mixing with the forward action animation when entering, and some require no mixing at all, so the animation transition effect can be controlled by configuring different entry lines. Referring to fig. 6B, two connection lines, connection line ① and connection line ②, exist between forward node list 1 and animation replacement node 1. The node trigger information corresponding to connection line ① is Animation ID 999980; that is, when the first animation state machine sends the node trigger information "Animation ID 999980" to the second animation state machine, the second animation state machine may activate animation replacement node 1 based on that information. The mixing time corresponding to connection line ① is 0.2s, meaning the second transition time information between the forward action animation and the trigger action animation is 0.2s. The node trigger information corresponding to connection line ② is Animation ID 999990, with a mixing time of 0.3s. That is, when the first animation state machine sends the node trigger information "Animation ID 999990" to the second animation state machine, the second animation state machine may likewise activate animation replacement node 1, but the second transition time information between the forward action animation and the trigger action animation is then 0.3s. In the animation node display interface corresponding to the second animation state machine, when a trigger operation acting on connection line ① is received, the interface can display an information popup corresponding to connection line ①, showing the corresponding node trigger information and mixing time. Similarly, when a trigger operation acting on connection line ② is received, the interface can display an information popup corresponding to connection line ②, showing the corresponding node trigger information and mixing time.
Of course, different exit connection lines can also be configured for an animation replacement node as required, and different exit lines can correspond to different node interruption information. Further, just as animations may have different fade-in times when entering, some animations may also have different fade-out times when exiting. Referring to fig. 6C, animation replacement node 1 has two exit lines, connection line ③ and connection line ④. The node interruption information corresponding to connection line ③ is interruption ID 999910; that is, when the first animation state machine sends the node interruption information "interruption ID 999910" to the second animation state machine, the second animation state machine may stop playing the action animation generated on animation replacement node 1 based on that information. The mixing time corresponding to connection line ③ is zero, meaning the playing of the action animation generated on animation replacement node 1 is stopped immediately. The node interruption information corresponding to connection line ④ is interruption ID 999920 with a mixing time of 0.1s; that is, when the first animation state machine sends the node interruption information "interruption ID 999920" to the second animation state machine, the second animation state machine may stop playing the action animation generated on animation replacement node 1, gradually fading the target action animation out over 0.1s. In the animation node display interface corresponding to the second animation state machine, when a trigger operation acting on connection line ③ is received, the interface can display an information popup corresponding to connection line ③, showing the corresponding node interruption information. Similarly, when a trigger operation acting on connection line ④ is received, the interface can display an information popup corresponding to connection line ④, showing the corresponding node interruption information.
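The entry and exit lines of fig. 6B and 6C can be pictured as small configuration records. The sketch below mirrors the values given in the figures (Animation IDs 999980/999990 with 0.2s/0.3s fade-in, interruption IDs 999910/999920 with 0s/0.1s fade-out); the struct layout itself is an assumption.

```cpp
#include <cstdint>
#include <vector>

// One entry or exit connection line of an animation replacement node (assumed layout).
struct ConnectionLine {
    uint32_t nodeInfoId;  // node trigger information (entry) or node interruption information (exit)
    float    mixingTime;  // fade-in time for entry lines, fade-out time for exit lines (seconds)
};

struct AnimationReplacementNode {
    std::vector<ConnectionLine> entryLines;  // activate this node with a given blend-in
    std::vector<ConnectionLine> exitLines;   // interrupt this node with a given blend-out
};

// Values as configured for animation replacement node 1 in fig. 6B and 6C.
AnimationReplacementNode replacementNode1 {
    /*entryLines*/ { {999980u, 0.2f}, {999990u, 0.3f} },
    /*exitLines*/  { {999910u, 0.0f}, {999920u, 0.1f} },
};
```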
In this embodiment, different node trigger information may correspond to different transition time information, and when performing animation transition on a forward motion animation and a trigger motion animation of a target motion, the second animation state machine may obtain target transition time information corresponding to the target node trigger information as second transition time information, transition from the forward motion animation to the trigger motion animation based on the second transition time information, and finally present a more appropriate and accurate animation transition effect.
In one embodiment, the method further comprises:
and when the target animation replacing node has a backward animation node, determining an action fine-grained parameter of the target animation object through the backward animation node, and adjusting the target action animation based on the action fine-grained parameter to obtain an updated action animation.
The backward animation node is an animation node used to control detail information of the target animation object's action presentation, for example, a footstep IK (Inverse Kinematics) function node or a gaze (Look At) function node. The footstep IK function node is used to keep the feet of the animation object attached to the ground when the object stands on uneven ground, e.g., a sloped roof or rocks. The gaze function node is used to control the eyes or head of the animation object so that they always gaze at the camera and move with it. The backward animation node may be directly connected to the target animation replacement node, e.g., connected directly behind it. The action fine-grained parameter refers to the detail requirement information of the target animation object when it performs an action; for example, the head of the target animation object needs to follow the camera at all times, and its feet need to stay attached to the ground. Compared with the target action animation, the updated action animation carries more detail information, and the picture it presents is more reasonable, accurate, and vivid.
Specifically, if the target animation replacement node has a backward animation node, the target animation replacement node may output the target motion animation to the backward animation node. The second animation state machine can determine the action fine-grained parameter of the target animation object through the backward animation node, and adjust the target action animation based on the action fine-grained parameter to obtain the updated action animation.
For example, referring to fig. 7, the backward animation node of the target animation replacement node is the animation node corresponding to a head gaze (Head Look) function, which controls the head of the target animation object so that it always gazes at the camera and rotates with it. If the target animation replacement node is connected to the head gaze function node, then in the finally generated updated action animation the head of the target animation object always gazes at the camera and rotates with it while the object performs the target action. In the target action animation alone, the target animation object may simply perform the target action without much detail. Finally, an updated action animation with richer detail can be displayed on the animation playing interface.
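As a sketch of how a backward animation node might apply such a fine-grained adjustment, the following post-processing step re-aims the head joint toward the camera after the replacement node has produced the target action pose. The math helpers and joint representation are assumptions, not the engine's actual head-look implementation.

```cpp
// Minimal stand-ins for the engine's math types (assumptions).
struct Vec3 { float x, y, z; };
struct Joint { Vec3 position; Vec3 forward; };

// Hypothetical backward animation node implementing a head gaze (Head Look) pass:
// after the target action animation is evaluated, blend the head's facing
// direction toward the camera, weighted so the effect can be faded in and out.
void applyHeadLook(Joint& head, const Vec3& cameraPos, float weight) {
    Vec3 toCamera { cameraPos.x - head.position.x,
                    cameraPos.y - head.position.y,
                    cameraPos.z - head.position.z };
    head.forward.x = head.forward.x * (1.0f - weight) + toCamera.x * weight;
    head.forward.y = head.forward.y * (1.0f - weight) + toCamera.y * weight;
    head.forward.z = head.forward.z * (1.0f - weight) + toCamera.z * weight;
    // A real implementation would normalize this direction and convert it to a
    // joint rotation, clamped to a plausible range of head motion.
}
```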
In this embodiment, when the target animation replacement node has a backward animation node, the action fine-grained parameter of the target animation object is controlled through the backward animation node and the target action animation is adjusted accordingly, yielding an updated action animation with richer detail.
In one embodiment, after the target animation replacing node loads the target action data and the target object data to obtain the target action animation corresponding to the target animation object, the method further includes:
acquiring an activation suspension instruction; converting the node state of the target animation replacement node from an active state to a standby state based on the activation suspension instruction; and sending instance interruption information to the first animation state machine, wherein the instance interruption information is used for stopping running of the target animation instance corresponding to the target action of the target animation object, and the target animation instance is used for monitoring the action state change of the target action and informing the second animation state machine.
The activation suspending instruction is used for suspending the activation state of the target animation replacing node and converting the node state of the target animation replacing node from the activation state to the standby state. The activation suspending instruction may be automatically generated when the node state of the upper animation node corresponding to the target animation replacement node is converted from the active state to the standby state.
Specifically, when the node state of the upper animation replacement node of the target animation replacement node changes from the active state to the standby state, the second animation state machine may automatically generate an activation suspension instruction and, according to that instruction, switch the node state of the target animation replacement node from the active state to the standby state. At this point, the second animation state machine may generate instance interruption information and send it to the first animation state machine. After receiving the instance interruption information, the first animation state machine can stop the running of the target animation instance corresponding to the target action of the target animation object. This is because a transition of the upper animation replacement node from the active state to the standby state indicates that the target animation object has undergone a drastic action change, for example, from an action of a performance nature to an action of a battle nature; at that point the target animation instance is no longer required to monitor the action state change of the target action. The second animation state machine therefore actively sends instance interruption information to the first animation state machine; the instance interruption information can carry the target node information corresponding to the target animation replacement node, so that the first animation state machine can quickly find the corresponding target animation instance and stop its running. In this way, the second animation state machine reversely interrupts the running of the target animation instance on the first animation state machine, preventing a useless target animation instance from continuing to occupy computing resources.
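The reverse interruption can be sketched as follows, with the second animation state machine carrying the target node information in the instance interruption message so the first state machine can locate and stop the matching instance; all names here are illustrative assumptions.

```cpp
#include <cstdint>
#include <unordered_map>

struct AnimationInstance { bool running = true; };

// Message sent from the second animation state machine back to the first one.
struct InstanceInterruptionInfo { uint64_t targetNodeId; };

class FirstAnimationStateMachine {
public:
    // Instances indexed by the replacement node they feed (assumed keying).
    std::unordered_map<uint64_t, AnimationInstance> instancesByNode;

    // On receipt of instance interruption information, find the target
    // animation instance via the carried node information and stop it,
    // so it no longer occupies computing resources.
    void onInstanceInterruption(const InstanceInterruptionInfo& info) {
        auto it = instancesByNode.find(info.targetNodeId);
        if (it != instancesByNode.end()) {
            it->second.running = false;
            instancesByNode.erase(it);  // release the now-useless instance
        }
    }
};
```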
For example, the target motion is a dance motion, the target animation replacement node is currently in an active state, and an upper animation replacement node of the target animation replacement node is also in an active state, where the upper animation replacement node of the target animation replacement node is used to represent that the target animation object can currently perform various motion properties. When the target animation object has violent action change and is changed from action of performance nature to action of battle nature, the node state of the upper layer animation replacement node of the target animation replacement node is converted into the standby state from the active state, and the upper layer animation node representing that the target animation object can currently execute actions of various battle nature is converted into the active state from the standby state. If the node state of the upper animation replacement node of the target animation replacement node transitions from the active state to the standby state, the node state of the target animation replacement node should also transition from the standby state to the active state. At this time, the target animation instance corresponding to the dance action of the target animation object does not need to be in operation, and the computing resource does not need to be continuously occupied.
It can be understood that, for the related contents of the target animation example, reference may be made to the contents described in the foregoing related embodiments, and details are not described here.
In this embodiment, if the node state of the target animation replacement node is switched from the active state to the standby state, the second animation state machine reversely interrupts the running of the target animation instance in the first animation state machine, so that the useless target animation instance can be prevented from continuously occupying the computing resources, and the computing resources are saved.
The application also provides an application scenario to which the above animation data processing method is applied, in particular a game scenario. The application of the animation data processing method in this scenario is as follows:
The animation data processing method can be applied to a heavyweight MMO (Massively Multiplayer Online) game, simplifying the animation state machine of a complex character, reducing the consumption of memory and CPU (Central Processing Unit) at runtime, and simplifying the data configuration flow. In an MMO game, the actions of game characters are rich and the animation state machine is extremely complex and huge; owing to the game type, there are many application scenarios such as multi-player battle and social interaction, and the animation logic of some actions is simple and similar. In the traditional method, when a Morpheme Network is used, the number of animation nodes in the Network animation state machine grows linearly with the number of actions required by game planning and cannot be restrained. With the animation data processing method, the nodes in the Network animation state machine can be reused to the maximum extent and the scale of a game character's animation state machine can be effectively controlled, thereby reducing the memory and CPU consumption of the animation system and allowing more players with low-end devices to be supported. The Network animation state machine is the animation state machine in the Morpheme animation engine. Morpheme is a commercial animation engine that supports cross-platform animation resource compression, playback, rigid-body simulation, animation state machines, and other functions. In a game application scene, the animation state machine connects upward to the game play logic and controls downward the playback, mixing, IK, and other computations of the animation.
I. Information configuration
1. Creating animation replacement nodes
A plurality of animation replacement nodes (which may also be referred to as simple animation nodes) are created in the Network animation state machine.
In addition, different entry connection lines can be configured for the animation replacement nodes according to animation requirements (for example, some animations need 0.2 seconds of mixing when entering while others need none), with different node trigger information configured to control the direction of data flow. Furthermore, different exit connection lines can be configured for the animation replacement nodes according to animation requirements, with different node interruption information configured to control animation interruption. Animation nodes for controlling action detail information can also be connected after the animation replacement nodes as required.
2. Editing animation plan tables by animation editor
The terminal is provided with an animation editor, and a game developer can open the animation editor to edit animation planning tables (namely animation configuration information) corresponding to all animation objects in the game in the animation editor. The animation objects in the game include at least one of a Player character (Player), a non-Player character (NPC), and an item (Entity). The game developer can configure animation configuration information corresponding to the game object in the animation editor, wherein the animation configuration information comprises animation playing information of at least one action, namely the animation planning table comprises at least one animation item, and one animation item represents the animation playing information of one action. The game developer can add the node path of the animation replacement node in the Network animation state machine (namely, the second animation state machine) in the animation planning table, and add node information such as node trigger information and node interruption information corresponding to the animation replacement node. Multiple actions corresponding to the same action type identifier may correspond to the same animation replacement node.
The animation play information may include information of various fields as shown in table 1.
TABLE 1 (contents provided as an image in the original publication)
Referring to fig. 8A, fig. 8A is a diagram illustrating an interface of an animation plan table configuring a male game character in an animation editor. The terminal can send the animation plan table corresponding to each animation object to the first animation state machine for storage.
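Since the field list of Table 1 is only available as an image, the following is a hypothetical shape for one animation plan table entry, covering the fields the surrounding text names (node path, node trigger information, node interruption information, trigger condition, action data path); the field names are assumptions.

```cpp
#include <cstdint>
#include <string>

// Hypothetical animation plan table entry: the animation playing information
// of one action, as edited in the animation editor.
struct AnimationPlanEntry {
    std::string actionName;          // e.g. a dance action
    uint32_t    actionTypeId;        // actions sharing a type share a replacement node
    std::string triggerCondition;    // matched against target interaction information
    std::string actionDataPath;      // storage path of the action data
    std::string nodePath;            // e.g. "Core|Interaction|SimpleAnimation1"
    uint32_t    nodeTriggerId;       // node trigger information, e.g. 999980
    uint32_t    nodeInterruptionId;  // node interruption information, e.g. 999910
    float       presetPlayTime;      // preset playing time of the action animation (s)
};
```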
In one embodiment, the animation plan table supports hot update, which can significantly improve production efficiency. Hot update means that changes to the animation plan table are synchronized to the game client without closing it, achieving an edit-and-view-immediately effect. In addition, when an animation is added, game developers only need to edit the animation planning table; there is no need to reconfigure the Network animation state machine or go through a complex export process. The game client includes the first animation state machine and the second animation state machine.
II. Simple animation state machine (i.e., the first animation state machine) and Network animation state machine (i.e., the second animation state machine)
This is achieved by moving a large amount of simple and repetitive logic out of the Network animation state machine into an autonomously built simple animation state machine (Simple Animation). The action data output by the simple animation state machine is substituted into a number of fixed animation replacement nodes in the Network. In this way, the nodes can be multiplexed while all the characteristics and effects of the Network animation state machine are retained, so that the scale of the Network animation state machine is greatly reduced, and the future growth of animation nodes in the Network animation state machine becomes independent of the number of new requirements and related only to the types of the new requirements.
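A rough sketch, under assumed names, of how the simple animation state machine delegates to a fixed replacement node in the Network rather than to a per-action node:

```cpp
#include <cstdint>
#include <string>

struct ActionData { std::string path; };

// Assumed minimal interfaces for the two state machines.
class NetworkAnimationStateMachine {
public:
    // Activates the fixed replacement node addressed by nodePath/triggerId and
    // substitutes the supplied action data into it (assumed entry point).
    void activateReplacementNode(const std::string& nodePath, uint32_t triggerId,
                                 const ActionData& data) {
        // Stub: a real implementation would look up the node by path, send it
        // the trigger information, and load the action data into the node.
        (void)nodePath; (void)triggerId; (void)data;
    }
};

class SimpleAnimationStateMachine {
public:
    explicit SimpleAnimationStateMachine(NetworkAnimationStateMachine& net) : net_(net) {}

    // When an animation entry is triggered, the simple state machine does not
    // add a node to the Network; it reuses the entry's configured replacement
    // node, so Network growth depends on action types, not action count.
    void onActionTriggered(const std::string& nodePath, uint32_t triggerId,
                           const ActionData& data) {
        net_.activateReplacementNode(nodePath, triggerId, data);
    }

private:
    NetworkAnimationStateMachine& net_;
};
```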
Referring to fig. 8B, the animation data processing method includes the steps of:
1. the first animation state machine obtains target interaction information, and the target interaction information carries the animation object identification of the target animation object.
2. The first animation state machine judges whether an upper animation node of the animation replacement node is activated.
The first animation state machine can obtain a target animation plan table corresponding to the target animation object based on the animation object identification. The node paths of the animation replacement nodes corresponding to the actions are recorded in the target animation planning table, and the upper animation nodes of the animation replacement nodes can be determined according to the node paths. And the first animation state machine sends a node state query request to the second animation state machine to query whether the upper animation node of the animation replacement node is in an activated state.
3. And if the upper animation node of the animation replacement node is activated, the first animation state machine judges whether the target interaction information is matched with the action triggering condition, so that the currently triggered target action is determined.
For the animation entries configured with node information of animation replacement nodes, whether the target interaction information matches the trigger condition of an animation entry is judged only when the upper animation node of the corresponding animation replacement node is activated, so as to reduce the number of entries traversed. For example, if the node path of an animation replacement node is Core|Interaction|SimpleAnimation1, the upper animation node of that replacement node is Core|Interaction; only when the node state of that upper animation node is the active state is it judged whether an action configured with this animation replacement node is triggered, that is, whether the target interaction information matches the action trigger information of the action.
4. If the upper animation node of the animation replacement node is not activated, the target interaction information cannot trigger the generation and playing of the action animation of any action of the target animation object at the moment. If the upper animation node of the animation replacement node is activated, but the target interaction information is not matched with the trigger condition of any action of the target animation object, the target interaction information cannot trigger the generation and playing of the action animation of any action of the target animation object at the moment.
5. The first animation state machine creates a target animation example corresponding to the target action of the target animation object, monitors the action state change of the target action based on the target animation example, and sends corresponding data to the second animation state machine when the action state changes.
6. And after the target action is triggered, the first animation state machine sends target action data corresponding to the target action to a target animation replacing node in the second animation state machine through a target animation example.
The first animation state machine can send target node information corresponding to the target animation replacing node to the second animation state machine through the target animation instance, so that the second animation state machine activates the target animation replacing node based on the target node information.
7. And the second animation state machine loads the target action data and the target object data corresponding to the target animation object through the target animation replacing node, generates the target action animation corresponding to the target animation object, and plays the target action animation.
8. The second animation state machine detects whether the target animation replacement node has been inactivated.
When the upper animation node of the target animation replacement node in the second animation state machine jumps, the node state of the target animation replacement node changes from the active state to the standby state. At this point, the second animation state machine needs to reversely interrupt the running of the target animation instance in the first animation state machine.
If the target animation replacement node is already in the standby state, the second animation state machine can stop the playing of the target motion animation.
9. And the first animation state machine judges whether the interruption interaction information matched with the interruption condition of the target action is acquired or not.
If the first animation state machine obtains the interruption interaction information matched with the interruption condition of the target action, the first animation state machine sends target node interruption information to a target animation replacement node in the second animation state machine, and therefore the second animation state machine can stop playing of the target action animation.
10. And judging whether the playing time of the target action animation reaches the preset playing time or not.
The first animation state machine may determine whether the playing time of the target motion animation reaches a preset playing time. And if the playing time of the target motion animation reaches the preset playing time, the first animation state machine sends notification information to the second animation state machine so that the second animation state machine finishes the playing of the target motion animation.
Or the second animation state machine autonomously judges whether the playing time of the target motion animation reaches the preset playing time. And if the playing time of the target motion animation reaches the preset playing time, the second animation state machine finishes the playing of the target motion animation.
Each time a video frame is updated, the system can check once whether the target animation replacement node has been deactivated, whether interruption interaction information matching the interruption condition of the target action has been acquired, and whether the playing time of the target action animation has reached the preset playing time; a sketch of this per-frame check follows.
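A condensed sketch of the per-frame checks in steps 8 to 10, under assumed names; the conditions follow the description above and are not taken from the engine.

```cpp
// Assumed per-frame state for one playing target action animation.
struct PlaybackState {
    bool  replacementNodeActive;  // step 8: node still active in the Network?
    bool  interruptionMatched;    // step 9: interruption interaction info matched?
    float playedTime;             // step 10: elapsed playing time (seconds)
    float presetPlayTime;         // configured preset playing time (seconds)
};

// Called once per updated video frame; returns true if playback should stop.
bool shouldStopTargetActionAnimation(const PlaybackState& s) {
    if (!s.replacementNodeActive) return true;          // node fell back to standby
    if (s.interruptionMatched)    return true;          // interruption condition met
    if (s.playedTime >= s.presetPlayTime) return true;  // preset playing time reached
    return false;
}
```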
In this embodiment, a lightweight simple animation state machine is realized. In the animation planning table stored in the simple animation state machine, the required animation entries are configured with the node information of animation replacement nodes in the Network animation state machine, including node paths, node trigger information, node interruption information, and the like. When an animation entry in the animation planning table is triggered, that is, when an action of the animation object is triggered, the simple animation state machine sends the node information of the corresponding animation replacement node to the Network animation state machine, so as to trigger the designated animation replacement node through the node trigger information. The simple animation state machine sends the corresponding action data to the designated animation replacement node, and the Network animation state machine uses its original logic as usual to realize all functions such as animation mixing and IK calculation. In this way, the number of nodes of the Morpheme Network is effectively controlled while its original functions are fully retained, reducing memory consumption at runtime.
In one embodiment, as shown in FIG. 8C, the software architecture in the computer device may be divided into an animation interface layer, an animation logic layer, and an animation processing layer. The animation processing layer comprises a first animation state machine, a second animation state machine, an animation resource manager and an animation pipeline. The animation interface layer is used for receiving the triggering operation of the user, generating initial interaction information and sending the initial interaction information to the animation logic layer. The animation logic layer is used for converting the initial interaction information and converting the information sent by the upper layer into information which can be identified by an animation state machine. The animation pipeline is used for controlling rendering and playing of the animation, and rendering the action animation into a video for playing. The animation resource manager is used for managing animation resources.
And the animation interface layer receives the interactive operation acted on the target animation object, generates initial interactive information based on the interactive operation and sends the initial interactive information to the animation logic layer. And the animation logic layer converts the initial interaction information to generate target interaction information, and sends the target interaction information to the first animation state machine. For example, an animation playing interface can be displayed on the terminal, and a target animation object is displayed on the animation playing interface. The user can trigger the target animation object on the animation playing interface or trigger the related control for controlling the target animation object on the animation playing interface to generate the initial interaction information, for example, the user clicks the 'clever fox dance' control in the game interface, so as to generate the interaction operation acting on the game character. And the animation interface layer on the computer equipment acquires the interactive operation acted on the target animation object by the user and generates initial interactive information based on the interactive operation. The animation logic layer and the first animation state machine agree on an information mode of the interactive information, so that the first animation state machine can rapidly identify the interactive information. After receiving the initial interaction information, the animation logic layer can perform information mode conversion on the initial interaction information to obtain target interaction information, and then sends the target interaction information to the first animation state machine.
The first animation state machine obtains target animation configuration information corresponding to the target animation object, determines a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, and determines a target animation replacement node based on a target action type identifier corresponding to the target action. And the first animation state machine acquires target action data corresponding to the target action and target node information of a target animation replacement node corresponding to the target action type identification, and sends the target action data and the target node information to the second animation state machine.
And the second animation state machine activates the target animation replacing node based on the target node information, acquires target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacing node to obtain the target action animation corresponding to the target animation object. And the second animation state machine plays the target action animation on the animation interface layer through the animation pipeline.
The animation resource manager may count the number of concurrent uses of each animation resource. When the concurrent use count of an animation resource drops to 0, the resource is released from memory. For example, different players and non-players within a game may use the same type of animation object, sharing the same skeleton but with different appearances, and they may all trigger the same action, in which case they can share the same action data. When the concurrent use count of that action data is 0, nobody is using it, and the computer device can release it from memory.
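The concurrent-use counting can be sketched as simple reference counting over shared action data; the class and method names are assumptions.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Hypothetical animation resource manager: counts concurrent users of each
// animation resource and releases a resource when its count drops to zero.
class AnimationResourceManager {
public:
    void acquire(const std::string& resourceId) { ++useCount_[resourceId]; }

    void release(const std::string& resourceId) {
        auto it = useCount_.find(resourceId);
        if (it == useCount_.end()) return;
        if (--it->second == 0) {
            useCount_.erase(it);
            freeFromMemory(resourceId);  // no user left: release the action data
        }
    }

private:
    void freeFromMemory(const std::string& resourceId) {
        // Stub: a real manager would unload the animation data from memory here.
        (void)resourceId;
    }
    std::unordered_map<std::string, uint32_t> useCount_;
};
```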
It can be understood that the specific data processing procedures of the first animation state machine and the second animation state machine may refer to the methods described in the foregoing related embodiments, and are not described herein again.
It should be understood that although the steps in the flowcharts of fig. 2, 4, and 5 are shown in sequence as indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2, 4, and 5 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9A, there is provided an animation data processing apparatus, which may be a part of a computer device using a software module or a hardware module, or a combination of the two, the apparatus specifically includes: an interaction information acquisition module 902, a configuration information acquisition module 904, a target action determination module 906, and an action animation generation module 908, wherein:
an interaction information obtaining module 902, configured to obtain target interaction information, where the target interaction information carries an animation object identifier corresponding to a target animation object.
The configuration information obtaining module 904 is configured to obtain target animation configuration information corresponding to the animation object identifier, where the target animation configuration information is used to configure animation playing information of at least one action corresponding to the target animation object.
And a target action determining module 906, configured to determine a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, where the target action has a corresponding target action type identifier.
The motion animation generating module 908 is configured to obtain target motion data corresponding to the target motion and target node information of a target animation replacement node corresponding to the target motion type identifier, send the target motion data and the target node information to the second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, obtain target object data corresponding to the target animation object, and load the target motion data and the target object data through the target animation replacement node to obtain a target motion animation corresponding to the target animation object.
In one embodiment, the action carries an action type identification. As shown in fig. 9B, the animation data processing apparatus further includes:
the node information configuring module 901 is configured to perform action clustering on each action corresponding to the same action type identifier to obtain action cluster clusters corresponding to each action type identifier, perform node allocation on each action cluster based on at least two candidate replacement nodes established in the second animation state machine in advance to obtain animation replacement nodes corresponding to each action cluster, and configure node information of the corresponding animation replacement nodes in animation playing information of each action. The action animation generation module is also used for acquiring target node information from target animation playing information corresponding to the target action.
In one embodiment, the node information includes a node path, and the animation playback information includes action trigger information. The target action determining module is further used for obtaining node information of animation replacing nodes corresponding to all actions, determining an upper layer animation node corresponding to the animation replacing node based on the node path, obtaining a node state of the upper layer animation node corresponding to the animation replacing node, taking an action corresponding to the animation replacing node of which the node state of the upper layer animation node is an activated state as a candidate action, and performing trigger action detection on the target animation object based on the target interaction information and animation playing information corresponding to all the candidate actions to obtain a target action corresponding to the target animation object.
In one embodiment, the animation playing information includes action triggering information, and the target action determining module is further configured to match the target interaction information with the action triggering information corresponding to each candidate action, and use a candidate action corresponding to the action triggering information that is successfully matched as the target action.
In one embodiment, the animation playing information includes a storage path of motion data corresponding to the motion, and the motion animation generation module is further configured to obtain target animation playing information corresponding to the target motion, and obtain the target motion data based on the target storage path in the target animation playing information.
In one embodiment, the target node information includes a target node path and target node trigger information corresponding to the target animation replacement node. The action animation generation module is also used for sending the target action data, the target node path and the target node trigger information to a second animation state machine, so that the second animation state machine sends the target node trigger information to a target animation replacement node based on the target node path, the target action data and the target object data are loaded through the target animation replacement node, a target action animation corresponding to the target animation object is obtained, and the target node trigger information is used for activating the target animation replacement node.
In one embodiment, the animation playing information comprises action interruption information, and the target node information comprises a target node path corresponding to the target animation replacing node and target node interruption information. The action animation generation module is further used for sending the target node path and the target node interruption information to the second animation state machine when the interruption interaction information matched with the action interruption information corresponding to the target action is acquired in the playing process of the target action animation, so that the second animation state machine sends the target node interruption information to the target animation replacement node based on the target node path, and the target node interruption information is used for interrupting the playing of the target action animation.
In one embodiment, the animation play information includes motion skip information. The action animation generating module is further used for acquiring jump action data of jump actions corresponding to the target actions when jump interaction information matched with action jump information corresponding to the target actions is acquired in the playing process of the target action animations, acquiring first transition time information, and sending the jump action data, the first transition time information and the target node information to the second animation state machine, so that the second animation state machine loads the jump action data and the target object data through target animation replacing nodes corresponding to the target node information to obtain jump action animations corresponding to the target animation objects, and the jump action animations are transited from the target action animations to the jump action animations based on the first transition time information to obtain fusion action animations.
In one embodiment, the animation playing information comprises a storage path corresponding to motion data of the motion. The action animation generation module is also used for generating a target animation example corresponding to the target action of the target animation object based on the target animation playing information corresponding to the target action and the object attribute information corresponding to the animation object identification, running the target animation example, acquiring target action data based on a target storage path in the target animation playing information, and sending the target action data and the target node information to the second animation state machine through the target animation example.
In one embodiment, as shown in fig. 10A, there is provided an animation data processing apparatus, which may be a part of a computer device using a software module or a hardware module, or a combination of the two, the apparatus specifically includes: an information receiving module 1002, a node activating module 1004, an information obtaining module 1006 and an animation generating module 1008, wherein:
the information receiving module 1002 is configured to receive target action data and target node information sent by a first animation state machine; the target action data is action data corresponding to a target action of a target animation object, the target action is determined by a first animation state machine based on target interaction information carrying an animation object identifier corresponding to the target animation object and target animation configuration information corresponding to the animation object identifier, the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object, the target node information is node information of a target animation replacement node, and the target animation replacement node is determined based on a target action type identifier corresponding to the target action.
A node activation module 1004 for activating the target animation replacement node based on the target node information.
The information obtaining module 1006 is configured to obtain target object data corresponding to the target animation object.
And the animation generation module 1008 is configured to load the target action data and the target object data through the target animation replacement node to obtain a target action animation corresponding to the target animation object.
In an embodiment, the node activation module is further configured to obtain a forward node list corresponding to the target animation replacement node, and activate the target animation replacement node based on the target node information when there is a forward animation node in an activated state in the forward node list.
In an embodiment, the animation generation module is further configured to load the target action data and the target object data through the target animation replacement node, obtain a trigger action animation corresponding to the target action of the target animation object, obtain the forward action animation corresponding to the forward animation node in the activated state, obtain the second transition time information, and transition from the forward action animation to the trigger action animation based on the second transition time information, so as to obtain the target action animation.
In an embodiment, the target node information includes target node trigger information, and the animation generation module is further configured to obtain target transition time information corresponding to the target node trigger information, and use the target transition time information as second transition time information.
In one embodiment, the animation generation module is further configured to determine, by the backward animation node, an action fine-grained parameter of the target animation object when the target animation replacement node has a backward animation node, and adjust the target action animation based on the action fine-grained parameter to obtain an updated action animation.
In one embodiment, as shown in fig. 10B, the animation data processing apparatus further includes:
the instance interruption module 1009 is configured to acquire an activation interruption instruction, convert the node state of the target animation replacement node from the active state to the standby state based on the activation interruption instruction, and send instance interruption information to the first animation state machine, where the instance interruption information is used to stop running a target animation instance corresponding to a target action of the target animation object, and the target animation instance is used to monitor an action state change of the target action and notify the second animation state machine.
For the specific definition of the animation data processing device, reference may be made to the above definition of the animation data processing method, which is not described herein again. The respective modules in the animation data processing apparatus described above may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 11. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing various animation configuration information, node information of various animation replacing nodes and other data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an animation data processing method.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 12. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an animation data processing method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the configurations shown in fig. 11 and 12 are block diagrams of only some of the configurations relevant to the present disclosure, and do not constitute a limitation on the computing devices to which the present disclosure may be applied, and that a particular computing device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The computer instructions are read by a processor of a computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the steps in the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical storage, or the like. Volatile Memory can include Random Access Memory (RAM) or external cache Memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (15)

1. An animation data processing method, applied to a first animation state machine, the method comprising:
acquiring target interaction information, wherein the target interaction information carries an animation object identifier corresponding to a target animation object;
acquiring target animation configuration information corresponding to the animation object identification, wherein the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object;
determining a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, wherein the target action has a corresponding target action type identifier;
acquiring target action data corresponding to the target action and target node information of a target animation replacing node corresponding to the target action type identification, sending the target action data and the target node information to a second animation state machine, enabling the second animation state machine to activate the target animation replacing node based on the target node information, acquiring target object data corresponding to the target animation object, and loading the target action data and the target object data through the target animation replacing node to obtain the target action animation corresponding to the target animation object.
2. The method according to claim 1, wherein each action carries an action type identifier, and before the acquiring of the target interaction information, the method further comprises:
performing action clustering on the actions corresponding to the same action type identifier to obtain an action cluster corresponding to each action type identifier;
performing node distribution on each action cluster based on at least two candidate replacement nodes created in advance in the second animation state machine, to obtain an animation replacement node corresponding to each action cluster;
configuring node information of the corresponding animation replacement node into the animation playing information of each action;
wherein the acquiring of the target node information of the target animation replacement node corresponding to the target action type identifier comprises:
acquiring the target node information from the target animation playing information corresponding to the target action.
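A small sketch of the setup recited in claim 2 may help: actions are clustered by action type identifier, the clusters are distributed over pre-created candidate replacement nodes, and the chosen node information is written back into each action's playing information so it can later be read directly from there. The round-robin distribution policy and all names are assumptions; the claim leaves the distribution strategy open.

```python
# Hypothetical sketch of claim 2: cluster actions by action type identifier,
# then distribute each cluster to one of the pre-created replacement nodes.
from collections import defaultdict
from itertools import cycle

actions = [  # (action name, action type identifier) -- illustrative data
    ("sword_slash", "attack"), ("kick", "attack"),
    ("wave", "emote"), ("bow", "emote"), ("roll", "dodge"),
]
candidate_nodes = ["Replace_Node_A", "Replace_Node_B"]  # created in the 2nd machine

# Action clustering: one cluster per action type identifier.
clusters = defaultdict(list)
for name, type_id in actions:
    clusters[type_id].append(name)

# Node distribution: assign each cluster a replacement node (round-robin here;
# the patent leaves the distribution policy open).
node_of_cluster = dict(zip(clusters, cycle(candidate_nodes)))

# Configure the node information into each action's playing information, so the
# target node can later be read straight from the playing info (claim 2).
playing_info = {
    name: {"node_info": {"node_path": node_of_cluster[type_id]}}
    for name, type_id in actions
}
print(playing_info["kick"])  # {'node_info': {'node_path': 'Replace_Node_A'}}
```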
3. The method according to claim 1, wherein the node information includes a node path, and the animation playing information includes action trigger information;
the determining the target action corresponding to the target animation object based on the target interaction information and the target animation configuration information includes:
acquiring node information of the animation replacement node corresponding to each action;
determining an upper-layer animation node corresponding to the animation replacement node based on the node path, and acquiring a node state of the upper-layer animation node corresponding to the animation replacement node;
taking, as a candidate action, an action whose corresponding animation replacement node has an upper-layer animation node in the activated state;
and performing trigger action detection on the target animation object based on the target interaction information and the animation playing information corresponding to each candidate action, to obtain the target action corresponding to the target animation object.
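The candidate filtering of claim 3 can be sketched as follows: actions whose upper-layer node (derived from the node path) is not activated are dropped before trigger detection runs. The slash-delimited paths and the exact trigger-matching rule are illustrative assumptions.

```python
# Hypothetical sketch of claim 3: only actions whose replacement node sits under
# an activated upper-layer node are candidates; trigger detection then picks one.
node_states = {  # node path -> state in the 2nd machine's graph (illustrative)
    "Locomotion": "activated",
    "Swimming": "inactive",
}

actions = {
    "jump":   {"node_path": "Locomotion/Replace_Jump", "trigger": "press_A"},
    "paddle": {"node_path": "Swimming/Replace_Paddle", "trigger": "press_A"},
}

def upper_layer(node_path):
    # The upper-layer animation node is derived from the node path.
    return node_path.rsplit("/", 1)[0]

def detect_target_action(interaction_input):
    candidates = [name for name, info in actions.items()
                  if node_states[upper_layer(info["node_path"])] == "activated"]
    for name in candidates:
        if actions[name]["trigger"] == interaction_input:
            return name
    return None

print(detect_target_action("press_A"))  # 'jump': 'paddle' is filtered out early
```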
4. The method of claim 1, wherein the target node information comprises a target node path and target node trigger information corresponding to the target animation replacement node;
wherein the sending of the target action data and the target node information to the second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, acquires target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object, comprises:
sending the target action data, the target node path and the target node trigger information to the second animation state machine, so that the second animation state machine sends the target node trigger information to the target animation replacement node based on the target node path, and loads the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object, wherein the target node trigger information is used for activating the target animation replacement node.
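As a toy illustration of claim 4's routing step, the sketch below walks a node path segment by segment to deliver the trigger information to the replacement node at its end. The tree layout and all names are assumptions.

```python
# Hypothetical sketch of claim 4: route the node trigger information down the
# target node path until it reaches the target animation replacement node.
graph = {"Locomotion": {"Replace_Jump": {}}}  # illustrative node tree

def deliver(node_path, trigger_info):
    tree, node = graph, None
    for name in node_path.split("/"):
        tree = tree[name]   # descend one path segment
        node = name
    # The last node on the path is the replacement node; the trigger
    # information is what actually activates it (per claim 4).
    print(f"{node} received {trigger_info} -> activated")

deliver("Locomotion/Replace_Jump", {"activate": True})
```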
5. The method of claim 1, wherein the animation playing information includes action jump information, and the method further comprises:
in the playing process of the target action animation, when jump interaction information matching the action jump information corresponding to the target action is acquired, acquiring jump action data of a jump action corresponding to the target action, and acquiring first transition time information;
and sending the jump action data, the first transition time information and the target node information to the second animation state machine, so that the second animation state machine loads the jump action data and the target object data through the target animation replacement node corresponding to the target node information to obtain a jump action animation corresponding to the target animation object, and transitions from the target action animation to the jump action animation based on the first transition time information to obtain a fused action animation.
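The transition in claim 5 is, in essence, a time-weighted fusion of two animations over the first transition time. A linear crossfade is one plausible reading (the claim does not fix the blending curve); the pose representation and values below are toy assumptions.

```python
# Hypothetical sketch of claim 5's transition: blend from the target action
# animation to the jump action animation over the first transition time.
def fuse_animations(target_pose, jump_pose, t, transition_time):
    """Linear crossfade; t is seconds since the jump was triggered."""
    w = min(t / transition_time, 1.0)  # 0 -> target anim, 1 -> jump anim
    return [(1.0 - w) * a + w * b for a, b in zip(target_pose, jump_pose)]

# Toy 'poses' as joint angles; transition time of 0.2 s (illustrative values).
for t in (0.0, 0.1, 0.2):
    print(t, fuse_animations([0.0, 10.0], [90.0, 30.0], t, 0.2))
# 0.0 -> pure target action, 0.1 -> halfway fusion, 0.2 -> pure jump action
```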
6. The method according to any one of claims 1 to 5, wherein the animation playing information comprises a storage path corresponding to the action data of an action;
wherein the acquiring of the target action data corresponding to the target action and the target node information of the target animation replacement node corresponding to the target action type identifier, and the sending of the target action data and the target node information to the second animation state machine comprise:
generating a target animation instance corresponding to the target action of the target animation object based on the target animation playing information corresponding to the target action and the object attribute information corresponding to the animation object identifier;
running the target animation instance, and acquiring the target action data based on a target storage path in the target animation playing information;
and sending the target action data and the target node information to the second animation state machine through the target animation instance.
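Claim 6's per-action animation instance can be sketched as a small object that resolves the storage path, loads the action data, and forwards it to the second state machine. The JSON file format, the callback shape, and all names are assumptions.

```python
# Hypothetical sketch of claim 6: a per-(object, action) animation instance that
# resolves the storage path, loads the action data, and forwards it onward.
import json
import os
import tempfile

class AnimationInstance:
    def __init__(self, playing_info, object_attrs, send):
        self.playing_info = playing_info  # includes the storage path + node info
        self.object_attrs = object_attrs  # from the animation object identifier
        self.send = send                  # callback into the 2nd state machine

    def run(self):
        with open(self.playing_info["storage_path"]) as f:
            action_data = json.load(f)    # fetch action data via the storage path
        self.send(action_data, self.playing_info["node_info"], self.object_attrs)

# Demo: write a fake action clip to disk, then run the instance.
path = os.path.join(tempfile.mkdtemp(), "jump.json")
with open(path, "w") as f:
    json.dump({"clip": "jump", "frames": 24}, f)

inst = AnimationInstance(
    playing_info={"storage_path": path, "node_info": {"node_path": "Replace_Jump"}},
    object_attrs={"object_id": "hero"},
    send=lambda data, node, attrs: print("to 2nd machine:", data, node, attrs),
)
inst.run()
```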
7. An animation data processing method, applied to a second animation state machine, the method comprising:
receiving target action data and target node information sent by a first animation state machine; the target action data is action data corresponding to a target action of a target animation object, the target action is determined by the first animation state machine based on target interaction information carrying an animation object identifier corresponding to the target animation object and target animation configuration information corresponding to the animation object identifier, the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object, the target node information is node information of a target animation replacement node, and the target animation replacement node is determined based on a target action type identifier corresponding to the target action;
activating the target animation replacement node based on the target node information;
acquiring target object data corresponding to the target animation object;
and loading the target action data and the target object data through the target animation replacement node to obtain a target action animation corresponding to the target animation object.
8. The method of claim 7, wherein activating the target animation replacement node based on the target node information comprises:
acquiring a forward node list corresponding to the target animation replacement node;
and when a forward animation node in the activated state exists in the forward node list, activating the target animation replacement node based on the target node information.
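A minimal sketch of claim 8's gate: activation proceeds only if at least one node in the replacement node's forward node list is itself activated. The node names and the any-of policy details are assumptions.

```python
# Hypothetical sketch of claim 8: the replacement node only activates when some
# node in its forward node list is itself in the activated state.
forward_nodes = {  # replacement node -> its forward node list (illustrative)
    "Replace_Jump": ["Idle", "Run"],
}
node_states = {"Idle": "standby", "Run": "activated", "Replace_Jump": "standby"}

def try_activate(replacement_node):
    if any(node_states[n] == "activated" for n in forward_nodes[replacement_node]):
        node_states[replacement_node] = "activated"
        return True
    return False  # no activated forward node: the request is ignored

print(try_activate("Replace_Jump"), node_states["Replace_Jump"])  # True activated
```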
9. The method of claim 8, wherein the loading of the target action data and the target object data through the target animation replacement node to obtain the target action animation corresponding to the target animation object comprises:
loading the target action data and the target object data through the target animation replacement node to obtain a trigger action animation corresponding to the target action of the target animation object;
acquiring a forward action animation corresponding to the forward animation node in the activated state;
acquiring second transition time information;
and transitioning from the forward action animation to the trigger action animation based on the second transition time information to obtain the target action animation.
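Claim 9's blend from the forward action animation into the trigger action animation can be sketched as a per-frame crossfade over the second transition time. The frame rate, the clip functions, and all values are toy assumptions.

```python
# Hypothetical sketch of claim 9: the freshly activated replacement node's
# trigger animation is eased in from the still-playing forward node's animation
# over the second transition time, frame by frame.
FPS = 4                       # toy frame rate
second_transition_time = 1.0  # seconds (illustrative)

def sample(anim, t):          # toy clips: one joint angle as a function of time
    return {"forward_run": 10.0 * t, "trigger_jump": 80.0}[anim]

frames = []
for frame in range(int(FPS * second_transition_time) + 1):
    t = frame / FPS
    w = min(t / second_transition_time, 1.0)
    fused = (1 - w) * sample("forward_run", t) + w * sample("trigger_jump", t)
    frames.append(round(fused, 2))

print(frames)  # starts on the forward animation, ends on the trigger animation
```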
10. The method of claim 7, further comprising:
and when the target animation replacement node has a backward animation node, determining an action fine-grained parameter of the target animation object through the backward animation node, and adjusting the target action animation based on the action fine-grained parameter to obtain an updated action animation.
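One plausible reading of claim 10's backward node, sketched below, is a post-processing pass that perturbs the generated pose by a fine-grained parameter (here a per-joint offset); the parameter's concrete meaning is an assumption, as the claim does not define it.

```python
# Hypothetical sketch of claim 10: a backward animation node post-processes the
# replacement node's output with a fine-grained parameter (here, a joint offset).
def backward_adjust(pose, fine_grained_params):
    """Add a per-joint offset on top of the already-generated target animation."""
    return [angle + fine_grained_params.get(i, 0.0) for i, angle in enumerate(pose)]

target_action_pose = [0.0, 45.0, 90.0]                        # replacement node output
updated_pose = backward_adjust(target_action_pose, {1: 5.0})  # nudge joint 1
print(updated_pose)  # [0.0, 50.0, 90.0] -- the 'updated action animation'
```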
11. The method according to any one of claims 7 to 10, wherein after the target action data and the target object data are loaded by the target animation replacement node to obtain the target action animation corresponding to the target animation object, the method further comprises:
acquiring an activation suspension instruction;
converting the node state of the target animation replacement node from the activated state to a standby state based on the activation suspension instruction;
and sending instance interruption information to the first animation state machine, wherein the instance interruption information is used for stopping the running of a target animation instance corresponding to the target action of the target animation object, and the target animation instance is used for monitoring action state changes of the target action and notifying the second animation state machine.
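Claim 11's tear-down path, sketched with hypothetical names: the second machine demotes the node to standby and notifies the first machine, which then stops the instance that was monitoring the action.

```python
# Hypothetical sketch of claim 11: suspending the replacement node on the 2nd
# machine and interrupting the matching animation instance on the 1st machine.
class ReplacementNode:
    def __init__(self, path):
        self.path, self.state = path, "activated"

class SecondMachine:
    def __init__(self, node, notify_first_machine):
        self.node = node
        self.notify_first_machine = notify_first_machine

    def on_suspend(self):
        self.node.state = "standby"           # activated -> standby
        # Tell the 1st machine to stop the instance that was monitoring
        # this action's state changes.
        self.notify_first_machine({"interrupt_instance_for": self.node.path})

running_instances = {"Replace_Jump": "hero/jump instance"}

def first_machine_inbox(msg):
    stopped = running_instances.pop(msg["interrupt_instance_for"], None)
    print("stopped:", stopped)

m = SecondMachine(ReplacementNode("Replace_Jump"), first_machine_inbox)
m.on_suspend()   # -> stopped: hero/jump instance; node is now 'standby'
```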
12. An animation data processing apparatus, characterized in that the apparatus comprises:
an interaction information acquisition module, configured to acquire target interaction information, wherein the target interaction information carries an animation object identifier corresponding to a target animation object;
a configuration information acquisition module, configured to acquire target animation configuration information corresponding to the animation object identifier, wherein the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object;
a target action determining module, configured to determine a target action corresponding to the target animation object based on the target interaction information and the target animation configuration information, wherein the target action has a corresponding target action type identifier;
and an action animation generation module, configured to acquire target action data corresponding to the target action and target node information of a target animation replacement node corresponding to the target action type identifier, and send the target action data and the target node information to a second animation state machine, so that the second animation state machine activates the target animation replacement node based on the target node information, acquires target object data corresponding to the target animation object, and loads the target action data and the target object data through the target animation replacement node to obtain a target action animation corresponding to the target animation object.
13. An animation data processing apparatus, characterized in that the apparatus comprises:
an information receiving module, configured to receive target action data and target node information sent by a first animation state machine, wherein the target action data is action data corresponding to a target action of a target animation object, the target action is determined by the first animation state machine based on target interaction information carrying an animation object identifier corresponding to the target animation object and target animation configuration information corresponding to the animation object identifier, the target animation configuration information is used for configuring animation playing information of at least one action corresponding to the target animation object, the target node information is node information of a target animation replacement node, and the target animation replacement node is determined based on a target action type identifier corresponding to the target action;
a node activation module, configured to activate the target animation replacement node based on the target node information;
an information acquisition module, configured to acquire target object data corresponding to the target animation object;
and an animation generation module, configured to load the target action data and the target object data through the target animation replacement node to obtain a target action animation corresponding to the target animation object.
14. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6 or 7 to 11.
15. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6 or 7 to 11.
CN202110631858.8A 2021-06-07 2021-06-07 Animation data processing method, device, computer equipment and storage medium Active CN113379590B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110631858.8A CN113379590B (en) 2021-06-07 2021-06-07 Animation data processing method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113379590A true CN113379590A (en) 2021-09-10
CN113379590B CN113379590B (en) 2023-06-30

Family

ID=77575986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110631858.8A Active CN113379590B (en) 2021-06-07 2021-06-07 Animation data processing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113379590B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9463386B1 (en) * 2011-11-08 2016-10-11 Zynga Inc. State machine scripting in computer-implemented games
CN103927777A (en) * 2014-04-03 2014-07-16 北京星航机电装备有限公司 Organization and control method of three-dimensional animation process based on Mealy finite state automatas
CN105656688A (en) * 2016-03-03 2016-06-08 腾讯科技(深圳)有限公司 State control method and device
CN107180444A (en) * 2017-05-11 2017-09-19 腾讯科技(深圳)有限公司 A kind of animation producing method, device, terminal and system
CN108650217A (en) * 2018-03-21 2018-10-12 腾讯科技(深圳)有限公司 Synchronous method, device, storage medium and the electronic device of action state
CN109731334A (en) * 2018-11-22 2019-05-10 腾讯科技(深圳)有限公司 Switching method and apparatus, storage medium, the electronic device of state
CN110413758A (en) * 2019-07-30 2019-11-05 中国工商银行股份有限公司 Dialog box framework construction method and device based on machine learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YPTIANMA: "Divide and conquer: an architecture for AI and animation systems" (in Chinese), HTTPS://BLOG.CSDN.NET/YPTIANMA/ARTICLE/DETAILS/103268517, pages 1-8 *

Also Published As

Publication number Publication date
CN113379590B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
US11135515B2 (en) Information processing method and apparatus and server
CN105976417B (en) Animation generation method and device
US11511196B2 (en) Predictive data preloading
US10792566B1 (en) System for streaming content within a game application environment
CN103889524B (en) Computer readable recording medium storing program for performing, data structure, netscape messaging server Netscape and the information processing terminal of information processing system, information processing method, message handling program, storing information processing program
CN111880877B (en) Animation switching method, device, equipment and storage medium
CN113379876A (en) Animation data processing method, animation data processing device, computer equipment and storage medium
CN111544889B (en) Behavior control method and device of virtual object and storage medium
US11080916B1 (en) Character morphing system
US20240261674A1 (en) State stream game engine
CN114404987A (en) Virtual object control method, device, equipment, medium and program product
US11645805B2 (en) Animated faces using texture manipulation
US11969649B2 (en) Prominent display of targeted game in search results
US20240198222A1 (en) Replay editor in video games
CN113379590B (en) Animation data processing method, device, computer equipment and storage medium
WO2023134276A1 (en) Resource preloading method and apparatus, storage medium, and computer device
CN113018861B (en) Virtual character display method and device, computer equipment and storage medium
CN114768260A (en) Data processing method and device for virtual character in game and electronic equipment
CN116843802A (en) Virtual image processing method and related product
CN116561439A (en) Social interaction method, device, equipment, storage medium and program product
US20240329979A1 (en) Version agnostic centralized state management in video games
WO2024179194A1 (en) Virtual object generation method and apparatus, and device and storage medium
McQuillan A survey of behaviour trees and their applications for game AI
CN118662890A (en) Method, device, terminal and storage medium for displaying object animation of virtual object

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40052793)
GR01 Patent grant