CN112560962B - Gesture matching method and device for bone animation, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112560962B
CN112560962B (application CN202011503059.4A)
Authority
CN
China
Prior art keywords
node
frame data
nodes
frame
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011503059.4A
Other languages
Chinese (zh)
Other versions
CN112560962A (en)
Inventor
周兵
肖翔
吴闯
庄放望
张宏龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and MIGU Culture Technology Co Ltd
Priority to CN202011503059.4A
Publication of CN112560962A
Application granted
Publication of CN112560962B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/22 Pattern recognition; analysing; matching criteria, e.g. proximity measures
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T7/20 Image analysis; analysis of motion
    • G06T7/285 Analysis of motion using a sequence of stereo image pairs
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention discloses a gesture matching method and device for skeletal animation, an electronic device and a storage medium. The method comprises the following steps: receiving any frame of data in the skeletal animation; obtaining candidate matching nodes corresponding to the frame data from a pre-built motion map model, wherein the motion map model comprises a plurality of nodes connected by directed edges according to transitionable conditions, and the plurality of nodes respectively represent a plurality of motion frames in a motion map sequence; obtaining the similarity between the frame data and each of the candidate matching nodes; and obtaining, from the candidate matching nodes, the node matching the frame data based on the similarity between the frame data and each node. The embodiment of the invention can ensure the accuracy and efficiency of gesture matching for skeletal animation and improve the smoothness of the transitions between the gestures of successive frames.

Description

Gesture matching method and device for bone animation, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of computer graphics processing, in particular to a gesture matching method and device for bone animation, electronic equipment and a storage medium.
Background
Converting skeletal 3D key points into rotation matrices for the corresponding joints is a key step in driving the motion of a virtual human. At present, inverse kinematics algorithms are generally used: specific constraints are applied to each joint, and the rotation information of each joint is obtained by iterative solving. However, different joints require different constraints, and different algorithms differ in running time and solution targets, so some motions may have no solution at all. Moreover, because the human body has a complex structure, the motion of one part may involve linkage effects with other joints, making the constraints difficult to specify accurately, so the matching accuracy is poor. Deep learning approaches reconstruct the rotation data of future frames from the 3D key points and historical frame information by learning from a large amount of data, but the required amount of data is large and errors tend to accumulate, so the prediction of subsequent frames becomes worse and worse.
Disclosure of Invention
Based on the problems existing in the prior art, the embodiment of the invention provides a gesture matching method and device for bone animation, electronic equipment and a storage medium.
In a first aspect, an embodiment of the present invention provides a gesture matching method for skeletal animation, including:
receiving any frame of data in the skeletal animation;
obtaining candidate matching nodes corresponding to the frame data from a pre-built motion map model, wherein the motion map model comprises a plurality of nodes connected by directed edges according to transitionable conditions, and the plurality of nodes respectively represent a plurality of motion frames in a motion map sequence;
obtaining the similarity between the frame data and each of the candidate matching nodes;
and obtaining, from the candidate matching nodes, the node matching the frame data based on the similarity between the frame data and each node.
Further, when the arbitrary frame data in the skeletal animation is the first frame data, obtaining the candidate matching nodes corresponding to the frame data from the pre-built motion map model includes:
classifying the plurality of nodes in the motion map model to obtain a plurality of groups;
and taking the nodes in the group matched with the first frame data as the candidate matching nodes of the first frame data.
Further, when the arbitrary frame data in the skeletal animation is frame data other than the first frame data, obtaining the candidate matching nodes corresponding to the frame data from the pre-built motion map model includes:
obtaining the matching node of the frame data preceding the current frame data;
and taking the nodes connected with the matching node of the preceding frame data as the candidate matching nodes of the current frame data.
Further, obtaining the similarity between the arbitrary frame data and each of the candidate matching nodes includes:
obtaining the triplet features corresponding to the frame data and to each of the candidate matching nodes;
obtaining, based on those triplet features, the cosine distance between the frame data and each of the candidate matching nodes;
and obtaining the similarity between the frame data and each of the candidate matching nodes from the corresponding cosine distance.
Further, the triplet feature of any node includes the movement speed and movement direction of the nodes preceding and following that node, and the included angle of the bones associated with that node.
Further, before obtaining the candidate matching nodes corresponding to the arbitrary frame data from the pre-built motion map model, the method further comprises constructing the motion map model, specifically:
acquiring a motion map sequence;
mapping each motion frame in the motion map sequence to a node;
screening the nodes mapped from the motion frames in the motion map sequence;
and constructing the motion map model based on the screened mapping relationship between the motion frames and the nodes.
Further, screening the nodes mapped from the motion frames in the motion map sequence includes:
obtaining, based on the triplet feature of each node, the cosine distance between that node and the other nodes;
obtaining, based on the cosine distance, the similarity between that node and the other nodes;
and screening the nodes mapped from the motion frames based on the similarity between each node and the other nodes.
In a second aspect, an embodiment of the present invention provides a gesture matching apparatus for skeletal animation, including:
the receiving module is used for receiving any frame of data in the bone animation;
and the acquisition module is used for obtaining candidate matching nodes corresponding to the frame data from a pre-built motion map model, wherein the motion map model comprises a plurality of nodes connected by directed edges according to transitionable conditions, and the nodes respectively represent a plurality of motion frames in a motion map sequence;
The similarity calculation module is used for obtaining the similarity between any frame data and each node in the candidate matching nodes;
and the matching module is used for obtaining the matching node with any frame data from the candidate matching nodes based on the similarity between any frame data and each node.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and capable of running on the processor, where the processor implements the gesture matching method of the skeletal animation according to the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present invention also provide a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the pose matching method of skeletal animation according to the first aspect.
According to the above technical solutions, the gesture matching method and device, electronic device and storage medium for skeletal animation provided by the embodiments of the present invention obtain candidate matching nodes corresponding to any frame of data from a motion map model, and then obtain the matching node for that frame from the candidates based on the similarity between the frame data and each node. This ensures both the accuracy and the efficiency of gesture matching, and promotes smooth transitions between the gestures of successive frames of the skeletal animation.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for matching the pose of a skeletal animation provided by an embodiment of the present invention;
FIG. 2 is a block diagram of a skeletal animation gesture matching device according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following describes the embodiments of the present invention further with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
The following describes a gesture matching method, a gesture matching device, an electronic device and a storage medium of a bone animation according to an embodiment of the invention with reference to the accompanying drawings.
FIG. 1 is a flow chart of a method for matching the pose of a skeletal animation according to an embodiment of the present invention. As shown in fig. 1, the gesture matching method for bone animation provided by an embodiment of the present invention specifically includes the following steps:
s101: any frame of data in the skeletal animation is received.
The skeletal animation is, for example, a skeletal animation of human joints; accordingly, each frame of data in the skeletal animation is also referred to as 3D data of the human joints.
S102: candidate matching nodes corresponding to any frame data are obtained from a pre-obtained motion map model, wherein the motion map model comprises a plurality of nodes connected through directed edges according to an excessively-available condition, and the plurality of nodes respectively represent a plurality of motion frames in a motion map sequence.
It may be appreciated that, before obtaining candidate matching nodes corresponding to any frame data, the motion map model may be pre-constructed, which specifically includes: acquiring a motion map sequence; mapping each motion frame in the motion map sequence to a node; screening the nodes mapped from the motion frames in the motion map sequence; and constructing the motion map model based on the screened mapping relationship between the motion frames and the nodes.
Further, screening the node mapped by each motion frame in the motion map sequence includes: based on the triplet characteristics of each node, cosine distances between each node and other nodes are obtained; based on the cosine distance, obtaining the similarity between each node and other nodes; and screening the nodes mapped by each motion frame in the motion map sequence based on the similarity between each node and other nodes.
The process of constructing the motion map model is described with a specific example. A motion map is typically used in the character control system of a game engine to control the motion of virtual characters through input devices; the richness of the motion map affects the flexibility of the characters, and the connectivity algorithm of the map determines the smoothness of the motion. In the embodiment of the invention, a motion map is therefore constructed that takes the coordinates of each frame of human-joint 3D data in the skeletal animation as input, and the flexibility and smooth-continuity properties of the motion map ensure that the matched frame sequence gives good results. Dimension reduction can be performed in advance on each frame of human-joint 3D data in the skeletal animation: each frame is mapped into a low-dimensional space (i.e., a target space), so that each point in the low-dimensional space is a node representing one frame of human-joint 3D data; that is, each frame of human-joint 3D data maps to one coordinate (i.e., one node) in the low-dimensional space. Similarly, the nodes in the motion map model may be obtained by performing the same dimension reduction on the motion frames in the motion map sequence in advance, mapping them into the low-dimensional space.
Specifically, the flow of the construction of the motion map model is as follows:
Firstly, each motion frame in the motion map sequence is mapped to a node, and the movement speed and movement direction of the history frame (i.e., the previous frame) and of the future frame (i.e., the next frame) along the node's direction of movement, together with the included angle of the two bones associated with each joint of the node, are calculated to obtain the triplet vector of the node's features, as follows:
F_i = (V_{i-1}, D_{i-1}, A_i, V_{i+1}, D_{i+1});
where F_i denotes the triplet vector stored by the i-th node, i-1 denotes the history frame of the i-th frame, i+1 denotes the future frame of the i-th frame, V denotes the movement speed, D denotes the movement direction, and A denotes the angle of each joint.
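As an illustrative sketch of such a feature tuple (using root-position displacement for the speed and direction, and explicit parent/joint/child positions for the bone angle, are assumptions made here for concreteness; the patent does not fix these details):

```python
import math

def speed_and_direction(prev_pos, cur_pos, dt=1.0):
    """Movement speed and unit direction of a point between two frames."""
    delta = [c - p for c, p in zip(cur_pos, prev_pos)]
    dist = math.sqrt(sum(d * d for d in delta))
    direction = [d / dist for d in delta] if dist > 0 else [0.0, 0.0, 0.0]
    return dist / dt, direction

def joint_angle(parent, joint, child):
    """Included angle (radians) between the two bones meeting at `joint`."""
    u = [a - b for a, b in zip(parent, joint)]
    v = [a - b for a, b in zip(child, joint)]
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    cos_a = sum(a * b for a, b in zip(u, v)) / (nu * nv)
    return math.acos(max(-1.0, min(1.0, cos_a)))  # clamp for float error

def triplet_feature(prev_root, cur_root, next_root, angles):
    """F_i = (V_{i-1}, D_{i-1}, A_i, V_{i+1}, D_{i+1}) flattened to one vector."""
    v_prev, d_prev = speed_and_direction(prev_root, cur_root)
    v_next, d_next = speed_and_direction(cur_root, next_root)
    return [v_prev, *d_prev, *angles, v_next, *d_next]
```

Flattening the tuple into a single vector makes the cosine-distance comparison below straightforward.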
Based on the triplet vectors, the similarity of all frames is calculated using the cosine distance; the similarity between the i-th and j-th nodes is calculated as:
S_{ij} = cos(F_i, F_j);
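A minimal cosine-similarity helper consistent with this formula might look like (the zero-vector fallback is an assumption, since the formula is undefined there):

```python
import math

def cosine_similarity(f_i, f_j):
    """S_ij = cos(F_i, F_j): cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(f_i, f_j))
    ni = math.sqrt(sum(a * a for a in f_i))
    nj = math.sqrt(sum(b * b for b in f_j))
    # By convention here, a zero-length vector has similarity 0 to everything.
    return dot / (ni * nj) if ni > 0 and nj > 0 else 0.0
```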
the method comprises the following steps: two thresholds are set: the nodes are screened through the similar threshold value and the transitive threshold value, and the flow is as follows: 1. calculating the similarity of a triplet vector formed by the future moving direction and the included angle of the current frame and the historical moving direction and the included angle of the rest frames, and counting the transitionable conditions of the rest all frames for each frame by using a transitionable threshold, namely: the output degree of each node; 2. grouping all frames by using a similarity threshold, wherein the similarity of all frames in each group is extremely high, counting the output of each group of nodes, reserving the node with the maximum output, and removing the rest nodes; 3. if the output degree of frames represented by some nodes is smaller than a certain threshold value, the frames are also rejected. Thus, refinement and connectivity of the motion map model can be ensured.
Finally, the remaining nodes are connected by directed edges according to the transitionable conditions, forming the motion map model.
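The screening-and-connection flow can be sketched as follows (the threshold values, the tie-breaking rule within a group, and the `sim` callback are illustrative assumptions, not the patent's exact parameters):

```python
def build_motion_graph(features, sim, transition_thresh=0.95,
                       similar_thresh=0.99, min_out_degree=1):
    """Prune near-duplicate nodes and connect the rest with directed edges.

    features: per-node feature vectors; sim(f_i, f_j) -> similarity in [-1, 1].
    Returns a dict: surviving node index -> set of successor node indices.
    """
    n = len(features)
    edges = {i: set() for i in range(n)}
    out_deg = [0] * n
    # 1. Out-degree: how many other frames each frame can transition to.
    for i in range(n):
        for j in range(n):
            if i != j and sim(features[i], features[j]) >= transition_thresh:
                edges[i].add(j)
                out_deg[i] += 1
    # 2. Group highly similar nodes; keep only the member with max out-degree.
    kept, removed = set(), set()
    for i in range(n):
        if i in removed or i in kept:
            continue
        group = [i] + [j for j in range(i + 1, n)
                       if j not in removed and j not in kept
                       and sim(features[i], features[j]) >= similar_thresh]
        best = max(group, key=lambda k: out_deg[k])
        kept.add(best)
        removed.update(k for k in group if k != best)
    # 3. Drop low out-degree nodes, then keep only edges among survivors.
    survivors = sorted(k for k in kept if out_deg[k] >= min_out_degree)
    keep_set = set(survivors)
    return {i: edges[i] & keep_set for i in survivors}
```

Restricting the final edge sets to surviving nodes is what preserves the graph's connectivity after pruning.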
After the motion map model is built, candidate matching nodes corresponding to any frame data are obtained from it. Two cases exist: the frame data is the first frame of the skeletal animation, or it is a frame other than the first, and the candidate matching nodes are obtained differently in each case. When the frame data is the first frame, obtaining its candidate matching nodes from the pre-built motion map model comprises: classifying the plurality of nodes in the motion map model into a plurality of groups, and taking the nodes in the group matched with the first frame data as its candidate matching nodes. When the frame data is a frame other than the first, obtaining its candidate matching nodes comprises: obtaining the matching node of the preceding frame data, and taking the nodes connected to that matching node as the candidate matching nodes of the current frame.
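The two candidate-selection cases could be sketched as follows (the dict-based graph and group representations and the coarse action labels such as "walk" are illustrative assumptions):

```python
def candidate_nodes(frame_index, graph, groups, action_label, prev_match=None):
    """Select candidate matching nodes for one frame of skeletal data.

    graph: node -> set of successor nodes reachable via a directed edge.
    groups: coarse action label (e.g. "walk", "run") -> nodes in that group.
    """
    if frame_index == 0:
        # First frame: no history, so search only the group whose coarse
        # action class matches the incoming pose.
        return set(groups.get(action_label, ()))
    # Later frames: only nodes reachable from the previous frame's matching
    # node can preserve a smooth transition.
    return set(graph.get(prev_match, ()))
```

Restricting later frames to the successors of the previous match is what keeps the search cheap and the resulting pose sequence continuous.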
S103: and obtaining the similarity between any frame data and each node in the candidate matching nodes.
In a specific example, obtaining the similarity between the frame data and each of the candidate matching nodes includes: obtaining the triplet features corresponding to the frame data and to each candidate matching node; obtaining, based on those triplet features, the cosine distance between the frame data and each candidate matching node; and obtaining the similarity between the frame data and each candidate matching node from the corresponding cosine distance. The triplet feature of any node comprises the movement speed and movement direction of the node's preceding and following nodes, and the included angle of the bones associated with the node.
S104: and obtaining the matching node with any frame data from the candidate matching nodes based on the similarity between any frame data and each node.
Specifically, in steps S102 to S104, the first frame of human-body 3D key-point data (i.e., the first frame of human-joint 3D data in the skeletal animation) is matched as follows: the current action is classified within the motion map model, and the cosine similarity is calculated between the first frame of human-joint 3D data and the joint-angle components of the triplet features of the nodes representing similar motions in the motion map.
In principle, the first frame of human-joint 3D data would have to be matched against all nodes in the motion map, but there may be very many nodes, so the current motion is first coarsely classified, e.g., whether it is walking, running, jumping, or another initial pose; because the data set is acquired in real time, it is also possible to label manually which nodes are walking and which are running. This classification greatly reduces the search time for the first frame. Since the first frame has no history frame information (i.e., no previous frame), only the cosine similarity between the first frame of human-joint 3D data and the joint-angle components of the triplet features of the nodes representing similar motions needs to be calculated.
The remaining frames of human-body 3D key-point data (i.e., frames other than the first) are matched as follows: the node to which the history frame data of the current frame belongs is determined, and its connectable nodes are taken as the candidate matching nodes of the current frame. The node with the highest similarity, measured by cosine distance, is then found among the candidate matching nodes; this is the matching node of the current frame.
The directed edges of the motion map model also ensure the continuity of the transitions: once the node to which the history frame data of the current frame belongs is determined, its connectable nodes can serve as the candidate matching nodes of the current frame. Because the connections in the motion map are found by searching connectable information with future information, a delayed matching method can be adopted after new human-body 3D key-point data is obtained: matching is delayed by several frames, the movement directions over those frames are calculated, and a moving-average method is used to eliminate jitter problems such as motion reversal caused by noise. At this point both the history and the future movement direction and speed of the current frame lie on a smoothed result. The triplet feature vector formed by the smoothed speed and direction and the joint included angle of the current frame is then used to find, by cosine distance, the node with the highest similarity among the candidate transition nodes; this is the matching node of the current frame. Note that the candidate transition nodes mentioned above are the nodes reached by the directed connections of the node determined for the previous frame data.
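The delayed-matching idea, buffering a few future frames and smoothing per-frame movement directions with a moving average before matching, can be sketched as follows (the window length and the buffering scheme are illustrative assumptions):

```python
from collections import deque

def moving_average(vectors):
    """Component-wise mean of a window of direction vectors."""
    n = len(vectors)
    return [sum(v[k] for v in vectors) / n for k in range(len(vectors[0]))]

class DelayedMatcher:
    """Delay matching by `delay` frames so noisy directions can be smoothed."""

    def __init__(self, delay=3):
        self.delay = delay
        self.buffer = deque()

    def push(self, direction):
        """Feed one per-frame movement direction vector.

        Returns a smoothed direction once enough future frames have
        arrived to fill the window, otherwise None (still buffering).
        """
        self.buffer.append(direction)
        if len(self.buffer) < self.delay:
            return None
        smoothed = moving_average(list(self.buffer))
        self.buffer.popleft()  # slide the window forward one frame
        return smoothed
```

The smoothed direction would then replace the raw one inside the triplet feature before the cosine-distance search over the candidate transition nodes.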
After all the nodes matching the human-body 3D key-point data have been found in the motion map model, the matched poses of all the human-body 3D key-point data in the skeletal animation are obtained. Because the motion map model is constructed from the original motion-capture data, which records the 3D joints and the corresponding rotation information, each node stores a mapping from one frame of 3D joints to its rotation data.
According to the gesture matching method for skeletal animation provided by the embodiment of the present invention, candidate matching nodes corresponding to any frame of data are obtained from the motion map model, and the matching node for that frame is obtained from the candidates based on the similarity between the frame data and each node. This ensures the accuracy and efficiency of gesture matching and promotes smooth transitions between the gestures of successive frames of the skeletal animation.
Fig. 2 is a schematic structural diagram of a gesture matching device for bone animation according to an embodiment of the present invention, and as shown in fig. 2, the gesture matching device for bone animation according to an embodiment of the present invention includes: a receiving module 210, an obtaining module 220, a similarity calculating module 230 and a matching module 240, wherein:
a receiving module 210, configured to receive any frame data in a skeletal animation;
and the obtaining module 220 is configured to obtain candidate matching nodes corresponding to the frame data from a pre-built motion map model, where the motion map model includes a plurality of nodes connected by directed edges according to transitionable conditions, and the plurality of nodes respectively represent a plurality of motion frames in a motion map sequence;
A similarity calculation module 230, configured to obtain a similarity between the arbitrary frame data and each node in the candidate matching nodes;
and a matching module 240, configured to obtain a matching node with the arbitrary frame data from the candidate matching nodes based on the similarity between the arbitrary frame data and each node.
According to the gesture matching device for skeletal animation provided by the embodiment of the present invention, candidate matching nodes corresponding to any frame of data are obtained from the motion map model, and the matching node for that frame is obtained from the candidates based on the similarity between the frame data and each node. This ensures the accuracy and efficiency of gesture matching and promotes smooth transitions between the gestures of successive frames of the skeletal animation.
It should be noted that the specific implementation of the gesture matching device for skeletal animation according to the embodiment of the present invention is similar to that of the gesture matching method described above; please refer to the description of the method section. To reduce redundancy, details are not repeated here.
Based on the same inventive concept, a further embodiment of the present invention provides an electronic device, see fig. 3, comprising in particular: a processor 301, a memory 302, a communication interface 303, and a communication bus 304;
wherein, the processor 301, the memory 302, and the communication interface 303 complete communication with each other through the communication bus 304; the communication interface 303 is used for realizing information transmission between devices;
the processor 301 is configured to invoke the computer program in the memory 302; when the processor executes the computer program, all the steps of the above gesture matching method for skeletal animation are implemented, for example: receiving any frame of data in the skeletal animation; obtaining candidate matching nodes corresponding to the frame data from a pre-built motion map model, wherein the motion map model comprises a plurality of nodes connected by directed edges according to transitionable conditions, and the plurality of nodes respectively represent a plurality of motion frames in a motion map sequence; obtaining the similarity between the frame data and each of the candidate matching nodes; and obtaining, from the candidate matching nodes, the node matching the frame data based on the similarity between the frame data and each node.
Based on the same inventive concept, a further embodiment of the present invention provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements all the steps of the above gesture matching method for skeletal animation, for example: receiving any frame of data in the skeletal animation; obtaining candidate matching nodes corresponding to the frame data from a pre-built motion map model, wherein the motion map model comprises a plurality of nodes connected by directed edges according to transitionable conditions, and the plurality of nodes respectively represent a plurality of motion frames in a motion map sequence; obtaining the similarity between the frame data and each of the candidate matching nodes; and obtaining, from the candidate matching nodes, the node matching the frame data based on the similarity between the frame data and each node.
Further, the logic instructions in the memory described above may be implemented in the form of software functional units and, when sold or used as a stand-alone product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiments of the present invention. Those of ordinary skill in the art can understand and implement them without undue effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or, of course, by hardware. Based on such understanding, the foregoing technical solution, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or in some parts thereof.
Furthermore, in the present disclosure, terms such as "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "plurality" means at least two, for example two or three, unless specifically defined otherwise.
Moreover, in the present invention, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising one … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Furthermore, in the description herein, reference to the terms "one embodiment," "some embodiments," "example," "specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Those skilled in the art may also combine the different embodiments or examples described in this specification, and the features thereof, provided they do not contradict each other.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (5)

1. A pose matching method for skeletal animation, comprising:
receiving any frame of data in the skeletal animation;
obtaining candidate matching nodes corresponding to any frame data from a pre-obtained motion map model, wherein the motion map model comprises a plurality of nodes connected through directed edges according to transitionable conditions, and the plurality of nodes respectively represent a plurality of motion frames in a motion map sequence;
obtaining the similarity between any frame data and each node in the candidate matching nodes;
obtaining a matching node for the any frame data from the candidate matching nodes based on the similarity between the any frame data and each node;
before obtaining the candidate matching node corresponding to any frame data from the pre-obtained motion map model, the method further comprises the step of constructing the motion map model, and specifically comprises the following steps:
acquiring a motion map sequence;
mapping each motion frame in the motion map sequence into a node;
based on the triplet characteristics of each node, cosine distances between each node and other nodes are obtained; the triplet characteristic for any node comprises the moving speed and moving direction of the previous node and the next node of the any node and the included angle of bones associated with the any node;
based on the cosine distance, obtaining the similarity between each node and other nodes;
screening the nodes mapped by each motion frame in the motion map sequence based on the similarity between each node and other nodes;
constructing the motion map model based on the screened mapping relation between each motion frame and the node;
when any frame data in the skeletal animation is the first frame data, the step of obtaining candidate matching nodes corresponding to the any frame data from a pre-obtained motion map model comprises the following steps:
classifying the plurality of nodes in the motion map model to obtain a plurality of groups;
taking the nodes in the group matching the first frame data as the candidate matching nodes of the first frame data;
when any frame data in the skeletal animation is frame data other than the first frame data, the step of obtaining candidate matching nodes corresponding to the any frame data from a pre-obtained motion map model comprises the following steps:
obtaining a matching node of the previous frame data of the frame data other than the first frame data;
and taking the nodes connected with the matching node of the previous frame data as the candidate matching nodes of the frame data other than the first frame data.
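The motion-map construction recited in claim 1 (mapping frames to nodes, screening similar nodes, connecting transitionable nodes by directed edges) can be sketched as follows. This is an illustrative sketch under assumptions, not the claimed construction: the triplet features are taken as pre-computed flat vectors, the screening rule merges any frame into an existing node above a similarity threshold, and the "transitionable" condition used here is simply adjacency in the source sequence.

```python
import numpy as np

def build_motion_graph(frames, sim_threshold=0.98):
    """Construct a motion-map model from a sequence of per-frame feature vectors.

    frames: list of triplet feature vectors (e.g. neighbour speed/direction and
    bone angles flattened into one vector).  Returns (node_feats, edges) where
    node_feats[i] is the representative feature of node i and edges maps each
    node id to the set of node ids it can transition to.
    """
    def cos_sim(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    node_of = []      # frame index -> node id after screening
    node_feats = []   # node id -> representative feature vector
    for f in frames:
        # Screening: reuse an existing node if it is similar enough.
        for nid, feat in enumerate(node_feats):
            if cos_sim(f, feat) >= sim_threshold:
                node_of.append(nid)
                break
        else:
            node_of.append(len(node_feats))
            node_feats.append(np.asarray(f, dtype=float))

    # Directed edges between the nodes of consecutive frames.
    edges = {nid: set() for nid in range(len(node_feats))}
    for a, b in zip(node_of, node_of[1:]):
        if a != b:
            edges[a].add(b)
    return node_feats, edges
```

With three frames where the first two are nearly identical, the screening step merges them into one node and a single directed edge leads to the third frame's node.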
2. The pose matching method for skeletal animation according to claim 1, wherein the obtaining the similarity between the any frame data and each node in the candidate matching nodes comprises:
obtaining corresponding triplet characteristics of any frame data and each node in the candidate matching nodes;
based on the corresponding triplet characteristics of any frame data and each node in the candidate matching nodes, obtaining cosine distances between any frame data and each node in the candidate matching nodes;
and obtaining the similarity between any frame data and each node in the candidate matching nodes according to the cosine distance between any frame data and each node in the candidate matching nodes.
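The triplet feature and cosine-distance computation of claim 2 can be illustrated as below. The exact composition of the triplet feature is only described qualitatively in the claims (neighbour speed and direction, plus bone angles), so the layout, the time step `dt`, and the function names here are assumptions for demonstration.

```python
import numpy as np

def triplet_feature(prev_pos, next_pos, dt, bone_angles):
    """Illustrative triplet feature: speed and direction derived from the
    positions of the previous and next frames, concatenated with the included
    angles of the associated bones.  prev_pos/next_pos are 3D positions; dt is
    the frame interval in seconds."""
    delta = np.asarray(next_pos, dtype=float) - np.asarray(prev_pos, dtype=float)
    speed = np.linalg.norm(delta) / (2.0 * dt)          # central difference
    direction = delta / (np.linalg.norm(delta) + 1e-9)  # unit direction vector
    return np.concatenate(([speed], direction, np.asarray(bone_angles, dtype=float)))

def cosine_similarity(f1, f2):
    """Similarity in [-1, 1] between two triplet features; the claim derives
    the similarity from the cosine distance between the feature vectors."""
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))
```

For a frame whose neighbours are one unit apart at 30 fps, the central-difference speed is 15 units per second, and a feature is maximally similar to itself.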
3. A pose matching device for skeletal animation, comprising:
the receiving module is used for receiving any frame of data in the skeletal animation;
the acquisition module is used for acquiring candidate matching nodes corresponding to any frame data from a pre-obtained motion map model, wherein the motion map model comprises a plurality of nodes connected through directed edges according to transitionable conditions, and the nodes respectively represent a plurality of motion frames in a motion map sequence;
the similarity calculation module is used for obtaining the similarity between any frame data and each node in the candidate matching nodes;
the matching module is used for obtaining a matching node for the any frame data from the candidate matching nodes based on the similarity between the any frame data and each node;
the motion picture sequence acquisition module is used for acquiring a motion picture sequence;
the node mapping module is used for mapping each motion frame in the motion map sequence into a node;
the cosine distance determining module is used for obtaining cosine distances between each node and other nodes based on the triplet characteristics of each node; the triplet characteristic for any node comprises the moving speed and moving direction of the previous node and the next node of the any node and the included angle of bones associated with the any node;
the node similarity determining module is used for obtaining the similarity between each node and other nodes based on the cosine distance;
the node screening module is used for screening the nodes mapped by each motion frame in the motion map sequence based on the similarity between each node and other nodes;
the motion map model construction module is used for constructing the motion map model based on the screened mapping relation between each motion frame and the node;
when any frame of data in the skeletal animation is the first frame of data, the acquisition module comprises:
the node classification unit is used for classifying the plurality of nodes in the motion map model to obtain a plurality of groups;
a first candidate matching node determining unit configured to take the nodes in the group matching the first frame data as the candidate matching nodes of the first frame data;
when any frame data in the skeletal animation is frame data other than the first frame data, the acquiring module further includes:
a matching node acquisition unit configured to acquire a matching node of the previous frame data of the frame data other than the first frame data;
and a second candidate matching node determination unit configured to use a node connected to a matching node of the previous frame data as a candidate matching node of the frame data other than the first frame data.
4. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the pose matching method for skeletal animation according to claim 1 or 2.
5. A non-transitory computer-readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements the pose matching method for skeletal animation according to claim 1 or 2.
CN202011503059.4A 2020-12-17 2020-12-17 Gesture matching method and device for bone animation, electronic equipment and storage medium Active CN112560962B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011503059.4A CN112560962B (en) 2020-12-17 2020-12-17 Gesture matching method and device for bone animation, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112560962A CN112560962A (en) 2021-03-26
CN112560962B true CN112560962B (en) 2024-03-22

Family

ID=75063496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011503059.4A Active CN112560962B (en) 2020-12-17 2020-12-17 Gesture matching method and device for bone animation, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112560962B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313794B (en) * 2021-05-19 2022-11-08 深圳市慧鲤科技有限公司 Animation migration method and device, equipment and storage medium
CN113822972B (en) * 2021-11-19 2022-05-27 阿里巴巴达摩院(杭州)科技有限公司 Video-based processing method, device and readable medium
CN116310012B (en) * 2023-05-25 2023-07-25 成都索贝数码科技股份有限公司 Video-based three-dimensional digital human gesture driving method, device and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104484034A (en) * 2014-11-27 2015-04-01 韩慧健 Gesture motion element transition frame positioning method based on gesture recognition
WO2019109729A1 (en) * 2017-12-08 2019-06-13 华为技术有限公司 Bone posture determining method and device, and computer readable storage medium
CN110020633A (en) * 2019-04-12 2019-07-16 腾讯科技(深圳)有限公司 Training method, image-recognizing method and the device of gesture recognition model
CN110310350A (en) * 2019-06-24 2019-10-08 清华大学 Action prediction generation method and device based on animation
CN111260764A (en) * 2020-02-04 2020-06-09 腾讯科技(深圳)有限公司 Method, device and storage medium for making animation
CN111292401A (en) * 2020-01-15 2020-06-16 腾讯科技(深圳)有限公司 Animation processing method and device, computer storage medium and electronic equipment


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A method for human motion skeleton extraction and automatic animation generation; 吴伟和; 郝爱民; 赵永涛; 万巧慧; 李帅; Journal of Computer Research and Development (Issue 07); full text *
Keyframe-based 3D human motion retrieval; 潘红; 肖俊; 吴飞; 郭同强; Journal of Computer-Aided Design & Computer Graphics (Issue 02); full text *
Automatic heterogeneous motion retargeting driven by semantic intermediate skeletons; 谢文军; 陆劲挺; 刘晓平; Journal of Computer-Aided Design & Computer Graphics (Issue 05); full text *


Similar Documents

Publication Publication Date Title
CN112560962B (en) Gesture matching method and device for bone animation, electronic equipment and storage medium
CN109344899B (en) Multi-target detection method and device and electronic equipment
Kowalczuk et al. Real-time stereo matching on CUDA using an iterative refinement method for adaptive support-weight correspondences
CN110363817B (en) Target pose estimation method, electronic device, and medium
CN111401106B (en) Behavior identification method, device and equipment
US20220398845A1 (en) Method and device for selecting keyframe based on motion state
CN112651345B (en) Human body posture recognition model optimization method and device and terminal equipment
CN112651997A (en) Map construction method, electronic device, and storage medium
CN109447006A (en) Image processing method, device, equipment and storage medium
CN113920109A (en) Medical image recognition model training method, recognition method, device and equipment
CN110009663A (en) A kind of method for tracking target, device, equipment and computer readable storage medium
Bors et al. Object classification in 3-D images using alpha-trimmed mean radial basis function network
US20200036961A1 (en) Constructing a user's face model using particle filters
CN110135428A (en) Image segmentation processing method and device
KR102237124B1 (en) Motion Retargetting Method and Apparatus based on Neural Network
CN114792401A (en) Training method, device and equipment of behavior recognition model and storage medium
CN111797714A (en) Multi-view human motion capture method based on key point clustering
CN111291611A (en) Pedestrian re-identification method and device based on Bayesian query expansion
KR100920229B1 (en) Fast systolic array system of a belief propagation and method for processing a message using the same
CN112560959B (en) Gesture matching method and device for bone animation, electronic equipment and storage medium
CN115170599A (en) Method and device for vessel segmentation through link prediction of graph neural network
CN110738082B (en) Method, device, equipment and medium for positioning key points of human face
CN111833395B (en) Direction-finding system single target positioning method and device based on neural network model
Li SuperGlue-Based Deep Learning Method for Image Matching from Multiple Viewpoints
CN111476115B (en) Human behavior recognition method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant