CN114972441A - Motion synthesis framework based on deep neural network - Google Patents
- Publication number
- CN114972441A (application CN202210735748.0A)
- Authority
- CN
- China
- Prior art keywords
- motion
- joint
- sequence
- frame
- network
- Prior art date
- 2022-06-27
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00—Image analysis › G06T7/20—Analysis of motion › G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F18/00—Pattern recognition › G06F18/20—Analysing › G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation › G06F18/211—Selection of the most significant subset of features
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F18/00—Pattern recognition › G06F18/20—Analysing › G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation › G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00—Computing arrangements based on biological models › G06N3/02—Neural networks › G06N3/04—Architecture, e.g. interconnection topology › G06N3/044—Recurrent networks, e.g. Hopfield networks
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00—Computing arrangements based on biological models › G06N3/02—Neural networks › G06N3/08—Learning methods
Abstract
The invention relates to the field of computer technology, and in particular to a motion synthesis framework based on a deep neural network, which comprises the following steps: preparing training data and normalizing joint coordinates; extracting the motion law of each motion sequence; training a motion law extraction network; training a motion synthesis network to establish the relationship between the first and last frames of a motion sequence and the motion law; and generating the corresponding motion law from given first and last frames. The invention synthesizes realistic human motion given the first and last frames of a motion sequence, and addresses the complex control and limited synthesis content of existing motion synthesis methods.
Description
Technical Field
The invention relates to the technical field of computers, in particular to a motion synthesis framework based on a deep neural network.
Background
Motion data acquired by capture devices can be used to study the characteristics of human motion, such as motion pattern recognition and motion tracking, and also enable other promising applications in fields including animation, robot control, and motion rehabilitation. However, motion capture is very expensive and limited by the range of what actors can perform, so motion synthesis is an effective means of addressing the high cost of motion capture.
Existing motion synthesis algorithms face two main problems. One class of methods shields users from non-professional operation of the synthesis process, but this reduces the controllability of motion synthesis, limits the content of the synthesized results, and makes it difficult to meet users' needs or give free rein to their imagination. The other class of methods requires users to have professional motion synthesis knowledge to complete the synthesis task successfully. The invention provides a motion synthesis framework based on a deep neural network that builds a deep model to establish the relationship between the first and last frames and the motion law, synthesizes the corresponding motion sequence from given first and last frames, and enhances the controllability of motion synthesis.
Disclosure of Invention
The present invention is directed to a motion synthesis framework based on a deep neural network, so as to solve the problems mentioned in the background art.
The technical scheme of the invention is as follows: a motion synthesis framework based on a deep neural network comprises training data, joint coordinates, the motion law of a motion sequence, the relationship between a motion sequence and its motion law, and the relationship between the first and last frames of a motion sequence and the motion law. The motion synthesis method of the framework comprises the following steps:
S1, preparing training data and normalizing joint coordinates: collecting a number of motion sequences of a single motion type as training data and converting them into joint coordinates, then normalizing the joint coordinates, and taking the relative coordinates of each joint with respect to its parent joint as the features of that joint;
S2, extracting the motion law of a subset of the motion sequences from S1: computing the angle between the position of each joint at any time and its position in the start frame, and taking the change curve of this angle as the motion law of the motion sequence;
S3, training a deep network on the normalized motion data and establishing the relationship between motion sequences and motion laws: taking the motion sequences and the motion laws extracted in S2 as training data pairs, and training an LSTM-based deep network to model the relationship between a motion sequence and its motion law;
S4, extracting the motion laws of all motion sequences using the motion law extraction network trained in S3;
S5, training a deep network on the normalized motion data and establishing the relationship between the first and last frames of a motion sequence and the motion law: taking the first and last frames of each motion sequence and the motion laws extracted in S4 as training data pairs, and training an LSTM-based deep network to model the relationship between the first and last frames and the motion law;
S6, generating the corresponding motion law, i.e. the polynomial coefficients, from the given first and last frames using the network trained in S5;
S7, computing the position of each joint at any time from the polynomial coefficients obtained in S6, thereby synthesizing a complete motion sequence.
Preferably, the position of each joint in the joint coordinates in S1 is represented by a three-dimensional vector $p = (x, y, z)$ and normalized, where the normalized coordinate $\hat{p}$ is defined as: $\hat{p} = p / \|p\|$.
Preferably, in S2 the motion law of the subset of motion sequences from S1 is extracted. The angle $\theta_t$ of the current-frame joint position relative to its start-frame position is defined as the motion law; the angle is normalized, and the direction from the start-frame joint position to the corresponding end-frame joint position is taken as positive.

The correspondence between the joint angle and the three-dimensional coordinates is expressed as:

$$\cos\theta_t = \hat{p}_s \cdot \hat{p}_t, \qquad \cos\Theta = \hat{p}_s \cdot \hat{p}_e,$$

where $\hat{p}_s$ and $\hat{p}_e$ respectively denote the position of each joint in the start frame and the end frame, and $\Theta$ denotes the angular change of the end frame relative to the start frame. The angle curve is then fitted by the least squares method:

$$a = \arg\min_{a_0,\dots,a_5} \sum_{j=1}^{M} \Bigl( \theta(t_j) - \sum_{k=0}^{5} a_k t_j^k \Bigr)^2.$$
Preferably, in S3 the deep network is trained on the normalized motion data and the relationship between motion sequences and motion laws is established: the input is a motion sequence and the output is the polynomial coefficients of the corresponding motion law. The temporal features of the preprocessed motion sequence are extracted by a three-layer LSTM network, and the corresponding motion law is output through a fully connected layer. The corresponding loss function is expressed as:

$$L = \frac{1}{NM} \sum_{i=1}^{N} \sum_{j=1}^{M} \bigl( \hat{\theta}_i(t_j) - \theta_i(t_j) \bigr)^2,$$

where $\hat{\theta}_i(t_j)$ is the joint angle computed by the network at time $t_j$, $\theta_i(t_j)$ is the actual joint angle at time $t_j$, $M$ is the number of sampling time points of the selected sequence, and $N$ is the number of input motion sequences.
Preferably, in S4 the motion law extraction network from S3 is used to extract the motion laws of all motion sequences: the network input is a motion sequence and the output is the change of the joint angle $\theta$ with respect to time $t$, represented as $\{\theta_i(t_j) \mid j = 1, \dots, M;\ i = 1, \dots, N\}$, where $M$ is the number of sampling time points of the selected sequence and $N$ is the number of input motion sequences.
Preferably, in S5 the deep network is trained on the normalized motion data and the association between the first and last frames and the motion law is established:

the change of the angle $\theta$ obtained in S2 with respect to time $t$ can be represented by a function $f$, i.e.:

$$\theta = f(t).$$

The function $f$ is approximated by a polynomial of order 5, with $a = (a_0, a_1, \dots, a_5)$ denoting the polynomial coefficients of the corresponding joint point:

$$f(t) = \sum_{k=0}^{5} a_k t^k.$$
The number of synthesis modules equals the number of human joints, i.e. a single module is responsible for extracting the motion-law features of a single joint; the input is the given first and last frames and the output is the required motion law. Each synthesis module comprises three layers of LSTM units, a batch normalization layer, and a final fully connected layer, where the LSTM network is responsible for extracting the feature information of the first and last frames. The loss function of each synthesis module is expressed as:

$$L = \frac{1}{NM} \sum_{i=1}^{N} \sum_{j=1}^{M} \Bigl( \sum_{k=0}^{5} c_{i,k}\, t_j^k - \theta_i(t_j) \Bigr)^2,$$

where $x_i$ denotes the first and last frames of the $i$-th input, $t_j$ the $j$-th sampling time point, $\theta_i(t_j)$ the angle value extracted by the S4 motion law extraction network, and $c_i = (c_{i,0}, \dots, c_{i,5})$ the polynomial coefficients of the motion law generated by the motion synthesis network.
Preferably, in S6 the corresponding motion law, i.e. the polynomial coefficients, is generated by the trained network from the given first and last frames of the motion: the network input is the first and last frames of the motion sequence and the output is the polynomial coefficients of the corresponding motion law, i.e.:

$$c = G(\hat{p}_s, \hat{p}_e),$$

where $G$ denotes the trained motion synthesis network.
Preferably, in S7 the human motion is synthesized according to the motion law. For the $j$-th joint in frame $t$, the corresponding angle $\theta_j(t)$ can be expressed as:

$$\theta_j(t) = \sum_{k=0}^{5} a_{j,k}\, t^k,$$

where $a_{j,k}$, $k = 0, \dots, 5$, are the polynomial coefficients of the motion curve of the $j$-th joint. The angle is then converted into the corresponding three-dimensional coordinates following the idea of spherical interpolation, with the formula:

$$\hat{p}_t = \frac{\sin(\Theta_j - \theta_j(t))}{\sin\Theta_j}\, \hat{p}_s + \frac{\sin\theta_j(t)}{\sin\Theta_j}\, \hat{p}_e,$$

where $\hat{p}_s$ and $\hat{p}_e$ denote the normalized coordinates of the joint in the first and last frames of the motion sequence, and $\Theta_j$ is the angle change of the $j$-th joint from the start frame to the end frame. Once the normalized position of each joint is obtained, the absolute position coordinates of each joint are computed according to the structure of the human body and the bone lengths, finally reconstructing realistic human motion.
Compared with the prior art, the motion synthesis framework based on a deep neural network provided by the invention offers the following improvements and advantages:
First: the motion synthesis framework based on a deep neural network synthesizes realistic human motion given the first and last frames of a motion sequence, solving the complex control and limited synthesis content of existing motion synthesis methods;

Second: the framework can generate natural intermediate motion from the first and last frames of a motion sequence provided by the user, which both ensures convenience of operation and allows rich motion content to be synthesized by controlling the first and last frames;

Third: the framework can be applied in many fields; in the film and television industry, it can synthesize 3D human motion to drive virtual characters; in robotics, it can synthesize special actions to drive humanoid robots; and in medical rehabilitation, it can synthesize the normal motion postures of patients with movement disorders to assist psychotherapy.
Drawings
The invention is further explained below with reference to the figures and examples:
FIG. 1 is a block flow diagram of the present invention;
FIG. 2 is a diagram of the joint coordinate normalization of the present invention;
FIG. 3 is a diagram of a law of motion extraction network of the present invention;
fig. 4 is a diagram of a motion synthesis network of the present invention.
Detailed Description
The present invention is described in detail below, and the technical solutions in the embodiments of the present invention are described clearly and completely. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The invention provides, by way of the improvements described herein, a motion synthesis framework based on a deep neural network; the technical scheme of the invention is as follows:
As shown in fig. 1, a motion synthesis framework based on a deep neural network comprises training data, joint coordinates, the motion law of a motion sequence, the relationship between a motion sequence and its motion law, and the relationship between the first and last frames of a motion sequence and the motion law. The motion synthesis method of the framework comprises the following steps:
S1, preparing training data and normalizing joint coordinates: collecting a number of motion sequences of a single motion type as training data and converting them into joint coordinates, then normalizing the joint coordinates, and taking the relative coordinates of each joint with respect to its parent joint as the features of that joint;
S2, extracting the motion law of a subset of the motion sequences from S1: computing the angle between the position of each joint at any time and its position in the start frame, and taking the change curve of this angle as the motion law of the motion sequence;
S3, training a deep network on the normalized motion data and establishing the relationship between motion sequences and motion laws: taking the motion sequences and the motion laws extracted in S2 as training data pairs, and training an LSTM-based deep network to model the relationship between a motion sequence and its motion law;
S4, extracting the motion laws of all motion sequences using the motion law extraction network trained in S3;
S5, training a deep network on the normalized motion data and establishing the relationship between the first and last frames of a motion sequence and the motion law: taking the first and last frames of each motion sequence and the motion laws extracted in S4 as training data pairs, and training an LSTM-based deep network to model the relationship between the first and last frames and the motion law;
S6, generating the corresponding motion law, i.e. the polynomial coefficients, from the given first and last frames using the network trained in S5;
S7, computing the position of each joint at any time from the polynomial coefficients obtained in S6, thereby synthesizing a complete motion sequence.
The position of each joint in the joint coordinates in S1 is represented by a three-dimensional vector $p = (x, y, z)$ and normalized, as shown in fig. 2, where the normalized coordinate $\hat{p}$ is defined as: $\hat{p} = p / \|p\|$.
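For illustration only, the normalization of S1 can be sketched in Python; this is a minimal sketch under assumptions not stated in the patent (the array shapes, the parent-index convention, and the function name `normalize_joints` are illustrative):

```python
import numpy as np

def normalize_joints(positions, parents):
    """Convert absolute joint positions to unit-length parent-relative coordinates.

    positions: (T, J, 3) array of absolute joint positions over T frames.
    parents:   length-J list; parents[j] is the index of joint j's parent (-1 for the root).
    Returns a (T, J, 3) array of normalized relative coordinates p_hat = p / ||p||.
    """
    rel = positions.copy()
    for j, par in enumerate(parents):
        if par >= 0:
            rel[:, j] = positions[:, j] - positions[:, par]  # offset from the parent joint
    norms = np.linalg.norm(rel, axis=-1, keepdims=True)
    return rel / np.clip(norms, 1e-8, None)  # each bone direction lands on the unit sphere
```

Because bone lengths are fixed, only the direction of each parent-relative offset carries motion information, which is why the later spherical-interpolation step can recover positions from angles alone.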
In S2, the motion law of the subset of motion sequences from S1 is extracted; the angle $\theta_t$ of the current-frame joint position relative to its start-frame position is defined as the motion law; the angle is normalized, and the direction from the start-frame joint position to the corresponding end-frame joint position is taken as positive. The correspondence between the joint angle and the three-dimensional coordinates is expressed as:

$$\cos\theta_t = \hat{p}_s \cdot \hat{p}_t, \qquad \cos\Theta = \hat{p}_s \cdot \hat{p}_e,$$

where $\hat{p}_s$ and $\hat{p}_e$ respectively denote the position of each joint in the start frame and the end frame, and $\Theta$ denotes the angular change of the end frame relative to the start frame. The angle curve is then fitted by the least squares method:

$$a = \arg\min_{a_0,\dots,a_5} \sum_{j=1}^{M} \Bigl( \theta(t_j) - \sum_{k=0}^{5} a_k t_j^k \Bigr)^2.$$
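A minimal Python sketch of this motion-law extraction for a single joint, assuming the formulas reconstructed above (angle via the arccosine of the dot product, normalization by the end-frame angle, and a least-squares polynomial fit via `np.polyfit`):

```python
import numpy as np

def motion_law(rel_traj, degree=5):
    """Fit the angle curve of one joint with a degree-5 polynomial.

    rel_traj: (T, 3) normalized parent-relative positions of one joint.
    Returns (coeffs, theta): coefficients in np.polyfit order (highest degree
    first) and the sampled, normalized angle curve.
    """
    p_start = rel_traj[0]
    cos_theta = np.clip(rel_traj @ p_start, -1.0, 1.0)
    theta = np.arccos(cos_theta)             # angle of each frame w.r.t. the start frame
    theta = theta / max(theta[-1], 1e-8)     # normalize; start-to-end direction is positive
    t = np.linspace(0.0, 1.0, len(theta))
    coeffs = np.polyfit(t, theta, degree)    # least-squares polynomial fit
    return coeffs, theta
```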
In S3, the deep network is trained on the normalized motion data and the relationship between motion sequences and motion laws is established: as shown in fig. 3, the input is a motion sequence and the output is the polynomial coefficients of the corresponding motion law. The temporal features of the preprocessed motion sequence are extracted by a three-layer LSTM network, and the corresponding motion law is output through a fully connected layer. The corresponding loss function is expressed as:

$$L = \frac{1}{NM} \sum_{i=1}^{N} \sum_{j=1}^{M} \bigl( \hat{\theta}_i(t_j) - \theta_i(t_j) \bigr)^2,$$

where $\hat{\theta}_i(t_j)$ is the joint angle computed by the network at time $t_j$, $\theta_i(t_j)$ is the actual joint angle at time $t_j$, $M$ is the number of sampling time points of the selected sequence, and $N$ is the number of input motion sequences.
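A minimal PyTorch sketch of the S3 motion law extraction network matching the structure described above (three stacked LSTM layers followed by a fully connected layer); the hidden size, the per-frame feature dimension, and the ascending-power coefficient ordering are assumptions:

```python
import torch
import torch.nn as nn

class MotionLawExtractor(nn.Module):
    """Maps a motion sequence to the polynomial coefficients of its motion law."""

    def __init__(self, feat_dim, hidden=128, degree=5):
        super().__init__()
        # Three stacked LSTM layers extract the temporal features of the sequence.
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=3, batch_first=True)
        # A fully connected layer outputs the degree+1 polynomial coefficients.
        self.fc = nn.Linear(hidden, degree + 1)

    def forward(self, seq):           # seq: (batch, T, feat_dim)
        out, _ = self.lstm(seq)
        return self.fc(out[:, -1])    # read the coefficients off the last time step

def law_loss(coeffs, t, theta_true):
    """MSE between angles evaluated from predicted coefficients and reference angles.

    coeffs: (N, degree+1) ascending-power coefficients, t: (M,) sampling times,
    theta_true: (N, M) reference angles; averages over N sequences and M points.
    """
    powers = t.unsqueeze(1) ** torch.arange(coeffs.shape[1], dtype=t.dtype)  # (M, degree+1)
    theta_pred = coeffs @ powers.T                                           # (N, M)
    return ((theta_pred - theta_true) ** 2).mean()
```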
In S4, the motion law extraction network trained in S3 is used to extract the motion laws of all motion sequences: the network input is a motion sequence and the output is the change of the joint angle $\theta$ with respect to time $t$, represented as $\{\theta_i(t_j) \mid j = 1, \dots, M;\ i = 1, \dots, N\}$, where $M$ is the number of sampling time points of the selected sequence and $N$ is the number of input motion sequences.
In S5, the deep network is trained on the normalized motion data and the association between the first and last frames and the motion law is established:

the change of the angle $\theta$ obtained in S2 with respect to time $t$ can be represented by a function $f$, i.e.:

$$\theta = f(t).$$

The function $f$ is approximated by a polynomial of order 5, with $a = (a_0, a_1, \dots, a_5)$ denoting the polynomial coefficients of the corresponding joint point:

$$f(t) = \sum_{k=0}^{5} a_k t^k.$$
As shown in fig. 4, the number of synthesis modules equals the number of human joints, i.e. a single module is responsible for extracting the motion-law features of a single joint; the input is the given first and last frames and the output is the required motion law. Each synthesis module comprises three layers of LSTM units, a batch normalization layer, and a final fully connected layer, where the LSTM network is responsible for extracting the feature information of the first and last frames. The loss function of each synthesis module is expressed as:

$$L = \frac{1}{NM} \sum_{i=1}^{N} \sum_{j=1}^{M} \Bigl( \sum_{k=0}^{5} c_{i,k}\, t_j^k - \theta_i(t_j) \Bigr)^2,$$

where $x_i$ denotes the first and last frames of the $i$-th input, $t_j$ the $j$-th sampling time point, $\theta_i(t_j)$ the angle value extracted by the S4 motion law extraction network, and $c_i = (c_{i,0}, \dots, c_{i,5})$ the polynomial coefficients of the motion law generated by the motion synthesis network.
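A minimal PyTorch sketch of one per-joint synthesis module under the structure described above (three LSTM layers, a batch normalization layer, and a final fully connected layer); treating the first-and-last frame pair as a length-2 sequence and the hidden size are assumptions:

```python
import torch
import torch.nn as nn

class JointSynthesisModule(nn.Module):
    """One module per joint: maps a first-and-last frame pair to the polynomial
    coefficients of that joint's motion law."""

    def __init__(self, frame_dim, hidden=128, degree=5):
        super().__init__()
        # Three LSTM layers extract the feature information of the two frames,
        # treated here as a sequence of length 2.
        self.lstm = nn.LSTM(frame_dim, hidden, num_layers=3, batch_first=True)
        self.bn = nn.BatchNorm1d(hidden)          # batch normalization layer
        self.fc = nn.Linear(hidden, degree + 1)   # final fully connected layer

    def forward(self, first_frame, last_frame):   # each: (batch, frame_dim)
        pair = torch.stack([first_frame, last_frame], dim=1)  # (batch, 2, frame_dim)
        out, _ = self.lstm(pair)
        return self.fc(self.bn(out[:, -1]))       # polynomial coefficients c_i
```

Its training target is the angle curve extracted in S4, so the `law_loss` sketch above can serve as the per-module loss.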
In S6, the corresponding motion law, i.e. the polynomial coefficients, is generated by the trained network from the given first and last frames of the motion: the network input is the first and last frames of the motion sequence and the output is the polynomial coefficients of the corresponding motion law, i.e.:

$$c = G(\hat{p}_s, \hat{p}_e),$$

where $G$ denotes the trained motion synthesis network.
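Continuing the sketch above, S6 inference would evaluate one trained module per joint on the given first and last frames; the joint count, frame dimension, and random placeholder inputs are illustrative:

```python
num_joints, frame_dim = 24, 24 * 3                 # assumed skeleton size
modules = [JointSynthesisModule(frame_dim) for _ in range(num_joints)]
first = torch.randn(1, frame_dim)                  # placeholder normalized first frame
last = torch.randn(1, frame_dim)                   # placeholder normalized last frame
for m in modules:
    m.eval()                                       # BatchNorm uses running statistics
with torch.no_grad():
    coeffs = [m(first, last) for m in modules]     # per-joint polynomial coefficients
```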
In S7, the human motion is synthesized according to the motion law. For the $j$-th joint in frame $t$, the corresponding angle $\theta_j(t)$ can be expressed as:

$$\theta_j(t) = \sum_{k=0}^{5} a_{j,k}\, t^k,$$

where $a_{j,k}$, $k = 0, \dots, 5$, are the polynomial coefficients of the motion curve of the $j$-th joint. The angle is then converted into the corresponding three-dimensional coordinates following the idea of spherical interpolation, with the formula:

$$\hat{p}_t = \frac{\sin(\Theta_j - \theta_j(t))}{\sin\Theta_j}\, \hat{p}_s + \frac{\sin\theta_j(t)}{\sin\Theta_j}\, \hat{p}_e,$$

where $\hat{p}_s$ and $\hat{p}_e$ denote the normalized coordinates of the joint in the first and last frames of the motion sequence, and $\Theta_j$ is the angle change of the $j$-th joint from the start frame to the end frame. Once the normalized position of each joint is obtained, the absolute position coordinates of each joint are computed according to the structure of the human body and the bone lengths, finally reconstructing realistic human motion.
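A minimal NumPy sketch of the S7 reconstruction for one joint, following the spherical-interpolation formula reconstructed above; it assumes the angle curve was normalized to [0, 1] as in the earlier sketch, and that coefficients are in `np.polyval` order (highest degree first):

```python
import numpy as np

def synthesize_joint(coeffs, p_start, p_end, num_frames):
    """Evaluate one joint's motion law and convert the angles to 3D coordinates.

    coeffs: degree-5 polynomial coefficients of the normalized angle curve.
    p_start, p_end: normalized parent-relative positions in the first/last frame.
    Returns a (num_frames, 3) array of normalized positions.
    """
    t = np.linspace(0.0, 1.0, num_frames)
    big_theta = np.arccos(np.clip(p_start @ p_end, -1.0, 1.0))  # total angle change
    theta = np.polyval(coeffs, t) * big_theta                   # de-normalize the angles
    s = max(np.sin(big_theta), 1e-8)                            # guard against sin(0)
    w0 = np.sin(big_theta - theta) / s
    w1 = np.sin(theta) / s
    # Absolute positions then follow by scaling each result by its bone length
    # and accumulating down the skeleton hierarchy from the root.
    return w0[:, None] * p_start + w1[:, None] * p_end          # spherical interpolation
```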
Claims (10)
1. A motion synthesis framework based on a deep neural network, characterized in that: training data are prepared and joint coordinates are normalized; the motion law of each motion sequence is extracted; a motion law extraction network is trained; a motion synthesis network is trained to establish the relationship between the first and last frames of a motion sequence and the motion law; the corresponding motion law is generated from given first and last frames; and the generated motion law is converted into the position of each joint at any time to synthesize a complete motion sequence, wherein the motion synthesis method of the motion synthesis framework comprises the following steps:
S1, preparing training data and normalizing joint coordinates: collecting a number of motion sequences of a single motion type as training data and converting them into joint coordinates, then normalizing the joint coordinates, and taking the relative coordinates of each joint with respect to its parent joint as the features of that joint;
S2, extracting the motion law of a subset of the motion sequences from S1: computing the angle between the position of each joint at any time and its position in the start frame, and taking the change curve of this angle as the motion law of the motion sequence;
S3, training a deep network on the normalized motion data and establishing the relationship between motion sequences and motion laws: taking the motion sequences and the motion laws extracted in S2 as training data pairs, and training an LSTM-based deep network to model the relationship between a motion sequence and its motion law;
S4, extracting the motion laws of all motion sequences using the motion law extraction network trained in S3;
S5, training a deep network on the normalized motion data and establishing the relationship between the first and last frames of a motion sequence and the motion law: taking the first and last frames of each motion sequence and the motion laws extracted in S4 as training data pairs, and training an LSTM-based deep network to model the relationship between the first and last frames and the motion law;
S6, generating the corresponding motion law, i.e. the polynomial coefficients, from the given first and last frames using the network trained in S5;
S7, computing the position of each joint at any time from the polynomial coefficients obtained in S6, thereby synthesizing a complete motion sequence.
3. The deep neural network-based motion synthesis framework of claim 1, wherein: in S2 the motion law of the subset of motion sequences from S1 is extracted, defining the angle $\theta_t$ of the current-frame joint position relative to its start-frame position as the motion law; the angle is normalized, and the direction from the start-frame joint position to the corresponding end-frame joint position is taken as positive;

the correspondence between the joint angle and the three-dimensional coordinates is expressed as:

$$\cos\theta_t = \hat{p}_s \cdot \hat{p}_t, \qquad \cos\Theta = \hat{p}_s \cdot \hat{p}_e,$$

where $\hat{p}_s$ and $\hat{p}_e$ respectively denote the position of each joint in the start frame and the end frame, and $\Theta$ denotes the angular change of the end frame relative to the start frame; the fit is then solved by least squares:

$$a = \arg\min_{a_0,\dots,a_5} \sum_{j=1}^{M} \Bigl( \theta(t_j) - \sum_{k=0}^{5} a_k t_j^k \Bigr)^2.$$
4. The deep neural network-based motion synthesis framework of claim 1, wherein: in S3 the normalized motion data are used to train the deep network and establish the relationship between motion sequences and motion laws; the input is a motion sequence and the output is the polynomial coefficients of the corresponding motion law; the preprocessed motion sequence first passes through a three-layer LSTM network to extract its temporal features, and then through a fully connected layer to output the corresponding motion law, with the corresponding loss function expressed as:

$$L = \frac{1}{NM} \sum_{i=1}^{N} \sum_{j=1}^{M} \bigl( \hat{\theta}_i(t_j) - \theta_i(t_j) \bigr)^2.$$
5. The deep neural network-based motion synthesis framework of claim 1, wherein: in S4 the motion law extraction network from S3 is used to extract the motion laws of all motion sequences, the output being represented as $\{\theta_i(t_j) \mid j = 1, \dots, M;\ i = 1, \dots, N\}$.
6. The deep neural network-based motion synthesis framework of claim 1, wherein: in S5 the normalized motion data are used to train the deep network and establish the relationship between the first and last frames and the motion law:

the change of the angle $\theta$ obtained in S2 with respect to time $t$ can be represented by a function $f$, i.e. $\theta = f(t)$;

the function $f$ is approximated by a polynomial of order 5, with $a = (a_0, a_1, \dots, a_5)$ denoting the coefficients of the polynomial of the corresponding joint point;

the number of synthesis modules equals the number of human joints, i.e. a single module is responsible for extracting the motion-law features of a single joint; the input is the given first and last frames and the output is the required motion law.
7. The synthesis module comprises three layers of LSTM units, a batch normalization layer, and a final fully connected layer, and the LSTM network is responsible for extracting the feature information of the first and last frames; the loss function of each synthesis module is expressed as:

$$L = \frac{1}{NM} \sum_{i=1}^{N} \sum_{j=1}^{M} \Bigl( \sum_{k=0}^{5} c_{i,k}\, t_j^k - \theta_i(t_j) \Bigr)^2,$$

where $x_i$ denotes the first and last frames of the $i$-th input, $t_j$ the $j$-th sampling time point, $\theta_i(t_j)$ the angle value extracted by the S4 motion law extraction network, and $c_i$ the polynomial coefficients of the motion law generated by the motion synthesis network.
8. The deep neural network-based motion synthesis framework of claim 1, wherein: in S6 the corresponding motion law, i.e. the polynomial coefficients, is generated by the trained network from the given first and last frames of the motion: the network input is the first and last frames of the motion sequence and the output is the polynomial coefficients of the corresponding motion law, i.e.:

$$c = G(\hat{p}_s, \hat{p}_e).$$
9. The deep neural network-based motion synthesis framework of claim 1, wherein: in S7 the human motion is synthesized according to the motion law; for the $j$-th joint in frame $t$, the corresponding angle is expressed as:

$$\theta_j(t) = \sum_{k=0}^{5} a_{j,k}\, t^k,$$

where $a_{j,k}$, $k = 0, \dots, 5$, are the polynomial coefficients of the motion curve of the $j$-th joint; the angle is then converted into the corresponding three-dimensional coordinates following the idea of spherical interpolation, with the formula:

$$\hat{p}_t = \frac{\sin(\Theta_j - \theta_j(t))}{\sin\Theta_j}\, \hat{p}_s + \frac{\sin\theta_j(t)}{\sin\Theta_j}\, \hat{p}_e,$$

where $\hat{p}_s$ and $\hat{p}_e$ denote the normalized coordinates of the joint in the first and last frames of the motion sequence, and $\Theta_j$ is the angle change of the $j$-th joint from the start frame to the end frame; once the normalized position of each joint is obtained, the absolute position coordinates of each joint are computed according to the structure of the human body and the bone lengths, finally reconstructing realistic human motion.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210735748.0A CN114972441A (en) | 2022-06-27 | 2022-06-27 | Motion synthesis framework based on deep neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210735748.0A CN114972441A (en) | 2022-06-27 | 2022-06-27 | Motion synthesis framework based on deep neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114972441A true CN114972441A (en) | 2022-08-30 |
Family
ID=82965826
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210735748.0A Pending CN114972441A (en) | 2022-06-27 | 2022-06-27 | Motion synthesis framework based on deep neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114972441A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110853670A (en) * | 2019-11-04 | 2020-02-28 | 南京理工大学 | Music-driven dance generating method |
CN111310641A (en) * | 2020-02-12 | 2020-06-19 | 南京信息工程大学 | Motion synthesis method based on spherical nonlinear interpolation |
WO2021234151A1 (en) * | 2020-05-22 | 2021-11-25 | Motorica Ab | Speech-driven gesture synthesis |
CN111681321A (en) * | 2020-06-05 | 2020-09-18 | 大连大学 | Method for synthesizing three-dimensional human motion by using recurrent neural network based on layered learning |
CN114170353A (en) * | 2021-10-21 | 2022-03-11 | 北京航空航天大学 | Multi-condition control dance generation method and system based on neural network |
Non-Patent Citations (5)

Title |
---|
GUIYU XIA et al.: "A Deep Learning Framework for Start–End Frame Pair-Driven Motion Synthesis" * |
WENLIN ZHUANG et al.: "Towards 3D Dance Motion Synthesis and Control" * |
YI ZHOU et al.: "AUTO-CONDITIONED RECURRENT NETWORKS FOR EXTENDED COMPLEX HUMAN MOTION SYNTHESIS" * |
ZHUANG Wenlin: "人体运动建模与合成" (Human Motion Modeling and Synthesis) * |
PENG Shujuan et al.: "人体运动生成中的深度学习模型综述" (A Survey of Deep Learning Models for Human Motion Generation) * |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20220830 |