CN111310641A - Motion synthesis method based on spherical nonlinear interpolation - Google Patents


Info

Publication number
CN111310641A
CN111310641A (application CN202010087956.5A)
Authority
CN
China
Prior art keywords
motion
joint
frame
dictionary
angle
Prior art date
Legal status
Pending
Application number
CN202010087956.5A
Other languages
Chinese (zh)
Inventor
夏贵羽 (Xia Guiyu)
薛鹏 (Xue Peng)
马芙蓉 (Ma Furong)
Current Assignee
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202010087956.5A
Publication of CN111310641A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G06V40/25 Recognition of walking or running movements, e.g. gait recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a motion synthesis method based on spherical nonlinear interpolation, in the field of computers. The method comprises the following steps: preparing training data and standardizing the joint coordinates; extracting the motion law of each motion sequence; learning a motion law dictionary and a motion frame dictionary from the standardized motion data; obtaining the sparse representation coefficients of the head and tail frames on the motion frame dictionary with an orthogonal matching pursuit algorithm; reconstructing the motion law on the motion law dictionary; and synthesizing a complete motion sequence. The invention can be applied in three fields: in the film and television industry, it can synthesize 3D human motion to drive virtual characters; in robotics, it can synthesize special actions to drive humanoid robots; and in medical rehabilitation, it can synthesize normal motion postures for patients with dyskinesia to assist psychotherapy.

Description

Motion synthesis method based on spherical nonlinear interpolation
Technical Field
The invention relates to the field of computers, in particular to human motion modeling, and specifically to a motion synthesis method based on spherical nonlinear interpolation.
Background
Motion data acquired by capture devices can be used not only for studying the characteristics of body motion, such as motion pattern recognition and motion tracking, but also for other promising applications, including animation, robot driving, and motion rehabilitation. However, motion capture is extremely expensive and the process is complex, so motion synthesis has become an effective means of reducing the high cost of motion data acquisition.
Existing motion synthesis algorithms tend to develop in two directions. The first direction avoids the negative influence of non-professional user operation on the synthesis process and its results; such methods usually expose only a few interfaces through which the user controls synthesis, so the content of the results is limited, user requirements are difficult to satisfy, and creativity is hard to exercise. In the second direction, the control process is too complex and the barrier to use is high; users often need professional motion synthesis knowledge to complete a synthesis task successfully. The spherical nonlinear interpolation algorithm provided by the invention generates natural intermediate motion from user-supplied head and tail frames of a motion sequence, which both ensures convenient operation and allows rich motion content to be synthesized by controlling the head and tail frames.
Disclosure of Invention
To solve the above problems, the invention provides a motion synthesis method based on spherical nonlinear interpolation, which synthesizes realistic human motion given the head and tail frames of a motion sequence, addressing the complex control and limited synthesis content of existing motion synthesis methods.
The technical scheme of the invention is as follows: a motion synthesis method based on spherical nonlinear interpolation, comprising the following steps:
step 1.1: prepare training data and standardize joint coordinates: collect a number of motion sequences of a single motion type as training data, and standardize the joint coordinates, i.e. represent each joint by its coordinates relative to its parent joint;
step 1.2: extract the motion law of each motion sequence: calculate the angle between a joint's position at any moment and its position in the initial frame, construct a polynomial relation between this angle and the time variable, and take the polynomial coefficients as the motion law of the sequence;
step 1.3: learn a motion law dictionary and a motion frame dictionary from the standardized motion data: take the head and tail frames of a motion sequence together with the motion law extracted in step 1.2 as a training data pair, and train the two dictionaries simultaneously by joint dictionary learning, constructing the relationship between them;
step 1.4: according to the dictionaries learned in step 1.3 and the given head and tail frames, obtain the sparse representation coefficients of the head and tail frames on the motion frame dictionary using the orthogonal matching pursuit algorithm;
step 1.5: reconstruct the motion law, i.e. the polynomial coefficients, on the motion law dictionary from the sparse representation coefficients obtained in step 1.4;
step 1.6: from the polynomial coefficients obtained in step 1.5, compute the position of each joint at any moment, thereby synthesizing a complete motion sequence.
The invention has the beneficial effects that: the invention can be mainly applied in three fields: (1) in the field of the film and television industry, the method can be used for synthesizing 3D human body movement to drive virtual characters; (2) in the field of robots, the method can synthesize special actions to drive the humanoid robot; (3) in the field of medical rehabilitation, the method can be used for synthesizing the normal movement posture of a patient with dyskinesia so as to assist psychotherapy.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further explained in detail by combining the attached drawings:
As shown in FIG. 1, the motion laws and the head and tail frames are first extracted from the training data, and two dictionaries, a motion frame dictionary and a motion law dictionary, are then learned through joint dictionary learning. Given head and tail frames as input, their sparse representation on the motion frame dictionary is obtained, the motion law is reconstructed on the motion law dictionary from that sparse representation, and finally a complete motion sequence is generated through spherical interpolation.
The invention discloses a motion synthesis method based on spherical nonlinear interpolation, which specifically comprises the following steps:
step 1.1: prepare training data and standardize joint coordinates: collect a number of motion sequences of a single motion type as training data, and standardize the joint coordinates, i.e. represent each joint by its coordinates relative to its parent joint;
The position of each joint is expressed as a three-dimensional vector $x_j \in \mathbb{R}^3$ and normalized, where the normalized coordinate $x$ is defined as:

$$x = \frac{x_j - x_{j_p}}{\left\| x_j - x_{j_p} \right\|_2} \qquad (1)$$

where $x_{j_p}$ represents the coordinates of the parent joint of joint $j$.
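A minimal sketch of this parent-relative normalization (the function name `normalize_joints` and the `parents` index table are illustrative assumptions, not from the patent): each non-root joint becomes a unit vector pointing away from its parent, so every joint moves on a unit sphere, which is what makes spherical interpolation applicable later.

```python
import numpy as np

def normalize_joints(joints, parents):
    """Express each joint as a unit-length offset from its parent joint.

    joints:  (d, 3) array of absolute joint positions for one frame.
    parents: length-d sequence; parents[j] is the parent index of joint j
             (-1 marks the root, which is kept as-is here).
    """
    out = np.zeros_like(joints, dtype=float)
    for j, p in enumerate(parents):
        if p < 0:
            out[j] = joints[j]              # root joint: no parent to subtract
        else:
            v = joints[j] - joints[p]       # relative coordinate w.r.t. parent
            out[j] = v / np.linalg.norm(v)  # normalize: point on unit sphere
    return out
```

For a simple 3-joint chain, every non-root row of the result has unit norm, regardless of the bone lengths in the input.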
Step 1.2: extract the motion law of the motion sequence: calculate the angle between a joint's position at any moment and its position in the initial frame, construct a polynomial relation between this angle and the time variable, and take the polynomial coefficients as the motion law of the sequence.
$\tau$ is defined as the angle of the joint position in the current frame relative to its position in the starting frame; the angle $\tau$ is normalized, and the direction from the starting-frame joint position to the corresponding ending-frame joint position is defined as positive.
The correspondence between the angle $\tau$ of a joint position and its three-dimensional coordinates is expressed as:

$$x(\tau) = \frac{\sin\!\big((1-\tau)\theta\big)}{\sin\theta}\, x_s + \frac{\sin(\tau\theta)}{\sin\theta}\, x_e \qquad (2)$$

where $x_s$ and $x_e$ respectively represent the position of each joint in the starting frame and the ending frame, and $\theta$ represents the angle change of the ending frame relative to the starting frame. The sequence of angles $\tau$ with respect to time $t$ is then obtained by the least squares method, namely:

$$\tau_t = \arg\min_{\tau}\; \left\| x_r - x(\tau) \right\|_2^2 \qquad (3)$$

where $x_r$ is the coordinate of the real position. A corresponding function curve is then fitted, and the correspondence between the angle $\tau$ and time $t$ is represented by a function $g(t)$, that is:

$$\tau = g(t) \qquad (4)$$

where $g(t)$ is a polynomial of order 5, with $p^j = [p^j_0, p^j_1, \ldots, p^j_5]^T$ representing the polynomial coefficients corresponding to joint $j$.
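The procedure above (recovering a per-frame angle by least squares against spherical interpolation, then fitting the 5th-order polynomial τ = g(t)) can be sketched as follows; the per-frame least-squares fit is approximated here by a dense grid search over candidate angles, and the function names are mine, assuming unit-vector joint trajectories from step 1.1:

```python
import numpy as np

def slerp(xs, xe, tau):
    """Spherical interpolation between unit vectors xs and xe at fraction tau."""
    theta = np.arccos(np.clip(np.dot(xs, xe), -1.0, 1.0))  # assumes 0 < theta < pi
    return (np.sin((1 - tau) * theta) * xs + np.sin(tau * theta) * xe) / np.sin(theta)

def extract_motion_law(traj, degree=5, grid=1000):
    """traj: (T, 3) unit-vector trajectory of one joint (frame 0 = start frame,
    frame T-1 = end frame). Returns the polynomial coefficients of tau(t) in
    increasing order of degree, i.e. the 'motion law' of this joint."""
    xs, xe = traj[0], traj[-1]
    taus = np.linspace(0.0, 1.0, grid)
    cands = np.stack([slerp(xs, xe, t) for t in taus])
    # least squares: pick the tau whose interpolated point is closest to x_r
    tau_seq = np.array([taus[np.argmin(np.linalg.norm(cands - xr, axis=1))]
                        for xr in traj])
    t = np.linspace(0.0, 1.0, len(traj))
    return np.polynomial.polynomial.polyfit(t, tau_seq, degree)  # tau = g(t)
```

Fitting a trajectory generated with tau = t² recovers a polynomial whose value at t = 0.5 is close to 0.25, up to grid resolution.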
Step 1.3: learning a motion law dictionary and a motion frame dictionary from the normalized motion data: taking the head frame and the tail frame of a motion sequence and the motion rules extracted in the step 1.2 as a training data pair, and simultaneously training a motion rule dictionary and a motion frame dictionary in a joint dictionary learning mode to construct the relationship between the motion rule dictionary and the motion frame dictionary;
The motion law dictionary is expressed as $D_p = [d_1, d_2, \ldots, d_n]$ and the motion frame dictionary as $D_f = [q_1, q_2, \ldots, q_n]$, where $n$ is the number of atoms of each dictionary. The dictionary learning objective function is expressed as:

$$\min_{W, D_f, D_p} \sum_{i=1}^{N_{tra}} \Big( \left\| f'_i - D_f\,\omega_i \right\|_2^2 + \beta \left\| p'_i - D_p\,\omega_i \right\|_2^2 \Big) \qquad (5)$$
$$\text{s.t. } \|\omega_i\|_0 \le Q,\; i = 1, 2, 3, \ldots, N_{tra}; \qquad \|d_j\|_2 \le 1,\; \|q_j\|_2 \le 1,\; j = 1, 2, 3, \ldots, n$$

where $W = [\omega_1, \omega_2, \ldots, \omega_{N_{tra}}]$. The extracted motion laws are expressed as $P = [p'_1, p'_2, \ldots, p'_{N_{tra}}]$, where $N_{tra}$ represents the number of collected motion sequences, $p'_i = [p_{i1}^T, p_{i2}^T, \ldots, p_{id}^T]^T$ represents the motion laws of the $d$ joints in the $i$th motion sequence, and $p_{ij}$ the motion law of the $j$th joint of the $i$th motion sequence. $F = [f'_1, f'_2, \ldots, f'_{N_{tra}}]$ represents the head and tail frames of all motion data, $f'_i$ denotes the $i$th pair of head and tail frames in the training set $X$, $f'_i = [f_s^T, f_e^T]^T$, where $f_s$ denotes the joint positions of the start frame and $f_e$ those of the end frame; $f = [x_1^T, x_2^T, \ldots, x_d^T]^T$ represents a motion frame, where $d$ is the number of joints of the human body and $x_j$ the position of the $j$th joint.
In solving for $W$, $D_f$, $D_p$, an alternating iteration method is adopted, as follows:

(1) Fix $D_f$, $D_p$ and obtain $W$: each $\omega_i$ ($\omega_i$ is the $i$th column of $W$) is computed by the orthogonal matching pursuit algorithm as:

$$\omega_i = \arg\min_{\omega}\; \|f'_i - D_f\,\omega\|_2^2 + \beta\, \|p'_i - D_p\,\omega\|_2^2 \quad \text{s.t. } \|\omega\|_0 \le Q \qquad (6)$$

In the orthogonal matching pursuit algorithm, the correlation $\gamma = D^T y$ needs to be calculated first; it is defined through:

$$y^T D = (f'_i)^T D_f + \beta\, (p'_i)^T D_p \qquad (7)$$
$$D^T D = D_f^T D_f + \beta\, D_p^T D_p \qquad (8)$$

and from this, the value of $\omega_i$ at the $(s+1)$th iteration is:

$$(\omega_i)_{\Lambda_s} = \big(D_{\Lambda_s}^T D_{\Lambda_s}\big)^{-1} D_{\Lambda_s}^T y \qquad (9)$$

where $\Lambda_s$ denotes the set of atoms selected by the $s$th iteration and $D_{\Lambda_s}$ the corresponding atoms.

(2) Fix $W$ and calculate $D_p$ and $D_f$, obtaining:

$$D_p = P W^T (W W^T)^{-1} \qquad (10)$$
$$D_f = F W^T (W W^T)^{-1}. \qquad (11)$$
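The dictionary updates of eqs. (10) and (11) are closed-form least-squares solves; a sketch follows (the function name and the trailing atom renormalization, which enforces the unit-norm atom constraints, are my assumptions about how the update would be applied in practice):

```python
import numpy as np

def update_dictionaries(W, P, F):
    """Closed-form dictionary update: D = X W^T (W W^T)^{-1} for X in {P, F}.

    W: (n, N) sparse codes, one column per training pair.
    P: (mp, N) stacked motion laws; F: (mf, N) stacked head/tail frames.
    """
    G = W @ W.T                               # n x n Gram matrix, assumed invertible
    Dp = np.linalg.solve(G.T, (P @ W.T).T).T  # P W^T (W W^T)^{-1}
    Df = np.linalg.solve(G.T, (F @ W.T).T).T  # F W^T (W W^T)^{-1}
    # keep the atom-norm constraints ||d_j||_2 <= 1, ||q_j||_2 <= 1
    Dp /= np.maximum(np.linalg.norm(Dp, axis=0, keepdims=True), 1.0)
    Df /= np.maximum(np.linalg.norm(Df, axis=0, keepdims=True), 1.0)
    return Dp, Df
```

Using `np.linalg.solve` on the transposed system avoids forming the explicit inverse of the Gram matrix.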
step 1.4: according to the motion law dictionary and the motion frame dictionary learned in step 1.3 and the given head and tail frames, obtain the sparse representation coefficients of the head and tail frames on the motion frame dictionary using the orthogonal matching pursuit algorithm;
the following optimization problem is solved by using an orthogonal matching pursuit algorithm (algorithm 1) to obtain the dictionary D of the given head and tail frames ffSparse representation coefficient of (1)
Figure BDA0002382695880000053
Namely:
Figure BDA0002382695880000054
step 1.5: reconstructing a motion rule, namely a polynomial coefficient, on a motion rule dictionary by using the sparse representation coefficient obtained in the step 1.4;
in this step, the sparse representation is used
Figure BDA0002382695880000055
Reconstructing a corresponding motion law in the motion law dictionary, which can be expressed as:
Figure BDA0002382695880000056
wherein
Figure BDA0002382695880000057
Is to generate polynomial coefficients corresponding to the law of motion from a given first and last frame f'.
Step 1.6: according to the polynomial coefficient obtained in the step 1.5, the position of each joint at any moment is obtained, and therefore a complete motion sequence is synthesized;
The human body motion is then synthesized according to the motion law. The angle $\tau_i^j$ corresponding to the $j$th joint of the $i$th frame can be expressed as:

$$\tau_i^j = g^j(t_i) = \sum_{k=0}^{5} \hat{p}_k^j\, t_i^k, \qquad t_i = \frac{i}{N_{in}} \qquad (14)$$

where $N_{in}$ indicates the number of motion frames that need to be interpolated and $\hat{p}^j = [\hat{p}_0^j, \ldots, \hat{p}_5^j]$ are the polynomial coefficients of the $j$th joint motion curve. The angle is then converted into the corresponding three-dimensional coordinate $x_i^j$ following the idea of spherical interpolation, with the formula:

$$x_i^j = \frac{\sin\!\big((1-\tau_i^j)\theta_j\big)}{\sin\theta_j}\, x_s^j + \frac{\sin(\tau_i^j\theta_j)}{\sin\theta_j}\, x_e^j \qquad (15)$$

where $x_s^j$ and $x_e^j$ represent the normalized coordinates of the $j$th joint in the head and tail frames of the motion sequence, and $\theta_j$ is the angle change from the $j$th joint's start frame to its end frame. Once the normalized position of each joint is obtained, the absolute position coordinates of each joint are calculated according to the structure of the human body and the bone lengths, finally reconstructing the real human motion.
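The per-joint synthesis described above (evaluate the fitted polynomial to get the angle fraction at each interpolated frame, then map each angle onto the sphere between the normalized head- and tail-frame positions) can be sketched as follows; the function name and the clipping of tau to [0, 1] are my assumptions:

```python
import numpy as np

def synthesize_joint(p_hat, xs, xe, n_frames):
    """Synthesize one joint's normalized trajectory from its motion law.

    p_hat:    degree-5 polynomial coefficients (increasing order) for this joint.
    xs, xe:   normalized joint positions in the head and tail frames (unit vectors).
    n_frames: number of frames to interpolate (N_in).
    Returns an (n_frames, 3) trajectory of normalized coordinates.
    """
    t = np.linspace(0.0, 1.0, n_frames)
    tau = np.polynomial.polynomial.polyval(t, p_hat)       # tau_i = g(t_i)
    tau = np.clip(tau, 0.0, 1.0)                           # keep angle fractions valid
    theta = np.arccos(np.clip(np.dot(xs, xe), -1.0, 1.0))  # total angle change
    return (np.sin((1 - tau)[:, None] * theta) * xs +
            np.sin(tau[:, None] * theta) * xe) / np.sin(theta)
```

With the identity motion law tau(t) = t this reduces to uniform spherical linear interpolation between the two endpoint positions.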

Claims (7)

1. A motion synthesis method based on spherical nonlinear interpolation, characterized by comprising the following steps:
step 1.1: prepare training data and standardize joint coordinates: collect a number of motion sequences of a single motion type as training data, and standardize the joint coordinates, i.e. represent each joint by its coordinates relative to its parent joint;
step 1.2: extract the motion law of the motion sequence: calculate the angle between a joint's position at any moment and its position in the initial frame, construct a polynomial relation between this angle and the time variable, and take the polynomial coefficients as the motion law of the sequence;
step 1.3: learn a motion law dictionary and a motion frame dictionary from the standardized motion data: take the head and tail frames of a motion sequence together with the motion law extracted in step 1.2 as a training data pair, and train the two dictionaries simultaneously by joint dictionary learning, constructing the relationship between them;
step 1.4: according to the dictionaries learned in step 1.3 and the given head and tail frames, obtain the sparse representation coefficients of the head and tail frames on the motion frame dictionary using the orthogonal matching pursuit algorithm;
step 1.5: reconstruct the motion law, i.e. the polynomial coefficients, on the motion law dictionary from the sparse representation coefficients obtained in step 1.4;
step 1.6: from the polynomial coefficients obtained in step 1.5, compute the position of each joint at any moment, thereby synthesizing a complete motion sequence.
2. The motion synthesis method based on spherical nonlinear interpolation of claim 1, wherein in step 1.1 the position of each joint is expressed as a three-dimensional vector $x_j \in \mathbb{R}^3$ and normalized, the normalized coordinate $x$ being defined as:

$$x = \frac{x_j - x_{j_p}}{\left\| x_j - x_{j_p} \right\|_2}$$

where $x_{j_p}$ represents the coordinates of the parent joint of joint $j$.
3. The motion synthesis method based on spherical nonlinear interpolation of claim 1, wherein in step 1.2, $\tau$ is defined as the angle of the joint position in the current frame relative to its position in the starting frame, the angle $\tau$ is normalized, and the direction from the starting-frame joint position to the corresponding ending-frame joint position is defined as positive;
the correspondence between the angle $\tau$ of the joint position and the three-dimensional coordinates is expressed as:

$$x(\tau) = \frac{\sin\!\big((1-\tau)\theta\big)}{\sin\theta}\, x_s + \frac{\sin(\tau\theta)}{\sin\theta}\, x_e$$

wherein $x_s$ and $x_e$ indicate the position of each joint in the start frame and the end frame respectively, and $\theta$ indicates the angle change of the end frame relative to the start frame; the sequence of angles $\tau$ with respect to time $t$ is then obtained by the least squares method, namely:

$$\tau_t = \arg\min_{\tau}\; \left\| x_r - x(\tau) \right\|_2^2$$

in the above formula, $x_r$ is the coordinate of the real position; a corresponding function curve is then fitted, and the correspondence between the angle $\tau$ and time $t$ is represented by a function $g(t)$, that is:

$$\tau = g(t)$$

said function $g(t)$ being a polynomial of order 5, with $p^j = [p^j_0, p^j_1, \ldots, p^j_5]^T$ representing the polynomial coefficients corresponding to joint $j$.
4. The motion synthesis method based on spherical nonlinear interpolation of claim 1, wherein in step 1.3 the motion law dictionary is expressed as $D_p = [d_1, d_2, \ldots, d_n]$ and the motion frame dictionary as $D_f = [q_1, q_2, \ldots, q_n]$, $n$ being the number of atoms of each dictionary; the dictionary learning objective function is expressed as:

$$\min_{W, D_f, D_p} \sum_{i=1}^{N_{tra}} \Big( \left\| f'_i - D_f\,\omega_i \right\|_2^2 + \beta \left\| p'_i - D_p\,\omega_i \right\|_2^2 \Big)$$
$$\text{s.t. } \|\omega_i\|_0 \le Q,\; i = 1, 2, 3, \ldots, N_{tra}; \qquad \|d_j\|_2 \le 1,\; \|q_j\|_2 \le 1,\; j = 1, 2, 3, \ldots, n$$

wherein $W = [\omega_1, \omega_2, \ldots, \omega_{N_{tra}}]$; the extracted motion laws are expressed as $P = [p'_1, p'_2, \ldots, p'_{N_{tra}}]$, $N_{tra}$ represents the number of collected motion sequences, $p'_i = [p_{i1}^T, p_{i2}^T, \ldots, p_{id}^T]^T$ represents the motion laws of the $d$ joints in the $i$th motion sequence, and $p_{ij}$ the motion law of the $j$th joint of the $i$th motion sequence; $F = [f'_1, f'_2, \ldots, f'_{N_{tra}}]$ represents the head and tail frames of all motion data, $f'_i$ denotes the $i$th pair of head and tail frames in the training set $X$, $f'_i = [f_s^T, f_e^T]^T$, where $f_s$ denotes the joint positions of the start frame and $f_e$ those of the end frame; $f = [x_1^T, x_2^T, \ldots, x_d^T]^T$ represents a motion frame, where $d$ is the number of joints of the human body and $x_j$ indicates the position of the $j$th joint.
5. The motion synthesis method based on spherical nonlinear interpolation of claim 1, wherein in step 1.4 the sparse representation coefficient $\omega^*$ of the given head and tail frames $f'$ on the dictionary $D_f$ is obtained by the orthogonal matching pursuit algorithm, namely:

$$\omega^* = \arg\min_{\omega}\; \left\| f' - D_f\,\omega \right\|_2^2 \quad \text{s.t. } \|\omega\|_0 \le Q$$
6. The motion synthesis method based on spherical nonlinear interpolation of claim 1, wherein in step 1.5 the sparse representation $\omega^*$ is used to reconstruct the corresponding motion law on the motion law dictionary, which can be expressed as:

$$\hat{p}' = D_p\,\omega^*$$

where $\hat{p}'$ is the polynomial coefficients of the motion law generated from the given head and tail frames $f'$.
7. The motion synthesis method based on spherical nonlinear interpolation of claim 1, wherein in step 1.6 the human body motion is synthesized according to the motion law; the angle $\tau_i^j$ corresponding to the $j$th joint of the $i$th frame can be expressed as:

$$\tau_i^j = g^j(t_i) = \sum_{k=0}^{5} \hat{p}_k^j\, t_i^k, \qquad t_i = \frac{i}{N_{in}}$$

wherein $N_{in}$ indicates the number of motion frames that need to be interpolated and $\hat{p}^j = [\hat{p}_0^j, \ldots, \hat{p}_5^j]$ are the polynomial coefficients of the $j$th joint motion curve; the angle is then converted into the corresponding three-dimensional coordinate $x_i^j$ following the idea of spherical interpolation, with the formula:

$$x_i^j = \frac{\sin\!\big((1-\tau_i^j)\theta_j\big)}{\sin\theta_j}\, x_s^j + \frac{\sin(\tau_i^j\theta_j)}{\sin\theta_j}\, x_e^j$$

where $x_s^j$ and $x_e^j$ represent the normalized coordinates of the $j$th joint in the head and tail frames of the motion sequence, and $\theta_j$ is the angle change from the $j$th joint's start frame to its end frame; once the normalized position of each joint is obtained, the absolute position coordinates of each joint are calculated according to the structure of the human body and the bone lengths, finally reconstructing the real human motion.
CN202010087956.5A (priority and filing date 2020-02-12) Motion synthesis method based on spherical nonlinear interpolation, Pending, CN111310641A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010087956.5A CN111310641A (en) 2020-02-12 2020-02-12 Motion synthesis method based on spherical nonlinear interpolation


Publications (1)

Publication Number Publication Date
CN111310641A 2020-06-19

Family

ID=71148888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010087956.5A Pending CN111310641A (en) 2020-02-12 2020-02-12 Motion synthesis method based on spherical nonlinear interpolation

Country Status (1)

Country Link
CN (1) CN111310641A (en)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107993248A (en) * 2017-10-27 2018-05-04 浙江理工大学 A kind of exercise data distortion restoration methods


Non-Patent Citations (1)

Title
G. Xia et al., "Learning-Based Sphere Nonlinear Interpolation for Motion Synthesis", IEEE Transactions on Industrial Informatics *

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN111860356A (en) * 2020-07-23 2020-10-30 中国电子科技集团公司第五十四研究所 Polarization SAR image classification method based on nonlinear projection dictionary pair learning
CN111860356B (en) * 2020-07-23 2022-07-01 中国电子科技集团公司第五十四研究所 Polarization SAR image classification method based on nonlinear projection dictionary pair learning
CN114972441A (en) * 2022-06-27 2022-08-30 南京信息工程大学 Motion synthesis framework based on deep neural network

Similar Documents

Publication Publication Date Title
CN108161882B (en) Robot teaching reproduction method and device based on augmented reality
CN110827342B (en) Three-dimensional human body model reconstruction method, storage device and control device
CN110570455B (en) Whole body three-dimensional posture tracking method for room VR
WO2021169839A1 (en) Action restoration method and device based on skeleton key points
Chao et al. A robot calligraphy system: From simple to complex writing by human gestures
CN111160164B (en) Action Recognition Method Based on Human Skeleton and Image Fusion
Metaxas et al. Constrained deformable superquadrics and nonrigid motion tracking
CN111553968B (en) Method for reconstructing animation of three-dimensional human body
Ilg et al. On the representation, learning and transfer of spatio-temporal movement characteristics
CN112837215B (en) Image shape transformation method based on generation countermeasure network
CN111310641A (en) Motion synthesis method based on spherical nonlinear interpolation
Zhu et al. Human motion generation: A survey
CN111709270B (en) Three-dimensional shape recovery and attitude estimation method and device based on depth image
CN111462274A Human body image synthesis method and system based on SMPL model
CN114782596A (en) Voice-driven human face animation generation method, device, equipment and storage medium
Liu et al. Real-time robotic mirrored behavior of facial expressions and head motions based on lightweight networks
Pang et al. Basicnet: Lightweight 3d hand pose estimation network based on biomechanical structure information for dexterous manipulator teleoperation
CN111539288B (en) Real-time detection method for gestures of both hands
CN113012268A (en) Method, system, device and medium for dynamic motion of static pedestrian image
CN116740290A (en) Three-dimensional interaction double-hand reconstruction method and system based on deformable attention
Kao et al. Design and implementation of interaction system between humanoid robot and human hand gesture
Aleotti et al. Arm gesture recognition and humanoid imitation using functional principal component analysis
CN116749168A (en) Rehabilitation track acquisition method based on gesture teaching
CN116361512A (en) Virtual human model driving method and device based on characters and computer equipment
CN115018962A (en) Human motion attitude data set generation method based on virtual character model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200619)