CN110472497A - A motion feature representation method fusing rotation amounts - Google Patents

A motion feature representation method fusing rotation amounts Download PDF

Info

Publication number
CN110472497A
CN110472497A
Authority
CN
China
Prior art keywords
rotation amount
motion characteristic
skeleton
representation method
characteristic representation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910610766.4A
Other languages
Chinese (zh)
Inventor
谷林
王婧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Polytechnic University
Original Assignee
Xian Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Polytechnic University
Priority to CN201910610766.4A
Publication of CN110472497A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques
    • G06F 18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G06V 40/23: Recognition of whole body movements, e.g. for sport training

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a motion feature representation method that fuses rotation amounts, carried out according to the following steps. Step 1: acquire human skeleton information, organized as a human-body topology rooted at the SpineBase node, with the infrared depth sensor of a Microsoft Kinect 2.0. Step 2: from the skeleton joint coordinates of step 1, compute the human posture matrix group. Step 3: from the topology information of step 2 and the coordinates of the skeleton joints, compute by quaternions the rotation amount of each child skeletal joint relative to its parent skeletal joint. Step 4: combine the rotation amounts of step 3 with the posture matrix group of step 2 to establish a motion feature representation that fuses rotation amounts. The method of the present invention more effectively avoids interference between similar action classes and thereby improves action-recognition accuracy.

Description

A motion feature representation method fusing rotation amounts
Technical field
The invention belongs to the technical field of action recognition and in particular relates to a motion feature representation method that fuses rotation amounts.
Background technique
The development of computer image processing has raised the requirements on action-recognition precision. Action-recognition technology is widely used in rehabilitation training, smart homes, somatosensory games and many other areas. With the rapid development of computer vision, more and more researchers are devoting themselves to human action recognition, for which the extraction and representation of human motion features are both the premise and the key difficulty.
Action recognition is of great significance in human-computer interaction: it lets a computer learn and understand human behaviour and movement, improves the user experience of human-computer interaction, and opens up new fields for computer vision.
Motion features based on skeleton position information form one of the more mature families of motion feature representations. Compared with traditional methods that extract features from video images, skeleton-based representations are more independent of viewing angle and of complex backgrounds.
Current skeleton-based motion features mainly include human-action feature operators, statistical-histogram operators, covariance operators, rotation matrices, feature matrices, feature vectors, three-dimensional coordinate-matrix means, covariance matrices and skeleton-point motion trajectories.
In summary, in both feature extraction and feature representation, current action-recognition methods suffer from mutual interference between similar action classes and from limited recognition accuracy and speed.
Summary of the invention
The object of the present invention is to provide a motion feature representation method that fuses rotation amounts, solving the problems of mutual interference between similar action classes and of limited recognition accuracy and speed that exist in the feature extraction and feature representation of current action-recognition methods.
The technical scheme adopted by the invention is that
The technical scheme adopted by the invention is as follows.
A motion feature representation method fusing rotation amounts, carried out according to the following steps:
Step 1: motion feature extraction.
Acquire human skeleton information with the infrared depth sensor of a Microsoft Kinect 2.0. The skeleton comprises multiple skeleton points with three-dimensional spatial coordinates, organized as a human-body topology rooted at the SpineBase node.
Step 2: representation of the global motion feature.
From the skeleton joint coordinates of step 1, compute the human posture matrix group. The posture is computed as:
R_F = (R_ij)_{M×M} (1)
where R_F is the human posture matrix, M is the number of skeleton nodes, and R_ij denotes the relative positional relationship from the i-th skeleton point to the j-th skeleton point.
Step 3: representation of the local motion feature.
From the human-body topology information of step 2 and the coordinates of the skeleton joints, compute by quaternions the rotation amount of each child skeletal joint relative to its parent skeletal joint.
Step 4: the motion feature representation fusing rotation amounts.
Combine the rotation amounts of step 3 with the posture matrix group of step 2 to establish the motion feature representation that fuses rotation amounts.
The invention is further characterized in that:
The skeleton information of step 1 comprises 25 skeleton points.
In step 1, the data format of a three-dimensional position coordinate is float x, float y, float z; the data format of a quaternion is float x, float y, float z, float w, where x, y and z are the horizontal, vertical and depth coordinates of the skeleton point and w is its Euler angle.
In step 2, R_ij is obtained by computing the unit vector pointing from the i-th skeleton point to the j-th skeleton point; R_F is the matrix composed of the relative positional relationships of all skeleton points of one action at frame F.
In step 3, quaternions are combined with the human-body topology to compute the rotation amount between the Cartesian coordinate systems of a parent skeletal joint and its child skeletal joint, representing the self-rotation of the limbs.
In step 4, in the fused motion feature the rotation amounts express the action details and the human posture matrix expresses the overall action.
In step 4, the motion feature representation fusing rotation amounts is as follows.
Define a coordinate system with SpineBase as the origin, the Y axis pointing straight up, the Z axis pointing toward the sensor and the X axis pointing to the person's left. The human action posture S_F in this three-dimensional space is parameterized as:
S_F = [R_F, G_i^F] (2)
where R_F is the matrix of relative positional relationships between the joints and G_i^F is the rotation amount of the i-th skeleton point at frame F.
In step 4, the fused motion feature representation is used, via a BP neural network, as the neural network's input to establish a posture model.
The invention has the following beneficial effect: on the basis of the acquired three-dimensional positions of the human skeleton points, the method fuses a global feature representation with a local feature representation of the action, using the conventional human posture matrix as the global feature and the rotation amounts of the skeleton points as the local feature. This avoids interference between similar action classes more effectively and improves action-recognition accuracy.
Detailed description of the invention
Fig. 1 is the skeleton-point distribution map used by the motion feature representation method of the present invention;
Fig. 2 is the human-body topology information used by the method;
Fig. 3 is a schematic diagram of the elbow-joint Cartesian coordinate system at its initial position and after the joint has rotated 180°;
Fig. 4 compares the network performance of network models N_1 and N_2;
Fig. 5 compares the accuracy of network models N_1 and N_2.
Specific embodiment
The following describes the present invention in detail with reference to the accompanying drawings and specific embodiments.
A motion feature representation method fusing rotation amounts is carried out according to the following steps:
Step 1: motion feature extraction.
Acquire human skeleton information with the infrared depth sensor of a Microsoft Kinect 2.0. The skeleton comprises multiple skeleton points with three-dimensional spatial coordinates, organized as a human-body topology rooted at the SpineBase node.
Step 2: representation of the global motion feature.
From the skeleton joint coordinates of step 1, compute the human posture matrix group. The posture is computed as:
R_F = (R_ij)_{M×M} (1)
where R_F is the human posture matrix, M is the number of skeleton nodes, and R_ij denotes the relative positional relationship from the i-th skeleton point to the j-th skeleton point.
Step 3: representation of the local motion feature.
From the human-body topology information of step 2 and the coordinates of the skeleton joints, compute by quaternions the rotation amount of each child skeletal joint relative to its parent skeletal joint.
Step 4: the motion feature representation fusing rotation amounts.
Combine the rotation amounts of step 3 with the posture matrix group of step 2 to establish the motion feature representation fusing rotation amounts.
The skeleton information of step 1 comprises 25 skeleton points.
In step 1, the data format of a three-dimensional position coordinate is float x, float y, float z; the data format of a quaternion is float x, float y, float z, float w, where x, y and z are the horizontal, vertical and depth coordinates of the skeleton point and w is its Euler angle.
In step 2, R_ij is obtained by computing the unit vector pointing from the i-th skeleton point to the j-th skeleton point; R_F is the matrix composed of the relative positional relationships of all skeleton points of one action at frame F.
In step 3, quaternions are combined with the human-body topology to compute the rotation amount between the Cartesian coordinate systems on the parent bone and the child bone, representing the self-rotation of the limbs.
In step 4, in the fused motion feature the rotation amounts express the action details and the human posture matrix expresses the overall action.
In step 4, the motion feature representation fusing rotation amounts is as follows.
Define a coordinate system with SpineBase as the origin, the Y axis pointing straight up, the Z axis pointing toward the sensor and the X axis pointing to the person's left. The human action posture S_F in this three-dimensional space is parameterized as:
S_F = [R_F, G_i^F] (2)
where R_F is the matrix of relative positional relationships between the joints and G_i^F is the rotation amount of the i-th skeleton point at frame F.
In step 4, the fused motion feature representation is used, via a BP neural network, as the neural network's input to establish a posture model.
The motion feature representation method of the present invention is broadly divided into two parts, motion feature extraction and motion feature representation; the representation itself comprises a global feature representation and a local feature representation, which are then fused to obtain the motion feature representation fusing rotation amounts. Specifically:
First part: motion feature extraction.
The human skeleton information is acquired with the infrared depth sensor of a Microsoft Kinect 2.0. The skeleton comprises 25 skeleton points with three-dimensional spatial coordinates, as shown in Fig. 1, organized as a human-body topology rooted at the SpineBase node, as shown in Fig. 2. The data format of a three-dimensional position coordinate is (float x, float y, float z); the data format of a quaternion is (float x, float y, float z, float w), where x, y and z are the horizontal, vertical and depth coordinates of the skeleton point and w is its Euler angle.
Second part: motion feature representation, comprising a global feature representation and a local feature representation.
Global feature representation:
From the collected skeleton joint coordinates, the human posture matrix group is computed as the representation of the global feature of the action. The posture is computed as follows.
Define a skeleton with M nodes; the human posture can then be expressed as the matrix
R_F = (R_ij)_{M×M} (1)
where R_ij denotes the relative positional relationship from the i-th skeleton point to the j-th skeleton point, obtained by computing the unit vector pointing from the i-th skeleton point to the j-th skeleton point, and R_F is the matrix composed of the relative positional relationships of all skeleton points of one action at frame F.
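As a concrete illustration, the pairwise relative-position matrix of equation (1) can be computed in a few lines of numpy. This is a sketch under the assumption that a frame is given as an (M, 3) array of joint coordinates; the function name is illustrative, not from the patent.

```python
import numpy as np

def posture_matrix(joints):
    """Pairwise relative-position matrix R_F for one frame.

    joints: (M, 3) array of 3-D skeleton-point coordinates.
    Returns an (M, M, 3) array whose entry (i, j) is the unit
    vector pointing from skeleton point i to skeleton point j
    (the relative positional relationship R_ij); the diagonal
    is left as zero vectors.
    """
    joints = np.asarray(joints, dtype=float)
    diff = joints[None, :, :] - joints[:, None, :]       # entry (i, j) = point j - point i
    norm = np.linalg.norm(diff, axis=-1, keepdims=True)
    norm[norm == 0] = 1.0                                # avoid dividing by zero on the diagonal
    return diff / norm
```

For the 25-point Kinect skeleton this yields a 25 x 25 grid of unit direction vectors, antisymmetric in the sense that R[i, j] = -R[j, i].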
Local feature representation:
From the human-body topology information and the skeleton joint coordinates, the rotation amount of each child skeletal joint relative to its parent skeletal joint is computed by quaternions, so as to better describe the self-rotation of the limbs during the action and to reduce the interference of inter-class similarity on recognition precision.
Quaternions are combined with the human-body topology to compute the rotation amount between the Cartesian coordinate systems of a parent skeletal joint and its child skeletal joint, representing the self-rotation of the limbs, as shown in Fig. 3. Except for the terminal skeleton points, whose rotation amounts cannot be computed, the remaining 20 skeleton points all yield rotation amounts. The left part of Fig. 3 shows the initial distribution of the bone coordinate systems; the right part shows the distribution after the right-elbow skeletal joint has rotated 180° clockwise. The rotation angle of the right-elbow skeleton point relative to the coordinate system of its parent skeletal joint (the right-shoulder joint) is recorded as the rotation amount of the right-elbow joint.
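The child-relative-to-parent rotation can be sketched with plain quaternion algebra. The (x, y, z, w) component order follows the data format stated in the text; the helper names are illustrative, and the parent-inverse-times-child convention used here is one common choice, assumed rather than taken from the patent.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions stored as (x, y, z, w)."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return np.array([
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
        aw * bw - ax * bx - ay * by - az * bz,
    ])

def relative_rotation(q_parent, q_child):
    """Rotation of a child joint's frame relative to its parent's frame.

    Both inputs are unit quaternions in (x, y, z, w) order; the
    inverse of a unit quaternion is its conjugate, so the relative
    rotation is conj(q_parent) * q_child.
    """
    px, py, pz, pw = q_parent
    conj = np.array([-px, -py, -pz, pw])
    return qmul(conj, q_child)
```

With an identity parent orientation the relative rotation is simply the child's own orientation, and a joint whose orientation equals its parent's has the identity quaternion as its rotation amount.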
The motion feature representation fusing rotation amounts:
The rotation amounts above are combined with the human posture matrix to establish the fused motion feature representation: the rotation amounts express the action details and the posture matrix expresses the overall action. The specific representation is as follows.
Define a coordinate system with SpineBase as the origin, the Y axis pointing straight up, the Z axis pointing toward the sensor and the X axis pointing to the person's left. The human action posture S_F in this three-dimensional space can be parameterized as
S_F = [R_F, G_i^F] (2)
where R_F is the matrix of relative positional relationships between the joints and G_i^F is the rotation amount of the i-th skeleton point at frame F.
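In practice, equation (2) amounts to concatenating the flattened posture matrix with the per-joint rotation amounts into one frame descriptor. A minimal sketch, with illustrative names:

```python
import numpy as np

def fused_feature(R_F, rotations):
    """Fused per-frame feature S_F = [R_F, G_i^F].

    R_F: (M, M, 3) relative-position matrix for frame F.
    rotations: (K, 4) quaternion rotation amounts of the K
    non-terminal skeleton points in frame F.
    Returns a flat vector usable as one neural-network input.
    """
    return np.concatenate([np.ravel(R_F), np.ravel(rotations)])
```

With the 25-joint skeleton and 20 rotation quaternions described above, the fused vector has length 25 * 25 * 3 + 20 * 4 = 1955.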
After obtaining the fused motion feature representation, a BP neural network is used to build a network model that outputs an action-conformity score, establishing a posture model and realizing the evaluation of action postures. The training of a BP neural network mainly comprises two processes: forward propagation of the training data and backward propagation to correct the network parameters. Through repeated training, the weights and thresholds of the network are updated and corrected: the training-set data are propagated forward, layer by layer, to obtain an evaluation output; if the output error exceeds the preset error, the weights and thresholds are updated layer by layer by backward propagation. This process is repeated until the output error reaches the preset error, at which point training terminates.
The network parameters, including the weight scale, step factor, stability coefficient, error threshold, maximum number of iterations and number of hidden-layer nodes, are tuned experimentally to improve the convergence of the network and achieve the best training effect.
On the basis of the motion feature representation of the present invention, the nonlinear function approximation and memory capabilities of the neural network are exploited: the motion features are fed into Matlab for training to obtain a stably converging network model, the motion feature sets acquired in real time are processed, and the conformity of the action is computed, realizing the evaluation of the action posture.
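The forward/backward training loop described above can be sketched as follows. This is a bare-bones gradient-descent BP network for illustration only; the patent's actual setup uses Matlab with momentum, adaptive learning rates and LM optimisation, none of which are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, y, hidden, lr=0.01, goal=1e-4, max_epochs=5000):
    """Minimal BP sketch: one sigmoid hidden layer, trained by plain
    gradient descent until the mean squared error drops below the
    preset error threshold `goal` or the epoch budget runs out."""
    n_in, n_out = X.shape[1], y.shape[1]
    W1 = rng.normal(0, 0.1, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, n_out)); b2 = np.zeros(n_out)
    mse = np.inf
    for _ in range(max_epochs):
        h = sigmoid(X @ W1 + b1)            # forward propagation
        out = sigmoid(h @ W2 + b2)
        err = out - y
        mse = np.mean(err ** 2)
        if mse < goal:                      # preset error reached: stop learning
            break
        # backward propagation: chain rule through both sigmoid layers
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
    return (W1, b1, W2, b2), mse
```

Feeding the fused feature vectors as rows of X, with the conformity scores as y, follows the same pattern.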
A kind of experiment test of motion characteristic representation method for merging rotation amount of the present invention uses GAMING DATASETS-G3D data set is tested, which includes 20 movements of 10 testers.Experiment difference With matrix RFAnd SFInput of the feature vector as neural network, neural network N1Input be RFFeature vector, nerve net Network N2Input be SFFeature vector.Neural network N1And N2Output be movement degree of conformity, network model is established, to score Analyse the accuracy rate of two network models.
The input sample data are normalized. The normalization algorithm used is min-max scaling:
y = (y_max - y_min)(x - x_min)/(x_max - x_min) + y_min
where x is the input, x_min and x_max are the minimum and maximum of the input data, y_min = 0 corresponds to the minimum x_min before normalization, and y_max = 1 corresponds to the maximum x_max before normalization. After normalization the codomain is [0, 1]; the results are anti-normalized afterwards to restore the original value range of the data.
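A sketch of the normalization and anti-normalization steps, assuming the standard min-max mapping with y_min = 0 and y_max = 1 implied by the description (function names are illustrative):

```python
import numpy as np

def minmax_normalize(x):
    """Map x into [0, 1]: y = (x - x_min) / (x_max - x_min)."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min), (x_min, x_max)

def minmax_restore(y, bounds):
    """Anti-normalization: recover the original value range."""
    x_min, x_max = bounds
    return y * (x_max - x_min) + x_min
```

Keeping the (x_min, x_max) bounds alongside the normalized data is what makes the anti-normalization step possible.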
During network training, the optimal parameters of the BP neural network are determined by repeated experiments. A dynamic learning-rate strategy is adopted: the learning rate of the network model is set to 0.01, the learning-rate increase ratio to 1.05, the learning-rate decrease ratio to 0.65, the initial weight change to 0.07, the weight-change increment to 1.2, the minimum weight change to 0.5 and the maximum weight change to 50; the network performs best with a momentum factor of 0.945. The maximum number of failed iterations is set to max_fail = 5: when more than 5 iterations fail during training, learning is considered to have failed and is stopped. The error threshold is set to goal = 0.0001: training ends when the mean squared error of the training result falls below 0.0001. The number of hidden-layer nodes is designed by the formula
N = sqrt(n_i + n_0) + a
where N is the number of hidden-layer nodes, n_i the number of input nodes, n_0 the number of output nodes, and a a constant between 1 and 10.
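The hidden-layer sizing rule (whose formula image is not reproduced in this text) is consistent with the common empirical rule N = sqrt(n_i + n_0) + a for the symbols described; a hypothetical helper:

```python
import math

def hidden_nodes(n_in, n_out, a=1):
    """Empirical hidden-layer size N = sqrt(n_i + n_0) + a, a in 1..10.

    This is the standard rule of thumb matching the symbols in the
    text; the exact formula used by the patent is an assumption.
    """
    return round(math.sqrt(n_in + n_out)) + a
```

For example, a 1955-element fused feature vector with one conformity output and a = 5 gives a hidden layer of 49 nodes.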
The training function and transfer function are selected: the learning function determines the gradients of the weights and thresholds of each layer, and the training function updates and corrects the weights and thresholds during training. The transfer function is the sigmoid function, whose range is [0, 1]. The influence of different training functions on the precision of the network models was tested repeatedly; the experimental results are shown in Tables 1 and 2:
Table 1: influence of different training functions on the precision of network model N_1
Table 2: influence of different training functions on the precision of network model N_2
The experimental results show that the LM (Levenberg-Marquardt) algorithm performs well in both training error and training time and converges well after optimization; it is therefore chosen as the optimization function for training N_1 and N_2.
The performance index of this experiment is the prediction accuracy of the neural-network model. Fig. 4 compares the network performance of models N_1 and N_2, and Fig. 5 compares their accuracy.
The analysis of the experimental results shows that the running time is 4 s in both cases, i.e. the two feature representations do not differ noticeably in efficiency. The accuracy of network model N_1 is 90.981% and that of network model N_2 is 95.624%; the action-conformity accuracy of network model N_2 is higher.
Therefore, the motion feature representation fusing rotation amounts of the present invention can accurately express the essential features of an action, solving the problems of existing representations that fine local features cannot be expressed accurately and that limb self-rotation cannot be described. It allows the network to converge quickly and stably and to produce accurate outputs, so a network model built on the fused representation outputs action-conformity scores with higher accuracy when evaluating action postures.

Claims (8)

1. A motion feature representation method fusing rotation amounts, characterized in that it is carried out according to the following steps:
Step 1: motion feature extraction,
acquiring human skeleton information with the infrared depth sensor of a Microsoft Kinect 2.0, the skeleton information comprising multiple skeleton points with three-dimensional spatial coordinates, organized as a human-body topology rooted at the SpineBase node;
Step 2: representation of the global motion feature,
computing, from the skeleton joint coordinates described in step 1, the human posture matrix group, the posture being computed as:
R_F = (R_ij)_{M×M} (1)
wherein R_F is the human posture matrix, M is the number of skeleton nodes, and R_ij denotes the relative positional relationship from the i-th skeleton point to the j-th skeleton point;
Step 3: representation of the local motion feature,
computing by quaternions, from the human-body topology information of step 2 and the coordinates of the skeleton joints, the rotation amount of each child skeletal joint relative to its parent skeletal joint;
Step 4: the motion feature representation fusing rotation amounts,
combining the rotation amounts of step 3 with the human posture matrix of step 2 to establish the motion feature representation fusing rotation amounts.
2. The motion feature representation method fusing rotation amounts according to claim 1, characterized in that, in step 1, the skeleton information comprises 25 skeleton points.
3. The motion feature representation method fusing rotation amounts according to claim 2, characterized in that, in step 1, the data format of a three-dimensional position coordinate is float x, float y, float z, and the data format of a quaternion is float x, float y, float z, float w, wherein x, y and z are respectively the horizontal, vertical and depth coordinates of the skeleton point and w is its Euler angle.
4. The motion feature representation method fusing rotation amounts according to claim 1, characterized in that, in step 2, R_ij is obtained by computing the unit vector pointing from the i-th skeleton point to the j-th skeleton point, and R_F is the matrix composed of the relative positional relationships of all skeleton points of one action at frame F.
5. The motion feature representation method fusing rotation amounts according to claim 1, characterized in that, in step 3, quaternions are combined with the human-body topology to compute the rotation amount between the Cartesian coordinate systems of a parent bone and its child bone, representing the self-rotation of the limbs.
6. The motion feature representation method fusing rotation amounts according to claim 1, characterized in that, in step 4, in the fused motion feature the rotation amounts express the action details and the human posture matrix expresses the overall action.
7. The motion feature representation method fusing rotation amounts according to claim 6, characterized in that, in step 4, the motion feature representation fusing rotation amounts is:
defining a coordinate system with SpineBase as the origin, the Y axis pointing straight up, the Z axis pointing toward the sensor and the X axis pointing to the person's left, the human action posture S_F in three-dimensional space being parameterized as:
S_F = [R_F, G_i^F] (2)
wherein R_F is the matrix of relative positional relationships between the joints and G_i^F is the rotation amount of the i-th skeleton point at frame F.
8. The motion feature representation method fusing rotation amounts according to claim 1, characterized in that, in step 4, the motion feature representation fusing rotation amounts is used, via a BP neural network, as the input of the neural network to establish a posture model.
CN201910610766.4A 2019-07-08 2019-07-08 A kind of motion characteristic representation method merging rotation amount Pending CN110472497A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910610766.4A CN110472497A (en) 2019-07-08 2019-07-08 A kind of motion characteristic representation method merging rotation amount

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910610766.4A CN110472497A (en) 2019-07-08 2019-07-08 A kind of motion characteristic representation method merging rotation amount

Publications (1)

Publication Number Publication Date
CN110472497A true CN110472497A (en) 2019-11-19

Family

ID=68507103

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910610766.4A Pending CN110472497A (en) 2019-07-08 2019-07-08 A kind of motion characteristic representation method merging rotation amount

Country Status (1)

Country Link
CN (1) CN110472497A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130141431A1 (en) * 2011-12-06 2013-06-06 Elysium Co., Ltd. Method of controlling model, and recording medium therewith
CN103294832A (en) * 2013-06-27 2013-09-11 西安工业大学 Motion capture data retrieval method based on feedback study
JP2016162425A (en) * 2015-03-05 2016-09-05 日本電信電話株式会社 Body posture estimation device, method, and program
CN106066996A (en) * 2016-05-27 2016-11-02 上海理工大学 The local feature method for expressing of human action and in the application of Activity recognition
US20170277167A1 (en) * 2016-03-24 2017-09-28 Seiko Epson Corporation Robot system, robot control device, and robot
CN107229920A (en) * 2017-06-08 2017-10-03 重庆大学 Based on integrating, depth typical time period is regular and Activity recognition method of related amendment
CN107578462A (en) * 2017-09-12 2018-01-12 北京城市系统工程研究中心 A kind of bone animation data processing method based on real time motion capture

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
SHIAN-RU KE et al.: "A Review on Video-Based Human Activity Recognition", 《COMPUTERS》 *
SHUANG GONG et al.: "Movement Analysis and Displacement Estimation of Human Lower Body by Inertial Sensors", 《INTERNATIONAL JOURNAL OF SCIENCE》 *
TAO WEI et al.: "Kinect Skeleton Coordinate Calibration for Remote Physical Training", 《MMEDIA 2014: THE SIXTH INTERNATIONAL CONFERENCES ON ADVANCES IN MULTIMEDIA》 *
WU ZHENZHEN: "3D Human Action Recognition Using Skeleton Models and Grassmann Manifolds", 《Computer Engineering and Applications》 *
LI HONGBO et al.: "Virtual Character Control Method Based on Skeleton Information", 《Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition)》 *
HU XINGCHEN: "Kinect-Based Somatosensory Interactive Robot", 《Information Technology and Image Processing》 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110942007A (en) * 2019-11-21 2020-03-31 北京达佳互联信息技术有限公司 Hand skeleton parameter determination method and device, electronic equipment and storage medium
CN110942007B (en) * 2019-11-21 2024-03-05 北京达佳互联信息技术有限公司 Method and device for determining hand skeleton parameters, electronic equipment and storage medium
CN111341040A (en) * 2020-03-28 2020-06-26 江西财经职业学院 Financial self-service equipment and management system thereof
CN112686976A (en) * 2020-12-31 2021-04-20 咪咕文化科技有限公司 Processing method and device of skeleton animation data and communication equipment
CN113393561A (en) * 2021-05-26 2021-09-14 完美世界(北京)软件科技发展有限公司 Method, device and storage medium for generating limb action expression packet of virtual character
CN113393561B (en) * 2021-05-26 2024-06-21 完美世界(北京)软件科技发展有限公司 Method and device for generating limb action expression package of virtual character and storage medium
CN113850893A (en) * 2021-11-30 2021-12-28 北京健康有益科技有限公司 Skeleton point action data generation method and device, storage medium and electronic equipment
CN113850893B (en) * 2021-11-30 2022-02-25 北京健康有益科技有限公司 Skeleton point action data generation method and device, storage medium and electronic equipment
CN114863325A (en) * 2022-04-19 2022-08-05 上海人工智能创新中心 Motion recognition method, device, equipment and computer readable storage medium
CN114863325B (en) * 2022-04-19 2024-06-07 上海人工智能创新中心 Action recognition method, apparatus, device and computer readable storage medium
CN116310012A (en) * 2023-05-25 2023-06-23 成都索贝数码科技股份有限公司 Video-based three-dimensional digital human gesture driving method, device and system
CN116310012B (en) * 2023-05-25 2023-07-25 成都索贝数码科技股份有限公司 Video-based three-dimensional digital human gesture driving method, device and system

Similar Documents

Publication Publication Date Title
CN110472497A (en) A kind of motion characteristic representation method merging rotation amount
CN107679522B (en) Multi-stream LSTM-based action identification method
CN104123545B (en) A kind of real-time human facial feature extraction and expression recognition method
CN110097639A (en) A kind of 3 D human body Attitude estimation method
CN103035135B (en) Children cognitive system based on augmented reality technology and cognitive method
CN109308450A (en) A kind of face's variation prediction method based on generation confrontation network
CN108345869A (en) Driver's gesture recognition method based on depth image and virtual data
CN108081266A (en) A kind of method for mechanical arm grasping of objects based on deep learning
CN104615983A (en) Behavior identification method based on recurrent neural network and human skeleton movement sequences
CN106650687A (en) Posture correction method based on depth information and skeleton information
CN107150347A (en) Robot perception and understanding method based on man-machine collaboration
CN105512621A (en) Kinect-based badminton motion guidance system
CN105787439A (en) Depth image human body joint positioning method based on convolution nerve network
CN106407889A (en) Video human body interaction motion identification method based on optical flow graph deep learning model
CN109858630A (en) Method and apparatus for intensified learning
CN102184541A (en) Multi-objective optimized human body motion tracking method
CN105536205A (en) Upper limb training system based on monocular video human body action sensing
CN107293175A (en) A kind of locomotive hand signal operation training method based on body-sensing technology
CN106210269A (en) A kind of human action identification system and method based on smart mobile phone
CN105243375A (en) Motion characteristics extraction method and device
CN109815930A (en) A kind of action imitation degree of fitting evaluation method
CN107066979A (en) A kind of human motion recognition method based on depth information and various dimensions convolutional neural networks
CN104678766B (en) A kind of optimal batting acquiring method of configuration of apery mechanical arm flight spheroid operation
Quan Development of computer aided classroom teaching system based on machine learning prediction and artificial intelligence KNN algorithm
Liu et al. Trampoline motion decomposition method based on deep learning image recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191119