CN105046193B - Human action recognition method based on fused sparse representation matrices - Google Patents

Human action recognition method based on fused sparse representation matrices

Info

Publication number
CN105046193B
Authority
CN
China
Prior art keywords
sparse representation
action
matrix
test object
fusion
Prior art date
Legal status
Expired - Fee Related
Application number
CN201510306471.XA
Other languages
Chinese (zh)
Other versions
CN105046193A (en)
Inventor
于宗泽
方勇
李兆元
余鸿文
陶红波
Current Assignee
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN201510306471.XA
Publication of CN105046193A
Application granted
Publication of CN105046193B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/513: Sparse representations

Abstract

The invention discloses a human action recognition method based on fused sparse representation matrices, comprising the steps of collecting data and building the action matrix of each test object, building an over-complete dictionary, solving the sparse representation matrix of each test object, solving a first fused sparse representation matrix, solving a second fused sparse representation matrix, and performing action recognition. The method is trained on a large amount of human action data from different subjects. Through two different fusion stages, it first reduces the error of the sparse representation matrices obtained for the same subject during training, and then reduces the differences between subjects. This lowers redundancy, optimizes the complementary and cooperative information among the effective information used in classification, and improves the accuracy of human action recognition.

Description

Human action recognition method based on fused sparse representation matrices
Technical field
The present invention relates to a human action recognition method, and more particularly to a human action recognition method based on fused sparse representation matrices.
Background art
A wireless body area network (Wireless Body Area Network, WBAN) is a short-range wireless communication network centered on the human body. By equipping the body with sensors of different functions, WBAN has become one of the important technical means for acquiring and transmitting personal physiological information, and has found wide application in fields such as daily activity monitoring, patient care, and rehabilitation training for athletes. In particular, human action recognition based on the inertial sensors worn in a WBAN has become a current research focus. Common human action recognition methods include decision trees, artificial neural networks, support vector machines, multi-level classification schemes, and sparse representation classification.

Research on information fusion has made great progress, but there is still no widely accepted definition of information fusion. Across the various definitions, its essence can be understood as making full use of the perception data of multiple sensors or multiple feature attributes: these data are used and combined under appropriate rules, multi-faceted but individually incomplete local environment information is synthesized, possible contradictions between information sources are eliminated, and a description or interpretation of the perceived object is obtained. Information fusion is thus essentially an information processing process, carried out at the pixel level, the feature level, or the decision level. Its core is the coordinated optimization and comprehensive processing of sensor data. Applying information fusion to human action recognition can effectively reduce the redundancy among the data of multiple inertial sensors and optimize the cooperative and complementary information, which benefits the recognition rate of human actions.
Summary of the invention
The purpose of the present invention is to provide a human action recognition method based on fused sparse representation matrices.
To achieve this goal, the present invention adopts the following technical scheme:
A human action recognition method based on fused sparse representation matrices, comprising the following steps:
Step 1: Select K test objects. Each test object wears L sensors; all sensors collect the same kinds of signals, including acceleration signals and gyroscope signals. Each test object performs T different action types and repeats each action N times, and each sensor takes h samples during each repetition of each action. Generate the action matrix $H_p \in \mathbb{R}^{U \times W}$, $p = 1, 2, \ldots, K$, of each test object:

$H_p = [H_{p,1}, H_{p,2}, \ldots, H_{p,T}] \in \mathbb{R}^{U \times W}$ (1)

where $U = 5hL$, $W = NT$, and the action submatrix $H_{p,q}$ of the q-th action of the p-th person is:

$H_{p,q} = [v_{p,q,1}, v_{p,q,2}, \ldots, v_{p,q,N}] \in \mathbb{R}^{U \times N}$ (2)

where the action vector $v_{p,q,n}$ of the n-th repetition of the q-th action by the p-th person is:

$v_{p,q,n} = (\beta_{p,q,n,1}^T, \beta_{p,q,n,2}^T, \ldots, \beta_{p,q,n,L}^T)^T \in \mathbb{R}^U$ (3)

where the action subvector $\beta_{p,q,n,j}$, $j = 1, 2, \ldots, L$, corresponding to the j-th sensor during the n-th repetition of the q-th action by the p-th person is:

$\beta_{p,q,n,j} = (a_{p,q,n,j}(1)^T, a_{p,q,n,j}(2)^T, \ldots, a_{p,q,n,j}(h)^T)^T \in \mathbb{R}^{5h}$ (4)

where the sensor vector $a_{p,q,n,j}(t)$ acquired at time t by the j-th sensor during the n-th repetition of the q-th action by the p-th person is:

$a_{p,q,n,j}(t) = (x_{p,q,n,j}(t), y_{p,q,n,j}(t), z_{p,q,n,j}(t), \theta_{p,q,n,j}(t), \rho_{p,q,n,j}(t))^T \in \mathbb{R}^5$ (5)

where $x_{p,q,n,j}(t)$, $y_{p,q,n,j}(t)$, $z_{p,q,n,j}(t)$ are the acceleration signals in the X, Y and Z directions collected at time t by the j-th sensor during the n-th repetition of the q-th action by the p-th person, and $\theta_{p,q,n,j}(t)$, $\rho_{p,q,n,j}(t)$ are the angular velocity signals in the X and Y directions of the two-axis gyroscope collected at time t by the same sensor;
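To make the data layout concrete, the following Python sketch assembles the nested structures of equations (3) to (5) into the action matrices $H_p$. Everything in it, the dimension values, the random stand-in recordings and the helper names, is a hypothetical illustration rather than part of the patent:

import numpy as np

# All names and sizes below are illustrative assumptions, not values from the patent.
K, L, T, N, h = 4, 3, 5, 6, 20   # test objects, sensors, action types, repetitions, samples
rng = np.random.default_rng(0)

def action_subvector(samples):
    # samples: (h, 5) array of per-instant readings a(t) = (x, y, z, theta, rho);
    # stacking them time-major gives beta in R^{5h}, as in equation (4).
    return samples.reshape(-1)

def action_vector(rep_samples):
    # rep_samples: (L, h, 5); concatenating the L sensor subvectors gives
    # v in R^{5hL} = R^U, as in equation (3).
    return np.concatenate([action_subvector(s) for s in rep_samples])

def action_matrix(subject_samples):
    # subject_samples: (T, N, L, h, 5); columns are ordered action-major,
    # giving H_p = [H_{p,1}, ..., H_{p,T}] in R^{U x W} with W = NT (equations (1)-(2)).
    cols = [action_vector(subject_samples[q, n]) for q in range(T) for n in range(N)]
    return np.stack(cols, axis=1)

# Random stand-in for real accelerometer/gyroscope recordings.
data = rng.standard_normal((K, T, N, L, h, 5))
H = [action_matrix(data[p]) for p in range(K)]   # H[p] plays the role of H_p
print(H[0].shape)                                # (5*h*L, N*T) = (300, 30)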
Step 2: Generate the over-complete dictionary matrix A from the action submatrices $H_{p,q}$ of the various actions of each test object:

$A = [H_{1,1}, H_{2,1}, \ldots, H_{K,1}, H_{1,2}, H_{2,2}, \ldots, H_{K,2}, \ldots, H_{1,T}, H_{2,T}, \ldots, H_{K,T}] \in \mathbb{R}^{U \times Q}$ (6)

where $Q = NTK$;
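Continuing the same hypothetical sketch, the dictionary of equation (6) is obtained by concatenating the action submatrices action-first and subject-second:

def dictionary(H, T, N):
    # Equation (6): A = [H_{1,1}, ..., H_{K,1}, H_{1,2}, ..., H_{K,T}] in R^{U x Q},
    # with Q = NTK; each slice H[p][:, q*N:(q+1)*N] is the submatrix H_{p,q}.
    blocks = []
    for q in range(T):               # action type
        for p in range(len(H)):      # test object
            blocks.append(H[p][:, q * N:(q + 1) * N])
    return np.concatenate(blocks, axis=1)

A = dictionary(H, T, N)
print(A.shape)   # (U, N*T*K) = (300, 120)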
Step 3: From the over-complete dictionary matrix A, use the method of minimizing the L1 norm under a quadratic constraint to generate, for the action matrix $H_p \in \mathbb{R}^{U \times W}$ of each test object, $p = 1, 2, \ldots, K$, the corresponding sparse representation matrix $B_p \in \mathbb{R}^{Q \times W}$, $p = 1, 2, \ldots, K$;
Step 4: From the sparse representation matrix $B_p \in \mathbb{R}^{Q \times W}$, $p = 1, 2, \ldots, K$, of each test object, extract by the cyclic maximum L1 norm method the first fused sparse representation matrix $C_p \in \mathbb{R}^{Q \times T}$, $p = 1, 2, \ldots, K$, of each test object; the number of column vectors of the first fused sparse representation matrix $C_p \in \mathbb{R}^{Q \times T}$ equals the number of action types T;
Step 5: From the first fused sparse representation matrices $C_p \in \mathbb{R}^{Q \times T}$, $p = 1, 2, \ldots, K$, of all test objects, obtain by the cyclic maximum L1 norm method the second fused sparse representation matrix $F \in \mathbb{R}^{Q \times T}$; the column vectors $f_q$, $q = 1, 2, \ldots, T$, of the second fused sparse representation matrix $F \in \mathbb{R}^{Q \times T}$ correspond one-to-one to the action types;
Step 6: Perform human action recognition: compute the linear weighted difference between the action vector $\gamma_{test}$ to be recognized and each column vector of the second fused sparse representation matrix F, and select the action type corresponding to the column vector with the smallest linear weighted difference as the final recognition result.
In one embodiment, each test object wears 5 sensors; the signals collected by all sensors are of the same kinds, including acceleration signals and gyroscope signals; each test object performs 13 different action types, and each action is performed 5 times.
The method of minimizing the L1 norm under a quadratic constraint in step 3 comprises the following steps:

Step 3-1: Solve for the sparse representation vector $\gamma_{p,w}$ corresponding to each column vector $h_{p,w}$, $w = 1, 2, \ldots, W$, of the action matrix $H_p \in \mathbb{R}^{U \times W}$ of the p-th test object, $p = 1, 2, \ldots, K$:

$\gamma_{p,w} = \arg\min \|\gamma\|_1$ subject to $\|h_{p,w} - A\gamma\|_2 \le \varepsilon$ (7)

where $\varepsilon$ is the observation noise bound.

Step 3-2: Form the sparse representation matrix $B_p$, $p = 1, 2, \ldots, K$, from the sparse representation vectors corresponding to the column vectors of the action matrix $H_p \in \mathbb{R}^{U \times W}$ of the p-th test object:

$B_p = [\gamma_{p,1}, \gamma_{p,2}, \ldots, \gamma_{p,W}] \in \mathbb{R}^{Q \times W}$ (8)
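The constrained program (7) can be solved by standard L1 solvers. The sketch below instead runs ISTA on the Lagrangian (lasso) relaxation as a stand-in; the solver choice, the regularization weight lam and the iteration count are assumptions, not the patent's prescription:

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(A, y, lam=0.1, iters=300):
    # ISTA for min 0.5*||y - A g||_2^2 + lam*||g||_1, an unconstrained
    # relaxation standing in for the constrained program (7).
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    g = np.zeros(A.shape[1])
    for _ in range(iters):
        g = soft_threshold(g + step * A.T @ (y - A @ g), step * lam)
    return g

# Equation (8): B_p collects the sparse codes of the columns of H_p.
B = [np.stack([sparse_code(A, H[p][:, w]) for w in range(H[p].shape[1])], axis=1)
     for p in range(K)]
print(B[0].shape)   # (Q, N*T) = (120, 30)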
Step 4 consists of the following steps:

Step 4-1: Divide the columns of the sparse representation matrix $B_p$, $p = 1, 2, \ldots, K$, of each test object, in column order, into T equal groups matching the number of action types, namely the first through T-th sparse vector groups, with N vectors per group:

$(\gamma_{p,1}, \ldots, \gamma_{p,N}), (\gamma_{p,N+1}, \ldots, \gamma_{p,2N}), \ldots, (\gamma_{p,(T-1)N+1}, \ldots, \gamma_{p,TN})$;

Step 4-2: Find the optimal column vector of each sparse vector group: let $A_{q,n} = \|\gamma_{p,(q-1)N+n}\|_1$ and $d_{p,q} = \gamma_{p,(q-1)N+n^*}$ with $n^* = \arg\max_n A_{q,n}$, $q = 1, 2, \ldots, T$, $n = 1, 2, \ldots, N$;

Step 4-3: Form, in order, the first fused sparse representation matrix $C_p = [d_{p,1}, d_{p,2}, \ldots, d_{p,T}] \in \mathbb{R}^{Q \times T}$, $p = 1, 2, \ldots, K$, from the optimal column vectors of the sparse vector groups.
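Continuing the sketch, a minimal implementation of the group-wise maximum L1 norm selection of steps 4-1 to 4-3:

def fuse_max_l1(M, group_size):
    # Steps 4-1/4-2: split the columns into consecutive groups of `group_size`
    # and, within each group, keep the column with the largest L1 norm.
    groups = M.shape[1] // group_size
    picked = []
    for g in range(groups):
        block = M[:, g * group_size:(g + 1) * group_size]
        picked.append(block[:, np.argmax(np.abs(block).sum(axis=0))])
    return np.stack(picked, axis=1)

# Step 4-3: C_p has one fused column d_{p,q} per action type.
C = [fuse_max_l1(B[p], N) for p in range(K)]
print(C[0].shape)   # (Q, T) = (120, 5)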
Step 5 consists of the following steps:

Step 5-1: Divide the columns of the first fused sparse representation matrices $C_p \in \mathbb{R}^{Q \times T}$, $p = 1, 2, \ldots, K$, of all test objects, in column order, into T equal groups matching the number of action types, namely the (T+1)-th through 2T-th sparse vector groups, with K vectors per group:

$(d_{1,1}, d_{2,1}, \ldots, d_{K,1}), (d_{1,2}, d_{2,2}, \ldots, d_{K,2}), \ldots, (d_{1,T}, d_{2,T}, \ldots, d_{K,T})$;

Step 5-2: Find the optimal column vector of each of the (T+1)-th through 2T-th sparse vector groups: let $A_{q,p} = \|d_{p,q}\|_1$ and $f_q = d_{p^*,q}$ with $p^* = \arg\max_p A_{q,p}$, where $p = 1, 2, \ldots, K$, $q = 1, 2, \ldots, T$;

Step 5-3: Form, in order, the second fused sparse representation matrix $F = [f_1, f_2, \ldots, f_T] \in \mathbb{R}^{Q \times T}$ from the optimal column vectors of the (T+1)-th through 2T-th sparse vector groups.
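The second fusion applies the same selection rule across test objects rather than across repetitions; continuing the sketch:

# Step 5: for each action type q, pick among the subject-level columns d_{p,q},
# p = 1..K, the one with the largest L1 norm, giving F = [f_1, ..., f_T].
F = np.stack(
    [max((C[p][:, q] for p in range(K)), key=lambda d: np.abs(d).sum())
     for q in range(T)],
    axis=1)
print(F.shape)   # (Q, T) = (120, 5)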
The linear weighted difference $r_q(\gamma_{test})$ in step 6 is computed as:

$r_q(\gamma_{test}) = \|\gamma_{test} - f_q\|_2$, $q = 1, 2, \ldots, T$ (9)
The beneficial effects of the present invention are as follows:

The present invention trains on a large amount of human action data from different subjects. Through two different fusion stages, it first reduces the error of the sparse representation matrices obtained for the same subject during training, and then reduces the differences between subjects. This lowers redundancy, optimizes the complementary and cooperative information among the effective information used in classification, and improves the accuracy of human action recognition.
Description of the drawings
Fig. 1 is the flow chart of the present invention.
Specific embodiment
For a better understanding of the technical solution of the present invention, the human action recognition method based on fused sparse representation matrices is described in further detail below with reference to the accompanying drawing. The system block diagram is shown in Fig. 1, and the specific implementation steps are as follows:
The specific embodiment carries out steps 1 through 6 exactly as set out above. In this embodiment, each test object wears L = 5 sensors, performs T = 13 different action types and repeats each action N = 5 times, and the observation noise bound ε in equation (7) is taken as 0.01.
In conclusion the present invention provides a kind of human motion recognition method based on fusion rarefaction representation matrix, pass through Fusion methods different twice reduces the error of rarefaction representation matrix that same target in training process obtains, then first Reduce the difference between different objects, reduce redundancy, the effective information during integrated classification makes complementary information and association It is optimized with information, improves the accuracy rate of human action identification.

Claims (6)

1. A human action recognition method based on fused sparse representation matrices, characterized by comprising the following steps:

Step 1: Select K test objects. Each test object wears L sensors; all sensors collect the same kinds of signals, including acceleration signals and gyroscope signals. Each test object performs T different action types and repeats each action N times, and each sensor takes h samples during each repetition of each action. Generate the action matrix $H_p \in \mathbb{R}^{U \times W}$, $p = 1, 2, \ldots, K$, of each test object:

$H_p = [H_{p,1}, H_{p,2}, \ldots, H_{p,T}] \in \mathbb{R}^{U \times W}$ (1)

where $U = 5hL$, $W = NT$, and the action submatrix $H_{p,q}$ of the q-th action of the p-th person is:

$H_{p,q} = [v_{p,q,1}, v_{p,q,2}, \ldots, v_{p,q,N}] \in \mathbb{R}^{U \times N}$ (2)

where the action vector $v_{p,q,n}$ of the n-th repetition of the q-th action by the p-th person is:

$v_{p,q,n} = (\beta_{p,q,n,1}^T, \beta_{p,q,n,2}^T, \ldots, \beta_{p,q,n,L}^T)^T \in \mathbb{R}^U$ (3)

where the action subvector $\beta_{p,q,n,j}$, $j = 1, 2, \ldots, L$, corresponding to the j-th sensor during the n-th repetition of the q-th action by the p-th person is:

$\beta_{p,q,n,j} = (a_{p,q,n,j}(1)^T, a_{p,q,n,j}(2)^T, \ldots, a_{p,q,n,j}(h)^T)^T \in \mathbb{R}^{5h}$ (4)

where the sensor vector $a_{p,q,n,j}(t)$ acquired at time t by the j-th sensor during the n-th repetition of the q-th action by the p-th person is:

$a_{p,q,n,j}(t) = (x_{p,q,n,j}(t), y_{p,q,n,j}(t), z_{p,q,n,j}(t), \theta_{p,q,n,j}(t), \rho_{p,q,n,j}(t))^T \in \mathbb{R}^5$ (5)

where $x_{p,q,n,j}(t)$, $y_{p,q,n,j}(t)$, $z_{p,q,n,j}(t)$ are the acceleration signals in the X, Y and Z directions collected at time t by the j-th sensor during the n-th repetition of the q-th action by the p-th person, and $\theta_{p,q,n,j}(t)$, $\rho_{p,q,n,j}(t)$ are the angular velocity signals in the X and Y directions of the two-axis gyroscope collected at time t by the same sensor;

Step 2: Generate the over-complete dictionary matrix A from the action submatrices $H_{p,q}$ of the various actions of each test object:

$A = [H_{1,1}, H_{2,1}, \ldots, H_{K,1}, H_{1,2}, H_{2,2}, \ldots, H_{K,2}, \ldots, H_{1,T}, H_{2,T}, \ldots, H_{K,T}] \in \mathbb{R}^{U \times Q}$ (6)

where $Q = NTK$;

Step 3: From the over-complete dictionary matrix A, use the method of minimizing the L1 norm under a quadratic constraint to generate, for the action matrix $H_p \in \mathbb{R}^{U \times W}$ of each test object, $p = 1, 2, \ldots, K$, the corresponding sparse representation matrix $B_p \in \mathbb{R}^{Q \times W}$, $p = 1, 2, \ldots, K$;

Step 4: From the sparse representation matrix $B_p \in \mathbb{R}^{Q \times W}$, $p = 1, 2, \ldots, K$, of each test object, extract by the cyclic maximum L1 norm method the first fused sparse representation matrix $C_p \in \mathbb{R}^{Q \times T}$, $p = 1, 2, \ldots, K$, of each test object; the number of column vectors of the first fused sparse representation matrix $C_p \in \mathbb{R}^{Q \times T}$ equals the number of action types T;

Step 5: From the first fused sparse representation matrices $C_p \in \mathbb{R}^{Q \times T}$, $p = 1, 2, \ldots, K$, of all test objects, obtain by the cyclic maximum L1 norm method the second fused sparse representation matrix $F \in \mathbb{R}^{Q \times T}$; the column vectors $f_q$, $q = 1, 2, \ldots, T$, of the second fused sparse representation matrix $F \in \mathbb{R}^{Q \times T}$ correspond one-to-one to the action types;

Step 6: Perform human action recognition: compute the linear weighted difference between the action vector $\gamma_{test}$ to be recognized and each column vector of the second fused sparse representation matrix F, and select the action type corresponding to the column vector with the smallest linear weighted difference as the final recognition result.
2. The human action recognition method based on fused sparse representation matrices according to claim 1, characterized in that: each test object wears 5 sensors; the signals collected by all sensors are of the same kinds, including acceleration signals and gyroscope signals; each test object performs 13 different action types, and each action is performed 5 times.
3. The human action recognition method based on fused sparse representation matrices according to claim 1, characterized in that the method of minimizing the L1 norm under a quadratic constraint in step 3 comprises the following steps:

Step 3-1: Solve for the sparse representation vector $\gamma_{p,w}$ corresponding to each column vector $h_{p,w}$, $w = 1, 2, \ldots, W$, of the action matrix $H_p \in \mathbb{R}^{U \times W}$ of the p-th test object, $p = 1, 2, \ldots, K$:

$\gamma_{p,w} = \arg\min \|\gamma\|_1$ subject to $\|h_{p,w} - A\gamma\|_2 \le \varepsilon$ (7)

where $\varepsilon$ is the observation noise bound;

Step 3-2: Form the sparse representation matrix $B_p$, $p = 1, 2, \ldots, K$, from the sparse representation vectors corresponding to the column vectors of the action matrix $H_p \in \mathbb{R}^{U \times W}$ of the p-th test object:

$B_p = [\gamma_{p,1}, \gamma_{p,2}, \ldots, \gamma_{p,W}] \in \mathbb{R}^{Q \times W}$ (8).
4. The human action recognition method based on fused sparse representation matrices according to claim 1, characterized in that step 4 consists of the following steps:

Step 4-1: Divide the columns of the sparse representation matrix $B_p$, $p = 1, 2, \ldots, K$, of each test object, in column order, into T equal groups matching the number of action types, namely the first through T-th sparse vector groups, with N vectors per group;

Step 4-2: Find the optimal column vector of each sparse vector group: let $A_{q,n} = \|\gamma_{p,(q-1)N+n}\|_1$ and $d_{p,q} = \gamma_{p,(q-1)N+n^*}$ with $n^* = \arg\max_n A_{q,n}$, $q = 1, 2, \ldots, T$, $n = 1, 2, \ldots, N$;

Step 4-3: Form, in order, the first fused sparse representation matrix $C_p = [d_{p,1}, d_{p,2}, \ldots, d_{p,T}] \in \mathbb{R}^{Q \times T}$, $p = 1, 2, \ldots, K$, from the optimal column vectors of the sparse vector groups.
5. The human action recognition method based on fused sparse representation matrices according to claim 1, characterized in that step 5 consists of the following steps:

Step 5-1: Divide the columns of the first fused sparse representation matrices $C_p \in \mathbb{R}^{Q \times T}$, $p = 1, 2, \ldots, K$, of all test objects, in column order, into T equal groups matching the number of action types, namely the (T+1)-th through 2T-th sparse vector groups, with K vectors per group: $(d_{1,1}, d_{2,1}, \ldots, d_{K,1})$, $(d_{1,2}, d_{2,2}, \ldots, d_{K,2})$, ..., $(d_{1,T}, d_{2,T}, \ldots, d_{K,T})$;

Step 5-2: Find the optimal column vector of each of the (T+1)-th through 2T-th sparse vector groups: let $A_{q,p} = \|d_{p,q}\|_1$ and $f_q = d_{p^*,q}$ with $p^* = \arg\max_p A_{q,p}$, where $p = 1, 2, \ldots, K$, $q = 1, 2, \ldots, T$;

Step 5-3: Form, in order, the second fused sparse representation matrix $F = [f_1, f_2, \ldots, f_T] \in \mathbb{R}^{Q \times T}$ from the optimal column vectors of the (T+1)-th through 2T-th sparse vector groups.
6. The human action recognition method based on fused sparse representation matrices according to claim 1, characterized in that the linear weighted difference $r_q(\gamma_{test})$ in step 6 is computed as:

$r_q(\gamma_{test}) = \|\gamma_{test} - f_q\|_2$, $q = 1, 2, \ldots, T$ (9).
CN201510306471.XA 2015-06-05 2015-06-05 Human action recognition method based on fused sparse representation matrices Expired - Fee Related CN105046193B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510306471.XA CN105046193B (en) 2015-06-05 2015-06-05 Human action recognition method based on fused sparse representation matrices


Publications (2)

Publication Number Publication Date
CN105046193A CN105046193A (en) 2015-11-11
CN105046193B (en) 2018-07-10

Family

ID=54452722


Country Status (1)

Country Link
CN (1) CN105046193B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109362220B (en) * 2016-03-14 2022-06-17 澳大利亚国家Ict有限公司 Energy harvesting for sensor systems
CN108875445B (en) * 2017-05-08 2020-08-25 深圳荆虹科技有限公司 Pedestrian re-identification method and device
US10740659B2 (en) * 2017-12-14 2020-08-11 International Business Machines Corporation Fusing sparse kernels to approximate a full kernel of a convolutional neural network


Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9524424B2 (en) * 2011-09-01 2016-12-20 Care Innovations, Llc Calculation of minimum ground clearance using body worn sensors

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
EP0666544A1 (en) * 1994-02-03 1995-08-09 Canon Kabushiki Kaisha Gesture input method and apparatus
CN103440471A (en) * 2013-05-05 2013-12-11 西安电子科技大学 Human body action identifying method based on lower-rank representation
CN104268577A (en) * 2014-06-27 2015-01-07 大连理工大学 Human body behavior identification method based on inertial sensor
CN104298977A (en) * 2014-10-24 2015-01-21 西安电子科技大学 Low-order representing human body behavior identification method based on irrelevance constraint

Non-Patent Citations (1)

Title
A human action recognition method based on compressed sensing in body area networks; Xiao Ling et al.; Journal of Electronics &amp; Information Technology; 2013-01-15; pp. 119-125 *



Legal Events

C06 / PB01: Publication
C10 / SE01: Entry into substantive examination (entry into force of request for substantive examination)
GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee

Granted publication date: 2018-07-10
Termination date: 2021-06-05