CN105930767A - Human body skeleton-based action recognition method - Google Patents


Info

Publication number
CN105930767A
CN105930767A
Authority
CN
China
Prior art keywords
action
characteristic vector
sequence
distance
skeleton
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610211534.8A
Other languages
Chinese (zh)
Other versions
CN105930767B (en)
Inventor
王行
周晓军
李骊
盛赞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Huajie Imi Software Technology Co Ltd
Original Assignee
Nanjing Huajie Imi Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Huajie Imi Software Technology Co Ltd filed Critical Nanjing Huajie Imi Software Technology Co Ltd
Priority to CN201610211534.8A priority Critical patent/CN105930767B/en
Publication of CN105930767A publication Critical patent/CN105930767A/en
Application granted granted Critical
Publication of CN105930767B publication Critical patent/CN105930767B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a human body skeleton-based action recognition method comprising the following basic steps: step 1, a continuous skeleton data frame sequence of a person performing target actions is obtained from a somatosensory device; step 2, main joint point data that characterize the actions are screened out of the skeleton data; step 3, action feature values are extracted from the main joint point data and a feature vector sequence of the action is constructed; step 4, the feature vectors are preprocessed; step 5, the feature vector sequences of an action sample set are saved as an action sample template library; step 6, actions are acquired in real time, and the distance between their feature vector sequence and the feature vector sequence of every action sample in the template library is calculated with a dynamic time warping algorithm; step 7, the actions are classified and recognized. The method has high real-time performance, robustness and accuracy, is simple and reliable to implement, and is suitable for real-time action recognition systems.

Description

An action recognition method based on the human skeleton
Technical field
The invention belongs to the fields of pattern recognition and human-computer interaction, and in particular relates to an action recognition method based on the human skeleton.
Background technology
With the development of computer vision and human-computer interaction technology, more and more interactive systems use human postures or actions as input, and the appearance of Microsoft's Kinect somatosensory technology has made interacting with a computer more natural; controlling a system with postures or actions as input has become more common. However, because of differences between human bodies and the diversity and complexity of action execution, recognizing human actions in real time, stably and accurately remains a very difficult problem.
Chinese patent application CN201110046975.4 discloses "a method for realizing real somatosensory games based on motion decomposition and behavior analysis". The action recognition it involves matches a normalized 3D human skeleton model against the actions in an offline game library, using both single-frame and multi-frame image matching, and then performs action recognition. The method has the following obvious shortcomings. First, it must normalize the 3D human skeleton model: human height is normalized to 1, the body position is adjusted so that the distance from the camera matches the distance set in the action library, and each joint position and limb length is then adjusted. On the one hand this makes the normalization computation very large, because every relevant joint of every frame of skeleton data must be processed; on the other hand, adjusting joint positions and limb lengths according to the distance between the body and the camera raises serious questions of scientific validity and accuracy. Second, the matching information (feature values) it uses is unreasonable: it uses the sum of 2D lengths of the lines connecting joints as one of its matching metrics, and this value cannot serve as an effective action feature. Third, its matching algorithm is too simple: it computes the difference between metrics over a fixed number of frames as the matching degree, and such a measure cannot judge the similarity of actions executed at different speeds, so the method has no practical value.
Chinese patent application CN201310486754 discloses "a human motion recognition method based on Kinect". The method uses the spatial position information of the skeleton joint points of a target body acquired by Kinect, and then identifies the action type by judging whether it meets preset criteria for various human actions. Although the method can recognize some actions and postures, it has the following obvious deficiencies. First, the preset criteria depend on a series of skeleton joint parameters and thresholds that are difficult to set: on the one hand they require rich experience, and on the other they require a large amount of repeated testing. Second, defining and recognizing human actions requires a long programming effort and a large amount of code. Third, the robustness and accuracy of the method are hard to guarantee, particularly when body sizes and action execution speeds differ. Fourth, the method is only applicable to simple actions or gesture recognition; for complicated actions it is helpless.
Chinese patent application CN201310192961 discloses "a real-time human action recognition method based on depth image sequences". The method first extracts target action silhouettes from a target depth image sequence and training action silhouettes from a training depth image set; it then clusters the training silhouettes by posture and labels the clusters with actions; it computes posture features of the target and training silhouettes; it trains Gaussian-mixture-model-based posture models from the training features; it computes transition probabilities between the postures within each clustered action to build an action graph model; and finally it recognizes actions in the target depth image sequence according to the posture features, posture models and action graph model of the target action silhouettes. Although the method can recognize some actions, it also has clear disadvantages. First, it performs recognition on depth maps, so its accuracy depends largely on depth-map quality and is also affected by the external environment. Second, it needs complicated pattern recognition algorithms and a training set requiring a large amount of offline training, so it is relatively difficult to implement. Third, its real-time performance is poor and its recognition results are noticeably delayed.
Chinese patent application CN201410009445 discloses "a 3D Gaussian-space human behavior recognition method based on image depth information". The method first extracts 3D skeleton coordinates from the depth information and normalizes them, filtering out joints that contribute little to behavior recognition; it then builds clusters of joints of interest for each behavior, performs AP clustering on the spatial features of human actions based on a Gaussian distance, obtains a behavior-feature word list and cleans it, and finally builds a conditional-random-field recognition model for human behavior to classify the behaviors. The method has the following shortcomings. First, the 3D skeleton coordinates it uses are obtained by confirming the human body region with a single-pixel object recognition method based on a random-decision-forest classifier; that is, the method must first build a classifier and learning algorithm that can obtain skeleton data from depth images, which undoubtedly adds to its complexity and implementation difficulty, while the quality of the recognized skeleton coordinates brings uncertainty to the accuracy of the whole method and also hurts its real-time performance. Second, the method uses only the coordinate data of the skeleton, so its action features are limited. Third, the method must build a human behavior recognition model on training samples of specific actions; for new behaviors the model must be retrained, which weakens the applicability and extensibility of the method.
In summary, how to overcome the deficiencies of the prior art has become one of the key problems to be solved urgently in the fields of pattern recognition and human-computer interaction.
Summary of the invention
The object of the invention is to provide an action recognition method based on the human skeleton that overcomes the deficiencies of the prior art. The invention has good real-time performance, robustness and accuracy, is easy and reliable to implement, and is suitable for real-time action recognition systems.
The action recognition method based on the human skeleton proposed by the present invention is characterized by comprising the following basic steps:
Step 1: obtain a continuous skeleton data frame sequence of a person performing a target action from a somatosensory device. The somatosensory device is a capture device that can obtain at least the 3D spatial position information and angle information of each joint point of the human skeleton; the human skeleton data comprise the human joint point data provided by this capture device.
Step 2: screen out of the skeleton data the main joint point data that characterize the action. These are the data of the joint points that play a key role in recognizing the action. For example, in the detection and recognition of gesture actions, the joint point data of the upper limbs can be chosen: the right hand, right wrist, right elbow, right shoulder, left shoulder, left elbow, left wrist and left hand joint points. The main joint points of other actions are chosen by analogy.
Step 3: extract and calculate action feature values from the screened skeleton joint point data and construct the feature vector sequence of the action. The action features include the position, angle, velocity and speed of the joint points, and the joint angles; the feature vector sequence is a sequence of feature vectors composed of the feature values.
Step 4: preprocess the feature vectors. Preprocessing refers to normalizing the joint point coordinates in the feature vectors, including size normalization and position normalization.
Step 5: save the feature vector sequences of the action sample set as the action sample template library.
Step 6: acquire actions in real time and use the dynamic time warping algorithm to calculate the distance between their feature vector sequence and the feature vector sequence of every action sample in the template library. The dynamic time warping algorithm calculates a distance value between two time series of different lengths and uses this distance value as a measure of the similarity of the two sequences.
Step 7: classify and recognize the action. According to the distance values calculated in step 6, compute the similarity between the target action and the action templates in the template library, and finally classify and recognize the target action according to the similarity.
Further preferred schemes of the action recognition method based on the human skeleton proposed by the present invention are as follows.
The feature extraction in step 3 refers to extracting suitable features from the skeleton data. These features can be the joint point position P, the velocity V, the angle θ between joints, and so on, but are not limited to the features mentioned. The feature vector sequence R is composed of the feature vectors calculated from each frame of skeleton data, and can be expressed as:
R = {R_1, R_2, ..., R_n, ..., R_N},
where R_n is the feature vector of the n-th frame of skeleton data and N is the length of the feature vector sequence. R_n can be expressed as:
R_n = {F_1, F_2, ..., F_i, ..., F_I},
where F_i is the i-th feature value of the vector and I is the dimension of the feature vector.
The position normalization of the joint point coordinates in the feature vectors described in step 4 refers to choosing the position of one joint point as the center origin, with position coordinates:
C = (C_x, C_y, C_z).
If a feature value is the spatial coordinate of the j-th joint point, P^j = (P_x^j, P_y^j, P_z^j), then after position normalization it becomes:
P^j = (P_x^j − C_x, P_y^j − C_y, P_z^j − C_z).
The size normalization of the joint point coordinates in the feature vectors refers to choosing the distance between two joint points as a reference distance D; the final normalized joint coordinate is:
P^j = ((P_x^j − C_x)/D, (P_y^j − C_y)/D, (P_z^j − C_z)/D).
The action sample template library described in step 5 is composed of the optimal sample templates of the different actions obtained through steps 1 to 4. Let R^g be the feature vector sequence of action g in the template library; it can be expressed as:
R^g = {R_1^g, R_2^g, ..., R_n^g, ..., R_N^g},
where n is the index of a feature vector in the feature vector sequence and N is the length of the sequence. The whole action template library can then be expressed as {R^g}, g ∈ (1, G), where G is the number of actions in the template library.
The calculation in step 6, by the dynamic time warping algorithm, of the distance between the feature vector sequence of the current action and the feature vector sequences of all action samples in the template library can be expressed as follows. Let R = {R_1, R_2, ..., R_n} and T = {T_1, T_2, ..., T_m} be, respectively, the reference template feature vector sequence of some action in the template library and the feature vector sequence of the current action, where n and m are the lengths of the two feature vector sequences. The shortest distance between R and T is denoted D[R, T]. First the distance between each pair of feature vectors of the two sequences is calculated; preferably the Euclidean distance is used, with d(R_i, T_j) computed as:
d(R_i, T_j) = √( Σ_{t=1..f} (R_t^i − T_t^j)^2 ),
where f is the dimension of the feature vector and R_t^i, T_t^j are the feature values with index t of the feature vectors R_i and T_j. These distance values form the nodes of an n × m grid, and the optimal path from the lower-left corner of the grid to the upper-right corner is sought.
The path through each node has only three directions: assuming the current node has coordinates (i, j), the next node can only be one of (i+1, j), (i, j+1) and (i+1, j+1). Let D(i, j) denote the distance corresponding to the shortest path from point (0, 0) to point (i, j); it is computed as:
D(i, j) = d(R_i, T_j) + min{ D(i−1, j), D(i, j−1), D(i−1, j−1) }.
To compensate for warping paths over sequences of different lengths, let K be the length of the reference action template feature vector sequence; the shortest distance between the feature vector sequences R and T is then:
D[R, T] = D(n, m)/K.
After the above process, the distance values between the target action and the template actions are obtained, denoted D[R_i, T], where i is the index of an action in the template library, i ∈ (1, G), and G is the number of template actions.
The algorithm for classifying and recognizing the action described in step 7 is as follows. Let the similarity threshold be TH, and obtain the minimum DTW distance D_min between the target action and the template library:
D_min = min{ D[R_i, T] }, i ∈ (1, G).
Let g be the action corresponding to the minimum distance. Compare D_min with the set distance threshold TH: if D_min ≤ TH, the target action is recognized as g; if D_min > TH, the current action is not an action in the template library.
The realization principle of the present invention is as follows. The action recognition method based on the human skeleton proposed by the present invention, given the skeleton data of a human action, extracts the features of the action, calculates feature vector sequences and builds a template library of actions; it then uses the dynamic time warping algorithm to calculate in real time the similarity between the target action and the actions in the template library, and finally completes the classification and recognition of the action. By normalizing the feature values in preprocessing, the invention reduces the influence of different body sizes and of the position of the person relative to the camera, which enhances the robustness of the algorithm; and it uses a template matching algorithm based on dynamic time warping, which improves the accuracy and practicality of the algorithm and makes it suitable for real-time action recognition systems.
Compared with existing human action recognition technology, the remarkable advantages of the present invention are:
First, the present invention performs action recognition on skeleton data. Compared with action recognition methods based on depth maps, skeleton data are less affected by the environment, no complicated image processing algorithms are needed for preprocessing, and action features are easier to extract and calculate.
Second, the present invention normalizes the feature vectors in preprocessing. This enhances the generality of the samples and eliminates the influence of differences in body size and in the position of the body relative to the somatosensory device, enhancing the robustness and applicability of the method.
Third, the present invention uses the dynamic time warping algorithm to calculate the similarity between two actions of different durations, which avoids the influence of action execution speed and improves the real-time performance and accuracy of action recognition.
Fourth, in action training and recognition the present invention adopts template matching of feature vector sequences. Compared with rule-based action recognition methods, it avoids the setting of many parameters and thresholds, the extensibility and robustness of the algorithm are higher, the implementation is simpler, and more complicated actions can also be recognized.
Brief description of the drawings
Fig. 1 is a flow diagram of the action recognition method based on the human skeleton.
Fig. 2 is a schematic diagram of acquiring actions with a somatosensory device.
Fig. 3 is a schematic diagram of a human skeleton with 20 joint points.
Detailed description of the invention
The detailed implementation of the present invention is described in further detail below in conjunction with the drawings and embodiments.
In conjunction with Fig. 1, the action recognition method based on the human skeleton proposed by the present invention comprises the following specific steps:
Step 1: obtain a continuous skeleton data frame sequence of a person performing a target action from a somatosensory device. The somatosensory device is a capture device that can obtain at least the 3D spatial position information and angle information of each joint point of the human skeleton; the human skeleton data comprise the human joint point data provided by this capture device.
Step 2: screen out of the skeleton data the main joint point data that characterize the action, namely the data of the joint points that play a key role in recognizing the action.
Step 3: extract and calculate action feature values from the screened skeleton joint point data and construct the feature vector sequence of the action. The action features include the position, angle, velocity and speed of the joint points, and the joint angles; the feature vector sequence is a sequence of feature vectors composed of the feature values.
Step 4: preprocess the feature vectors, i.e. normalize the joint point coordinates in the feature vectors, including size normalization and position normalization.
Step 5: save the feature vector sequences of the action sample set as the action sample template library.
Step 6: acquire actions in real time and use the dynamic time warping algorithm to calculate the distance between their feature vector sequence and the feature vector sequence of every action sample in the template library. The dynamic time warping algorithm calculates a distance value between two time series of different lengths and uses this distance value as a measure of the similarity of the two sequences.
Step 7: classify and recognize the action: according to the distance values calculated in step 6, compute the similarity between the target action and the action templates in the template library, and finally classify and recognize the target action according to the similarity.
In conjunction with Fig. 2 and Fig. 3, a concrete application embodiment of the action recognition method based on the human skeleton proposed by the present invention, and of its preferred schemes, is further illustrated below.
First, obtain a continuous skeleton data frame sequence of the person performing the target action from the somatosensory device. Fig. 2 shows a schematic diagram of acquiring actions with a somatosensory device: 201 is a somatosensory camera that can collect three-dimensional scene information and extract the skeleton data of the people in the scene, and 202 is a person facing the somatosensory capture device and performing the target action. Fig. 3 shows a human skeleton diagram containing 20 joint points, including the head, left and right shoulders, shoulder center, left and right elbows, left and right wrists, left and right hands, spine, left and right hips, hip center, left and right knees, left and right ankles, and left and right feet. The skeleton data corresponding to a continuous action (generally including the position information and rotation information of the joint points) are collected by the somatosensory device and recorded to form a continuous skeleton data frame sequence; these data are used for subsequent action template creation and action recognition.
Second, screen out of the skeleton data the main joint point data that characterize the action. The obtained skeleton data are screened to select the data of the joint points that characterize the action. For example, for the detection and recognition of gesture actions, the joint point data of the upper limbs can be chosen: 301-308 in Fig. 3 are, respectively, the right hand, right wrist, right elbow, right shoulder, left shoulder, left elbow, left wrist and left hand joint points, and only the data of these joint points need to be used during gesture recognition.
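As an illustration of this screening step, a minimal Python sketch follows. The joint names here are hypothetical and not taken from any particular somatosensory SDK, which may instead expose joints by index or by its own enumeration:

```python
# Minimal sketch of step 2 (joint screening); joint names are hypothetical.
UPPER_LIMB = [
    "hand_right", "wrist_right", "elbow_right", "shoulder_right",
    "shoulder_left", "elbow_left", "wrist_left", "hand_left",
]

def screen_joints(frame, wanted=UPPER_LIMB):
    """frame: dict mapping joint name -> (x, y, z); keep only the joints
    that characterize the action (here, the eight upper-limb joints)."""
    return {name: frame[name] for name in wanted if name in frame}

frame = {"head": (0.0, 1.7, 2.0), "hand_right": (0.4, 1.0, 1.9)}
screened = screen_joints(frame)  # keeps only hand_right
```

The same screening dictionary could be swapped for a lower-limb joint list when recognizing leg actions.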
Third, extract and calculate action feature values from the screened skeleton joint point data and construct the feature vector sequence of the action. Commonly used action features include the position, orientation and velocity of the joint points and the angles between joints. Feature values are extracted from each frame of skeleton data of the action, and a feature vector R_n is constructed from these feature values. R_n is the feature vector of the n-th frame of skeleton data; if F_i is the i-th feature value of the vector, then R_n can be expressed as:
R_n = {F_1, F_2, ..., F_i, ..., F_m},
where m is the dimension of the feature vector. The feature vectors of the continuous skeleton frames of the action then form the feature vector sequence R that characterizes the action, which can be expressed as:
R = {R_1, R_2, ..., R_n, ..., R_N},
where N is the length of the feature vector sequence.
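Assuming, for illustration, that each frame's feature values are simply the screened joint coordinates flattened into one vector (the method also allows angle and velocity features), the construction of R_n and R can be sketched as:

```python
def frame_features(joints):
    """Flatten one frame's joint (x, y, z) tuples into a feature vector
    R_n; angle and velocity features could be appended the same way."""
    return [coord for joint in joints for coord in joint]

def build_sequence(frames):
    """frames: list of per-frame joint lists -> feature vector sequence R."""
    return [frame_features(f) for f in frames]

frames = [
    [(0.1, 0.2, 1.0), (0.3, 0.4, 1.1)],  # frame 1: two joints
    [(0.1, 0.3, 1.0), (0.3, 0.5, 1.1)],  # frame 2
]
R = build_sequence(frames)  # N = 2 vectors, each of dimension m = 6
```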
Fourth, preprocess the feature vectors. First normalize the coordinate positions of the joint points in the feature vectors: choose the position of one joint point as the center origin, with position coordinates:
C = (C_x, C_y, C_z).
Each feature value in the feature vectors that is a joint point coordinate then needs to be normalized: if a feature value is the spatial coordinate of the j-th joint point, P^j = (P_x^j, P_y^j, P_z^j), the value after position normalization is:
P^j = (P_x^j − C_x, P_y^j − C_y, P_z^j − C_z).
The coordinate sizes then need to be normalized. The distance between two relatively stable joint points of the skeleton is chosen as the reference distance; the preferred scheme is to use the distance between the shoulder center and the spine joint point as the reference distance D. The position-normalized coordinate data are then size-normalized, and the final normalized joint coordinate is:
P^j = ((P_x^j − C_x)/D, (P_y^j − C_y)/D, (P_z^j − C_z)/D).
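The two-stage normalization above can be sketched as follows; the center joint and the two reference joints are passed in by index, since which joints play those roles (e.g. shoulder center and spine) depends on the skeleton model. This is an illustrative sketch, not the patent's code:

```python
import math

def normalize_joints(joints, center_idx, ref_a, ref_b):
    """Position-normalize each joint against the center joint C, then
    size-normalize by the reference distance D between joints ref_a
    and ref_b (e.g. shoulder center and spine)."""
    cx, cy, cz = joints[center_idx]
    d = math.dist(joints[ref_a], joints[ref_b])  # reference distance D
    return [((x - cx) / d, (y - cy) / d, (z - cz) / d)
            for (x, y, z) in joints]

# joints[0] plays the role of C; joints 0 and 1 give D = 2.0 here
joints = [(0.0, 0.0, 0.0), (0.0, 2.0, 0.0), (1.0, 1.0, 0.0)]
norm = normalize_joints(joints, center_idx=0, ref_a=0, ref_b=1)
```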
Fifth, save the feature vector sequences of the action sample set as the action sample template library. The action template library consists of the optimal sample templates of the different actions obtained through the previous four steps; the feature vector sequence R^g of action g can be expressed as:
R^g = {R_1^g, R_2^g, ..., R_n^g, ..., R_N^g},
where n is the index of a feature vector in the feature vector sequence and N is the length of the sequence.
Sixth, acquire actions in real time and use the dynamic time warping algorithm to calculate the distance between their feature vector sequence and the feature vector sequences of all action samples in the template library. After the template library is established, action recognition can be carried out. During recognition, the skeleton data frames corresponding to a segment of action are collected in real time and processed through the first to fourth steps to obtain an action feature vector sequence of some duration; the dynamic time warping algorithm is then used to calculate the distance between this sequence and the feature vector sequence of every action sample in the template library. Preferably, this step proceeds as follows.
Let R = {R_1, R_2, ..., R_n} and T = {T_1, T_2, ..., T_m} be, respectively, the reference template feature vector sequence of some action in the template library and the feature vector sequence of the current action, where n and m are the lengths of the two feature vector sequences (which are also the lengths of the skeleton data frame sequences of the actions), and R_i (1 < i < n) and T_j (1 < j < m) are the individual feature vectors.
The shortest distance between R and T is denoted D[R, T]. First the distance between each pair of feature vectors of the two sequences is calculated; preferably the Euclidean distance is used, with d(R_i, T_j) computed as:
d(R_i, T_j) = √( Σ_{t=1..f} (R_t^i − T_t^j)^2 ),
where f is the dimension of the feature vector and R_t^i, T_t^j are the feature values with index t of the feature vectors R_i and T_j. These distance values form the nodes of an n × m grid, and the dynamic time warping algorithm seeks the optimal path from the lower-left corner of the grid to the upper-right corner.
The path through each node has only three directions: assuming the current node is (i, j), the next node can only be one of (i+1, j), (i, j+1) and (i+1, j+1). Let D(i, j) denote the distance corresponding to the shortest path from point (0, 0) to point (i, j); it is computed as:
D(i, j) = d(R_i, T_j) + min{ D(i−1, j), D(i, j−1), D(i−1, j−1) }.
To compensate for warping paths over sequences of different lengths, let K be the length of the reference action template feature vector sequence; the shortest distance between the feature vector sequences R and T is then:
D[R, T] = D(n, m)/K.
After the above process, the distance values between the target action and the template actions are obtained, denoted D[R_i, T], where i is the index of an action in the template library, i ∈ (1, G), and G is the number of template actions.
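A minimal sketch of the length-compensated dynamic time warping computation described in this step, assuming Euclidean frame-to-frame distance; this is an illustrative implementation, not the patent's code:

```python
import math

def dtw_distance(ref, target):
    """Length-compensated DTW distance D[R, T] between a template
    feature vector sequence `ref` (length K) and the current sequence
    `target`, using the recurrence
    D(i, j) = d(R_i, T_j) + min of the three predecessor nodes."""
    n, m = len(ref), len(target)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(ref[i - 1], target[j - 1])  # Euclidean d(R_i, T_j)
            D[i][j] = d + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m] / n  # divide by the template length K = n
```

Identical sequences yield a distance of 0, and sequences of different lengths are still comparable because the accumulated path cost is divided by the template length.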
Seventh, classify and recognize the action. Calculate the similarity between the target action and the action templates in the template library, and finally classify and recognize the target action according to the similarity. First set an action similarity threshold: if the distance is less than the similarity threshold, recognition succeeds; otherwise recognition fails.
Let the similarity threshold be TH, and obtain the minimum DTW distance D_min between the target action and the template library:
D_min = min{ D[R_i, T] }, i ∈ (1, G).
Let g be the action corresponding to the minimum distance. Compare D_min with the set distance threshold TH: if D_min ≤ TH, the target action is recognized as g; if D_min > TH, the current action is not an action in the template library.
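The threshold-based classification step can be sketched as follows. The distance function is injected so that a real DTW distance (or any stand-in) can be used, and representing the template library as a plain dictionary is an assumption of this sketch:

```python
def classify(target, templates, threshold, distance):
    """Step 7: nearest-template classification. `templates` maps action
    name g -> template sequence R^g; `distance` is a DTW-style distance
    function. Returns g if D_min <= TH, else None (unknown action)."""
    best_g, best_d = None, float("inf")
    for g, ref in templates.items():
        d = distance(ref, target)
        if d < best_d:
            best_g, best_d = g, d
    return best_g if best_d <= threshold else None

# Toy illustration with a stand-in 1-D distance instead of real DTW:
toy_dist = lambda ref, tgt: abs(ref[0] - tgt[0])
templates = {"wave": [1.0], "jump": [5.0]}
result = classify([1.2], templates, threshold=0.5, distance=toy_dist)
```

Here the distance to "wave" is 0.2, which is below the threshold, so the target is recognized as "wave"; a target far from every template is rejected as unknown.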
It should be particularly noted, as those skilled in the art will fully appreciate, that each module or step of the invention described above can be realized with a general-purpose computing device. On this understanding, the technical solution of the invention may be embodied in the form of a software product; moreover, provided there is no conflict, the features of the embodiments of the invention may be combined with one another, i.e. the invention is not restricted to any specific combination of hardware and software. Those skilled in the art will understand that the accompanying drawings are schematic diagrams of a preferred embodiment.
In the detailed description of the invention, any matters not explained belong to the well-known techniques of the art and may be implemented with reference to known techniques.
The present invention has passed validation trials and achieved satisfactory results.
The detailed description and embodiments above provide concrete support for the technical idea of the human-skeleton-based action recognition method proposed by the present invention and cannot be taken to limit its scope of protection; any equivalent variation or modification made on the basis of this technical solution according to the technical idea proposed by the present invention still falls within the scope protected by the technical solution of the present invention.

Claims (6)

1. An action recognition method based on a human body skeleton, characterized by comprising the following steps:
Step 1: acquire, from a motion-sensing device, the continuous sequence of human skeleton data frames of a person performing a target action, the motion-sensing device being a capture device able to obtain at least the 3D spatial position information and angle information of each joint point of the human skeleton, and the human skeleton data comprising the human joint point data provided by this capture device;
Step 2: filter out from the skeleton data the key joint point data that characterize the action, the key joint point data being the data of the joint points that play a crucial role in recognizing the action;
Step 3: compute action feature values from the filtered skeleton joint point data and construct the action's feature vector sequence, the action features including the positions and velocities of the joint points and the joint angles and angular velocities, and the feature vector sequence being a sequence of feature vectors composed of feature values;
Step 4: preprocess the feature vectors, the preprocessing meaning that the joint point coordinates in the feature vectors are normalized, including size normalization and position normalization;
Step 5: save the feature vector sequences of the action sample set as the action template library;
Step 6: capture an action in real time and use a dynamic time warping algorithm to compute the distance between its feature vector sequence and the feature vector sequence of every action sample in the template library, the dynamic time warping algorithm being a method that computes the distance between two time series of different lengths and uses this distance to judge the similarity of the two sequences;
Step 7: classify and recognize the action: from the distances computed in Step 6, compute the similarity between the subject action and the action templates in the template library, and finally classify the subject action according to this similarity.
2. The action recognition method based on a human body skeleton according to claim 1, characterized in that the feature extraction of Step 3 means extracting suitable features from the skeleton data; these features may be, but are not limited to, the joint point position P, velocity V, and inter-joint angle θ; the feature vector sequence R is composed of the feature vectors computed from each frame of skeleton data and can be expressed as:
R = {R_1, R_2, ..., R_n, ..., R_N},
wherein R_n is the feature vector of the n-th frame of skeleton data and N is the length of the feature vector sequence; R_n can be expressed as:
R_n = {F_1, F_2, ..., F_i, ..., F_I},
wherein F_i is the i-th feature value of the feature vector and I is the dimension of the feature vector.
3. The action recognition method based on a human body skeleton according to claim 2, characterized in that normalizing the joint point coordinate positions in the feature vectors in Step 4 means choosing the position of one joint point as the central origin, its position coordinates being:
C = (C_x, C_y, C_z),
so that a feature value giving the spatial coordinates of the j-th joint point, P^j = (P^j_x, P^j_y, P^j_z), becomes, after normalization:
P^j = (P^j_x − C_x, P^j_y − C_y, P^j_z − C_z);
the joint point coordinates are also size-normalized: the distance between two chosen joint points serves as the reference distance D, and the final normalized joint coordinates are:
P^j = ((P^j_x − C_x)/D, (P^j_y − C_y)/D, (P^j_z − C_z)/D).
4. The action recognition method based on a human body skeleton according to claim 3, characterized in that the action sample template library of Step 5 is composed of the optimal sample templates of the different actions obtained through Steps 1 to 4; if R^g is the feature vector sequence of action g in the template library, it can be expressed as:
R^g = {R^g_1, R^g_2, ..., R^g_n, ..., R^g_N},
wherein n is the index of a feature vector in the feature vector sequence and N is the length of the feature vector sequence; the final action template library can then be expressed as R^g, g ∈ (1, G), wherein G is the number of actions in the template library.
5. The action recognition method based on a human body skeleton according to claim 4, characterized in that computing, in Step 6, the distance between the captured action's feature vector sequence and the feature vector sequence of every action sample in the template library with the dynamic time warping algorithm can be expressed as follows: let R = {R_1, R_2, ..., R_n} and T = {T_1, T_2, ..., T_m} be, respectively, the reference template feature vector sequence of an action in the action template library and the feature vector sequence of the current action, with n and m the lengths of the two feature vector sequences, and let the shortest distance between R and T be denoted D[R, T]; first the distance between the feature vectors of the two sequences is computed, preferably as the Euclidean distance, with d(R_i, T_j) computed as:
d(R_i, T_j) = √( Σ_{t=1}^{f} (R^i_t − T^j_t)^2 ),
wherein f is the dimension of the feature vectors, and R^i_t and T^j_t are the t-th feature values of the feature vectors R_i and T_j, respectively; these distance values form the nodes of an n × m matrix grid, and the optimal path from the lower-left corner of the grid to its upper-right corner is sought;
the path through each node can proceed in only three directions: from the current node (i, j), the next node must be one of (i+1, j), (i, j+1), or (i+1, j+1); let D(i, j) denote the distance along the shortest path from node (0, 0) to node (i, j), computed by the recurrence:
D(i, j) = d(R_i, T_j) + min{D(i−1, j), D(i, j−1), D(i−1, j−1)},
and, to compensate for warping paths of sequences of different lengths, let K be the length of the reference action template's feature vector sequence; the minimum distance between the feature vector sequences R and T is then:
D[R, T] = D(n, m)/K,
so that the above procedure yields the distance between the subject action and each template action, denoted D[R_i, T], wherein i is the index of the action in the template library, i ∈ (1, G), and G is the number of template actions.
6. The action recognition method based on a human body skeleton according to claim 5, characterized in that the classification and recognition algorithm of Step 7 is as follows:
let the similarity threshold be TH, and obtain the minimum DTW distance D_min between the subject action and the template library as:
D_min = min{D[R_i, T]}, i ∈ (1, G);
let g be the action corresponding to this minimum distance, and compare D_min with the set threshold TH: if D_min <= TH, the subject action is recognized as g; if D_min > TH, the current action is not an action in the action template library.
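The position and size normalization of claim 3 can be sketched as follows for a single skeleton frame. The choice of origin joint and reference joint pair is left open by the claim, so the indices here (e.g. a hip-center origin, a shoulder-span reference distance) are purely illustrative:

```python
import numpy as np

def normalize_joints(joints, center_idx, ref_pair):
    """Position- and size-normalize one frame of joint coordinates.

    joints: (J, 3) array of 3D joint positions; center_idx: index of the
    joint used as the coordinate origin C; ref_pair: (a, b) indices of the
    two joints whose distance serves as the reference length D.
    Implements P^j = ((P^j − C) / D) from claim 3.
    """
    joints = np.asarray(joints, float)
    C = joints[center_idx]                                          # origin
    D = np.linalg.norm(joints[ref_pair[0]] - joints[ref_pair[1]])   # scale
    return (joints - C) / D
```

Dividing by a within-skeleton reference distance makes the features insensitive to the subject's body size and distance from the sensor, which is the stated purpose of the size normalization.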
CN201610211534.8A 2016-04-06 2016-04-06 A kind of action identification method based on human skeleton Active CN105930767B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610211534.8A CN105930767B (en) 2016-04-06 2016-04-06 A kind of action identification method based on human skeleton

Publications (2)

Publication Number Publication Date
CN105930767A true CN105930767A (en) 2016-09-07
CN105930767B CN105930767B (en) 2019-05-17

Family

ID=56840213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610211534.8A Active CN105930767B (en) 2016-04-06 2016-04-06 A kind of action identification method based on human skeleton

Country Status (1)

Country Link
CN (1) CN105930767B (en)

Cited By (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106658038A (en) * 2016-12-19 2017-05-10 广州虎牙信息科技有限公司 Live broadcast interaction method based on video stream and corresponding device thereof
CN106682594A (en) * 2016-12-13 2017-05-17 中国科学院软件研究所 Posture and motion identification method based on dynamic grid coding
CN106845386A (en) * 2017-01-16 2017-06-13 中山大学 A kind of action identification method based on dynamic time warping Yu Multiple Kernel Learning
CN107080940A (en) * 2017-03-07 2017-08-22 中国农业大学 Body feeling interaction conversion method and device based on depth camera Kinect
CN107180235A (en) * 2017-06-01 2017-09-19 陕西科技大学 Human action recognizer based on Kinect
CN107229920A (en) * 2017-06-08 2017-10-03 重庆大学 Based on integrating, depth typical time period is regular and Activity recognition method of related amendment
CN107392939A (en) * 2017-08-01 2017-11-24 南京华捷艾米软件科技有限公司 Indoor sport observation device, method and storage medium based on body-sensing technology
CN107392131A (en) * 2017-07-14 2017-11-24 天津大学 A kind of action identification method based on skeleton nodal distance
CN107392098A (en) * 2017-06-15 2017-11-24 北京小轮科技有限公司 A kind of action completeness recognition methods based on human skeleton information
CN107424207A (en) * 2017-07-10 2017-12-01 北京航空航天大学 A kind of Virtual Maintenance Simulation method and device based on data fusion
CN107443396A (en) * 2017-08-25 2017-12-08 魔咖智能科技(常州)有限公司 A kind of intelligence for imitating human action in real time accompanies robot
CN107832736A (en) * 2017-11-24 2018-03-23 南京华捷艾米软件科技有限公司 The recognition methods of real-time body's action and the identification device of real-time body's action
CN107908288A (en) * 2017-11-30 2018-04-13 沈阳工业大学 A kind of quick human motion recognition method towards human-computer interaction
CN108460364A (en) * 2018-03-27 2018-08-28 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN108597578A (en) * 2018-04-27 2018-09-28 广东省智能制造研究所 A kind of human motion appraisal procedure based on two-dimensional framework sequence
CN108764120A (en) * 2018-05-24 2018-11-06 杭州师范大学 A kind of human body specification action evaluation method
CN108960056A (en) * 2018-05-30 2018-12-07 西南交通大学 A kind of fall detection method based on posture analysis and Support Vector data description
CN109255293A (en) * 2018-07-31 2019-01-22 浙江理工大学 Model's showing stage based on computer vision walks evaluation method
CN109271845A (en) * 2018-07-31 2019-01-25 浙江理工大学 Human action analysis and evaluation methods based on computer vision
CN109308437A (en) * 2017-07-28 2019-02-05 上海形趣信息科技有限公司 Action recognition error correction method, electronic equipment, storage medium
CN109344694A (en) * 2018-08-13 2019-02-15 西安理工大学 A kind of human body elemental motion real-time identification method based on three-dimensional human skeleton
CN109376663A (en) * 2018-10-29 2019-02-22 广东工业大学 A kind of human posture recognition method and relevant apparatus
CN109409209A (en) * 2018-09-11 2019-03-01 广州杰赛科技股份有限公司 A kind of Human bodys' response method and apparatus
CN109460702A (en) * 2018-09-14 2019-03-12 华南理工大学 Passenger's abnormal behaviour recognition methods based on human skeleton sequence
CN109589563A (en) * 2018-12-29 2019-04-09 南京华捷艾米软件科技有限公司 A kind of auxiliary method and system of dancing posture religion based on 3D body-sensing camera
CN109833608A (en) * 2018-12-29 2019-06-04 南京华捷艾米软件科技有限公司 A kind of auxiliary method and system of dance movement religion based on 3D body-sensing camera
CN109919137A (en) * 2019-03-28 2019-06-21 广东省智能制造研究所 A kind of pedestrian's structured features expression
CN110135246A (en) * 2019-04-03 2019-08-16 平安科技(深圳)有限公司 A kind of recognition methods and equipment of human action
CN110147717A (en) * 2019-04-03 2019-08-20 平安科技(深圳)有限公司 A kind of recognition methods and equipment of human action
CN110163130A (en) * 2019-05-08 2019-08-23 清华大学 A kind of random forest grader and classification method of the feature pre-align for gesture identification
CN110163086A (en) * 2019-04-09 2019-08-23 缤刻普达(北京)科技有限责任公司 Body-building action identification method, device, equipment and medium neural network based
CN110210284A (en) * 2019-04-12 2019-09-06 哈工大机器人义乌人工智能研究院 A kind of human body attitude behavior intelligent Evaluation method
CN110290352A (en) * 2019-06-28 2019-09-27 Oppo广东移动通信有限公司 Monitoring method and device, electronic equipment and storage medium
CN110414453A (en) * 2019-07-31 2019-11-05 电子科技大学成都学院 Human body action state monitoring method under a kind of multiple perspective based on machine vision
CN110555349A (en) * 2018-06-01 2019-12-10 杭州海康威视数字技术股份有限公司 working time length statistical method and device
CN110688929A (en) * 2019-09-20 2020-01-14 北京华捷艾米科技有限公司 Human skeleton joint point positioning method and device
CN110717460A (en) * 2019-10-12 2020-01-21 中国矿业大学 Mine personnel illegal action identification method
CN110728220A (en) * 2019-09-30 2020-01-24 上海大学 Gymnastics auxiliary training method based on human body action skeleton information
CN110781857A (en) * 2019-11-05 2020-02-11 北京沃东天骏信息技术有限公司 Motion monitoring method, device, system and storage medium
CN111316283A (en) * 2017-10-31 2020-06-19 Sk电信有限公司 Gesture recognition method and device
CN111382306A (en) * 2018-12-28 2020-07-07 杭州海康威视数字技术股份有限公司 Method and device for inquiring video frame
CN111507182A (en) * 2020-03-11 2020-08-07 杭州电子科技大学 Skeleton point fusion cyclic cavity convolution-based littering behavior detection method
CN111553229A (en) * 2020-04-21 2020-08-18 清华大学 Worker action identification method and device based on three-dimensional skeleton and LSTM
CN112001335A (en) * 2020-08-27 2020-11-27 武汉科技大学 Interface system for evaluating unsafe behavior risk of subway passengers
CN111991772A (en) * 2020-09-08 2020-11-27 衢州职业技术学院 Device and system for assisting upper limb training
CN112101315A (en) * 2019-11-20 2020-12-18 北京健康有益科技有限公司 Deep learning-based exercise judgment guidance method and system
CN112101273A (en) * 2020-09-23 2020-12-18 浙江浩腾电子科技股份有限公司 Data preprocessing method based on 2D framework
CN112185515A (en) * 2020-10-12 2021-01-05 安徽动感智能科技有限公司 Patient auxiliary system based on action recognition
EP3716212A4 (en) * 2017-12-19 2021-01-27 Huawei Technologies Co., Ltd. Image coding method, action recognition method, and computer device
CN112418153A (en) * 2020-12-04 2021-02-26 上海商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and computer storage medium
CN112541421A (en) * 2020-12-08 2021-03-23 浙江科技学院 Pedestrian reloading identification method in open space
CN112651275A (en) * 2020-09-01 2021-04-13 武汉科技大学 Intelligent system for recognizing pedaling accident inducement behaviors in intensive personnel places
CN112800892A (en) * 2021-01-18 2021-05-14 南京邮电大学 Human body posture recognition method based on openposition
CN113627369A (en) * 2021-08-16 2021-11-09 南通大学 Action recognition and tracking method in auction scene
CN113705542A (en) * 2021-10-27 2021-11-26 北京理工大学 Pedestrian behavior state identification method and system
CN113822250A (en) * 2021-11-23 2021-12-21 中船(浙江)海洋科技有限公司 Ship driving abnormal behavior detection method
CN115393964A (en) * 2022-10-26 2022-11-25 天津科技大学 Body-building action recognition method and device based on BlazePose
US11514605B2 (en) 2020-09-29 2022-11-29 International Business Machines Corporation Computer automated interactive activity recognition based on keypoint detection

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101794372A (en) * 2009-11-30 2010-08-04 南京大学 Method for representing and recognizing gait characteristics based on frequency domain analysis
CN103211599A (en) * 2013-05-13 2013-07-24 桂林电子科技大学 Method and device for monitoring tumble
CN103955682A (en) * 2014-05-22 2014-07-30 深圳市赛为智能股份有限公司 Behavior recognition method and device based on SURF interest points
CN104598880A (en) * 2015-03-06 2015-05-06 中山大学 Behavior identification method based on fuzzy support vector machine
CN105320944A (en) * 2015-10-24 2016-02-10 西安电子科技大学 Human body behavior prediction method based on human body skeleton movement information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TIAN Guohui et al.: "A New Method for Human Behavior Recognition Based on Joint Point Information", Robot *

Cited By (87)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106682594A (en) * 2016-12-13 2017-05-17 中国科学院软件研究所 Posture and motion identification method based on dynamic grid coding
CN111405299A (en) * 2016-12-19 2020-07-10 广州虎牙信息科技有限公司 Live broadcast interaction method based on video stream and corresponding device thereof
CN106658038A (en) * 2016-12-19 2017-05-10 广州虎牙信息科技有限公司 Live broadcast interaction method based on video stream and corresponding device thereof
CN111405299B (en) * 2016-12-19 2022-03-01 广州虎牙信息科技有限公司 Live broadcast interaction method based on video stream and corresponding device thereof
CN106845386A (en) * 2017-01-16 2017-06-13 中山大学 A kind of action identification method based on dynamic time warping Yu Multiple Kernel Learning
CN106845386B (en) * 2017-01-16 2019-12-03 中山大学 A kind of action identification method based on dynamic time warping and Multiple Kernel Learning
CN107080940A (en) * 2017-03-07 2017-08-22 中国农业大学 Body feeling interaction conversion method and device based on depth camera Kinect
CN107180235A (en) * 2017-06-01 2017-09-19 陕西科技大学 Human action recognizer based on Kinect
CN107229920B (en) * 2017-06-08 2020-11-13 重庆大学 Behavior identification method based on integration depth typical time warping and related correction
CN107229920A (en) * 2017-06-08 2017-10-03 重庆大学 Based on integrating, depth typical time period is regular and Activity recognition method of related amendment
CN107392098A (en) * 2017-06-15 2017-11-24 北京小轮科技有限公司 A kind of action completeness recognition methods based on human skeleton information
CN107424207A (en) * 2017-07-10 2017-12-01 北京航空航天大学 A kind of Virtual Maintenance Simulation method and device based on data fusion
CN107392131A (en) * 2017-07-14 2017-11-24 天津大学 A kind of action identification method based on skeleton nodal distance
CN109308437B (en) * 2017-07-28 2022-06-24 上海史贝斯健身管理有限公司 Motion recognition error correction method, electronic device, and storage medium
CN109308437A (en) * 2017-07-28 2019-02-05 上海形趣信息科技有限公司 Action recognition error correction method, electronic equipment, storage medium
CN107392939A (en) * 2017-08-01 2017-11-24 南京华捷艾米软件科技有限公司 Indoor sport observation device, method and storage medium based on body-sensing technology
CN107443396A (en) * 2017-08-25 2017-12-08 魔咖智能科技(常州)有限公司 A kind of intelligence for imitating human action in real time accompanies robot
CN111316283A (en) * 2017-10-31 2020-06-19 Sk电信有限公司 Gesture recognition method and device
CN111316283B (en) * 2017-10-31 2023-10-17 Sk电信有限公司 Gesture recognition method and device
CN107832736B (en) * 2017-11-24 2020-10-27 南京华捷艾米软件科技有限公司 Real-time human body action recognition method and real-time human body action recognition device
CN107832736A (en) * 2017-11-24 2018-03-23 南京华捷艾米软件科技有限公司 The recognition methods of real-time body's action and the identification device of real-time body's action
CN107908288A (en) * 2017-11-30 2018-04-13 沈阳工业大学 A kind of quick human motion recognition method towards human-computer interaction
US11303925B2 (en) 2017-12-19 2022-04-12 Huawei Technologies Co., Ltd. Image coding method, action recognition method, and action recognition apparatus
US11825115B2 (en) 2017-12-19 2023-11-21 Huawei Technologies Co., Ltd. Image coding method, action recognition method, and action recognition apparatus
EP3716212A4 (en) * 2017-12-19 2021-01-27 Huawei Technologies Co., Ltd. Image coding method, action recognition method, and computer device
CN108460364A (en) * 2018-03-27 2018-08-28 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN108460364B (en) * 2018-03-27 2022-03-11 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN108597578B (en) * 2018-04-27 2021-11-05 广东省智能制造研究所 Human motion assessment method based on two-dimensional skeleton sequence
CN108597578A (en) * 2018-04-27 2018-09-28 广东省智能制造研究所 A kind of human motion appraisal procedure based on two-dimensional framework sequence
CN108764120B (en) * 2018-05-24 2021-11-09 杭州师范大学 Human body standard action evaluation method
CN108764120A (en) * 2018-05-24 2018-11-06 杭州师范大学 A kind of human body specification action evaluation method
CN108960056B (en) * 2018-05-30 2022-06-03 西南交通大学 Fall detection method based on attitude analysis and support vector data description
CN108960056A (en) * 2018-05-30 2018-12-07 西南交通大学 A kind of fall detection method based on posture analysis and Support Vector data description
CN110555349B (en) * 2018-06-01 2023-05-02 杭州海康威视数字技术股份有限公司 Working time length statistics method and device
CN110555349A (en) * 2018-06-01 2019-12-10 杭州海康威视数字技术股份有限公司 working time length statistical method and device
CN109255293A (en) * 2018-07-31 2019-01-22 浙江理工大学 Model's showing stage based on computer vision walks evaluation method
CN109255293B (en) * 2018-07-31 2021-07-13 浙江理工大学 Model walking-show bench step evaluation method based on computer vision
CN109271845A (en) * 2018-07-31 2019-01-25 浙江理工大学 Human action analysis and evaluation methods based on computer vision
CN109344694B (en) * 2018-08-13 2022-03-22 西安理工大学 Human body basic action real-time identification method based on three-dimensional human body skeleton
CN109344694A (en) * 2018-08-13 2019-02-15 西安理工大学 A kind of human body elemental motion real-time identification method based on three-dimensional human skeleton
CN109409209A (en) * 2018-09-11 2019-03-01 广州杰赛科技股份有限公司 A kind of Human bodys' response method and apparatus
CN109460702B (en) * 2018-09-14 2022-02-15 华南理工大学 Passenger abnormal behavior identification method based on human body skeleton sequence
CN109460702A (en) * 2018-09-14 2019-03-12 华南理工大学 Passenger's abnormal behaviour recognition methods based on human skeleton sequence
CN109376663A (en) * 2018-10-29 2019-02-22 广东工业大学 A kind of human posture recognition method and relevant apparatus
CN111382306A (en) * 2018-12-28 2020-07-07 杭州海康威视数字技术股份有限公司 Method and device for inquiring video frame
CN111382306B (en) * 2018-12-28 2023-12-01 杭州海康威视数字技术股份有限公司 Method and device for inquiring video frame
CN109833608A (en) * 2018-12-29 2019-06-04 南京华捷艾米软件科技有限公司 A kind of auxiliary method and system of dance movement religion based on 3D body-sensing camera
CN109589563A (en) * 2018-12-29 2019-04-09 南京华捷艾米软件科技有限公司 A kind of auxiliary method and system of dancing posture religion based on 3D body-sensing camera
CN109919137B (en) * 2019-03-28 2021-06-25 广东省智能制造研究所 Pedestrian structural feature expression method
CN109919137A (en) * 2019-03-28 2019-06-21 广东省智能制造研究所 A kind of pedestrian's structured features expression
CN110147717B (en) * 2019-04-03 2023-10-20 平安科技(深圳)有限公司 Human body action recognition method and device
CN110135246A (en) * 2019-04-03 2019-08-16 平安科技(深圳)有限公司 A kind of recognition methods and equipment of human action
CN110147717A (en) * 2019-04-03 2019-08-20 平安科技(深圳)有限公司 A kind of recognition methods and equipment of human action
WO2020199479A1 (en) * 2019-04-03 2020-10-08 平安科技(深圳)有限公司 Human motion recognition method and device
CN110135246B (en) * 2019-04-03 2023-10-20 平安科技(深圳)有限公司 Human body action recognition method and device
WO2020199480A1 (en) * 2019-04-03 2020-10-08 平安科技(深圳)有限公司 Body movement recognition method and device
CN110163086A (en) * 2019-04-09 2019-08-23 缤刻普达(北京)科技有限责任公司 Body-building action identification method, device, equipment and medium neural network based
CN110210284A (en) * 2019-04-12 2019-09-06 哈工大机器人义乌人工智能研究院 A kind of human body attitude behavior intelligent Evaluation method
CN110163130A (en) * 2019-05-08 2019-08-23 清华大学 A kind of random forest grader and classification method of the feature pre-align for gesture identification
CN110163130B (en) * 2019-05-08 2021-05-28 清华大学 Feature pre-alignment random forest classification system and method for gesture recognition
CN110290352A (en) * 2019-06-28 2019-09-27 Oppo广东移动通信有限公司 Monitoring method and device, electronic equipment and storage medium
CN110414453A (en) * 2019-07-31 2019-11-05 电子科技大学成都学院 Human body action state monitoring method under a kind of multiple perspective based on machine vision
CN110688929A (en) * 2019-09-20 2020-01-14 北京华捷艾米科技有限公司 Human skeleton joint point positioning method and device
CN110688929B (en) * 2019-09-20 2021-11-30 北京华捷艾米科技有限公司 Human skeleton joint point positioning method and device
CN110728220A (en) * 2019-09-30 2020-01-24 上海大学 Gymnastics auxiliary training method based on human body action skeleton information
CN110717460A (en) * 2019-10-12 2020-01-21 中国矿业大学 Mine personnel illegal action identification method
CN110781857B (en) * 2019-11-05 2022-09-06 北京沃东天骏信息技术有限公司 Motion monitoring method, device, system and storage medium
CN110781857A (en) * 2019-11-05 2020-02-11 北京沃东天骏信息技术有限公司 Motion monitoring method, device, system and storage medium
CN112101315A (en) * 2019-11-20 2020-12-18 北京健康有益科技有限公司 Deep learning-based exercise judgment guidance method and system
CN111507182A (en) * 2020-03-11 2020-08-07 杭州电子科技大学 Skeleton point fusion cyclic cavity convolution-based littering behavior detection method
CN111507182B (en) * 2020-03-11 2021-03-16 杭州电子科技大学 Skeleton point fusion cyclic cavity convolution-based littering behavior detection method
CN111553229A (en) * 2020-04-21 2020-08-18 清华大学 Worker action identification method and device based on three-dimensional skeleton and LSTM
CN112001335A (en) * 2020-08-27 2020-11-27 武汉科技大学 Interface system for evaluating unsafe behavior risk of subway passengers
CN112651275A (en) * 2020-09-01 2021-04-13 武汉科技大学 Intelligent system for recognizing pedaling accident inducement behaviors in intensive personnel places
CN111991772A (en) * 2020-09-08 2020-11-27 衢州职业技术学院 Device and system for assisting upper limb training
CN112101273A (en) * 2020-09-23 2020-12-18 浙江浩腾电子科技股份有限公司 Data preprocessing method based on 2D framework
CN112101273B (en) * 2020-09-23 2022-04-29 浙江浩腾电子科技股份有限公司 Data preprocessing method based on 2D framework
US11514605B2 (en) 2020-09-29 2022-11-29 International Business Machines Corporation Computer automated interactive activity recognition based on keypoint detection
CN112185515A (en) * 2020-10-12 2021-01-05 安徽动感智能科技有限公司 Patient auxiliary system based on action recognition
CN112418153A (en) * 2020-12-04 2021-02-26 上海商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and computer storage medium
CN112541421A (en) * 2020-12-08 2021-03-23 浙江科技学院 Pedestrian reloading identification method in open space
CN112800892B (en) * 2021-01-18 2022-08-26 南京邮电大学 Human body posture recognition method based on openposition
CN112800892A (en) * 2021-01-18 2021-05-14 南京邮电大学 Human body posture recognition method based on openposition
CN113627369A (en) * 2021-08-16 2021-11-09 南通大学 Action recognition and tracking method in auction scene
CN113705542A (en) * 2021-10-27 2021-11-26 北京理工大学 Pedestrian behavior state identification method and system
CN113822250A (en) * 2021-11-23 2021-12-21 中船(浙江)海洋科技有限公司 Ship driving abnormal behavior detection method
CN115393964A (en) * 2022-10-26 2022-11-25 天津科技大学 Body-building action recognition method and device based on BlazePose

Also Published As

Publication number Publication date
CN105930767B (en) 2019-05-17

Similar Documents

Publication Publication Date Title
CN105930767A (en) Human body skeleton-based action recognition method
CN106250867B (en) A kind of implementation method of the skeleton tracking system based on depth data
EP2095296B1 (en) A method and system for providing a three-dimensional model of an object of interest.
CN109299659A (en) A kind of human posture recognition method and system based on RGB camera and deep learning
CN102184541B (en) Multi-objective optimized human body motion tracking method
CN108416266A (en) A kind of video behavior method for quickly identifying extracting moving target using light stream
CN105426827A (en) Living body verification method, device and system
Sincan et al. Using motion history images with 3d convolutional networks in isolated sign language recognition
CN102622766A (en) Multi-objective optimization multi-lens human motion tracking method
CN108875586B (en) Functional limb rehabilitation training detection method based on depth image and skeleton data multi-feature fusion
CN110009027A (en) Comparison method, device, storage medium and the electronic device of image
CN102567703A (en) Hand motion identification information processing method based on classification characteristic
CN110991274B (en) Pedestrian tumbling detection method based on Gaussian mixture model and neural network
JP2019096113A (en) Processing device, method and program relating to keypoint data
CN106815855A (en) Based on the human body motion tracking method that production and discriminate combine
CN105741326B (en) A kind of method for tracking target of the video sequence based on Cluster-Fusion
CN105912126A (en) Method for adaptively adjusting gain, mapped to interface, of gesture movement
WO2021085561A1 (en) Training data generation method
JP2016014954A (en) Method for detecting finger shape, program thereof, storage medium of program thereof, and system for detecting finger shape
CN108305321A (en) A kind of three-dimensional human hand 3D skeleton patterns real-time reconstruction method and apparatus based on binocular color imaging system
Liu et al. Trampoline motion decomposition method based on deep learning image recognition
Li et al. Application algorithms for basketball training based on big data and Internet of things
CN110910426A (en) Action process and action trend identification method, storage medium and electronic device
Mosayyebi et al. Gender recognition in masked facial images using EfficientNet and transfer learning approach
CN111310720A (en) Pedestrian re-identification method and system based on graph metric learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant