CN109117893A - Motion recognition method and device based on human body posture - Google Patents

Motion recognition method and device based on human body posture

Info

Publication number
CN109117893A
Authority
CN
China
Prior art keywords
bone data
human body
data
filtering
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810988873.6A
Other languages
Chinese (zh)
Inventor
陈加
张玉麒
宁国勤
左明章
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong Normal University
Original Assignee
Huazhong Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong Normal University filed Critical Huazhong Normal University
Priority to CN201810988873.6A priority Critical patent/CN109117893A/en
Publication of CN109117893A publication Critical patent/CN109117893A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a motion recognition method and device based on human body posture. The method includes: obtaining filtered skeleton data through an improved amplitude limiting filtering algorithm; obtaining angle features through an improved angle calculation method; training the classified angle features based on logistic regression to obtain a trained classifier; obtaining a recognition result for the static posture of the human body through the classifier; and finally recognizing the human motion from the static posture recognition results by a reverse order method. The invention achieves the technical effects of increasing recognition speed and improving recognition accuracy.

Description

Motion recognition method and device based on human body posture
Technical Field
The invention relates to the technical field of human-computer interaction, in particular to a method and a device for recognizing actions based on human body postures.
Background
With the development of the times, people have called for more natural modes of human-computer interaction, and such modes, once introduced into human-computer interaction, are called "natural" interaction. This covers a series of techniques for recognizing human body, arm, and hand gestures, including interaction by motion, gesture, and voice. Motion is an important distinction between people and other objects: people express information and emotions through postures, as in a sporting event, where referees use various gestures to convey information. It is therefore very necessary to find a good method for recognizing human body postures.
Conventional human motion recognition technology often relies on media such as ordinary cameras, radar, or wearable sensor devices, and its application is relatively limited by shortcomings in one or more respects such as recognition efficiency, cost, and environmental constraints. The low-cost depth camera Kinect, released by Microsoft in 2010, provides a new option: Kinect can acquire a fairly accurate depth image that directly represents the three-dimensional characteristics of an object, which to some extent avoids the problems of motion recognition based on traditional two-dimensional image features.
In implementing the scheme of the invention, the applicant found that prior-art methods that recognize human motions from depth images acquired with a Kinect impose strict requirements on the actions to be recognized and are affected by illumination changes and by differences among the persons performing them, so the algorithm complexity is high and the recognition accuracy still needs improvement.
Disclosure of Invention
In view of the above, the present invention provides a method and an apparatus for motion recognition based on human body posture, which can recognize human motions in real time with high accuracy, and which reduce the complexity of the algorithm to make it easier to use, thereby solving the technical problem of low recognition accuracy in the prior art.
The invention provides a motion recognition method based on human body gestures in a first aspect, which comprises the following steps:
step S1: acquiring skeleton data of a human body through a skeleton tracking technology of a depth sensor, wherein the skeleton data comprises three-dimensional coordinates of human body joint points, and converting the three-dimensional coordinates into a world coordinate system in which the human body is located;
step S2: filtering the bone data by using an improved amplitude limiting filtering algorithm to obtain filtered bone data, wherein the improved amplitude limiting filtering algorithm specifically comprises: firstly, judging whether the jitter degree of the current bone data exceeds a threshold value, if the jitter degree of the current bone data does not exceed the threshold value, updating the current bone data by adopting the bone data of a filtering buffer area, otherwise, continuously judging whether the jitter degree of the last bone data exceeds the threshold value, and if the jitter degree of the last bone data does not exceed the threshold value, updating the bone data of the filtering buffer area by adopting the current bone data; if the jitter degree of the previous bone data exceeds the threshold value, judging whether the current bone data is in a filtering range, if so, updating the current bone data by using the bone data of the filtering buffer area, and if not, updating the bone data of the filtering buffer area by using the current bone data;
step S3: performing feature extraction on the filtered bone data according to the converted three-dimensional coordinates and a preset angle calculation method to obtain angle features formed by angles of all joint points;
step S4: training a pre-obtained training sample set based on a logistic regression algorithm and the angle features to obtain a classifier;
step S5: identifying the motion of the human body through the classifier to obtain a static gesture identification result;
step S6: and judging whether two preset static gestures are recognized in five frames or not by adopting a reverse order recognition method based on the static gesture recognition result, if so, recognizing a dynamic action, and taking the dynamic action as an action recognition result.
Further, the depth sensor further acquires depth information, and step S1 specifically includes:
step S1.1: obtaining the actual distance from the depth sensor to the human body according to the depth information;
step S1.2: converting the three-dimensional coordinates of the depth image into actual coordinates in a world coordinate system according to the actual distance and a coordinate conversion formula, wherein the coordinate conversion formula is as follows:
x = (x_d - w/2)(z_d + D)F(w/h)
y = (y_d - h/2)(z_d + D)F
z = z_d
wherein (x, y) is the actual coordinate, (x_d, y_d, z_d) is the three-dimensional coordinate in the depth information, w × h is the resolution of the depth sensor, and D and F are constants, where D = -10 and F = 0.0021.
Further, in step S2, the degree of shaking of the bone data is expressed by the shaking radius of the bone data.
Further, step S3 specifically includes:
step S3.1: adopting a distance calculation formula to calculate the distance information between the joint points, wherein the distance calculation formula is as follows:
a = √((x_2 - x_3)² + (y_2 - y_3)²)
b = √((x_1 - x_3)² + (y_1 - y_3)²)
c = √((x_1 - x_2)² + (y_1 - y_2)²)
wherein the joint points comprise A, B and C, the actual coordinate of joint point A being (x_1, y_1), the actual coordinate of joint point B being (x_2, y_2), and the actual coordinate of joint point C being (x_3, y_3);
Step S3.2: obtaining the angle of the connecting line between the joint points according to the distance information, and taking the angle as the angle characteristic, specifically:
θ = arccos((a² + b² - c²) / (2ab))
wherein a represents the distance of the connecting line between joint point B and joint point C, b represents the distance of the connecting line between joint point A and joint point C, c represents the distance of the connecting line between joint point A and joint point B, and θ is the included angle between AC and BC.
Further, step S4 specifically includes:
step S4.1: training the pre-acquired training sample set by using a logistic regression algorithm based on the angle features to obtain a classification model, wherein the pre-acquired training sample set is the posture data of each frame;
step S4.2: and verifying the effect of the classification model through the data of the test set, and adjusting the hyper-parameters to obtain the adjusted classifier.
Further, the classifier includes N parameter vectors of the form θ = [θ_0, θ_1, θ_2, …, θ_{N-1}]^T, and the classifier includes N preset gestures and corresponding gesture numbers; step S5 specifically includes:
step S5.1: taking the human body action to be detected as a sample x^{(i)} and calculating the probability vector p_{1×j} = g(x^{(i)}θ), wherein i represents the sample number, j represents the number of static posture classes, and g is the kernel function of the logistic regression algorithm;
step S5.2: the subscript corresponding to the element with the largest probability vector is the recognized gesture number, and the gesture corresponding to the recognized gesture number is taken as the static gesture recognition result.
Based on the same inventive concept, the second aspect of the present invention provides a motion recognition apparatus based on human body gestures, comprising:
the bone data acquisition module is used for acquiring bone data of a human body through a bone tracking technology of the depth sensor, wherein the bone data comprises three-dimensional coordinates of human body joint points, and the three-dimensional coordinates are converted into a world coordinate system in which the human body is located;
a bone data filtering module, configured to filter the bone data by using an improved clipping filtering algorithm to obtain filtered bone data, where the improved clipping filtering algorithm specifically includes: firstly, judging whether the jitter degree of the current bone data exceeds a threshold value, if the jitter degree of the current bone data does not exceed the threshold value, updating the current bone data by adopting the bone data of a filtering buffer area, otherwise, continuously judging whether the jitter degree of the last bone data exceeds the threshold value, and if the jitter degree of the last bone data does not exceed the threshold value, updating the bone data of the filtering buffer area by adopting the current bone data; if the jitter degree of the previous bone data exceeds the threshold value, judging whether the current bone data is in a filtering range, if so, updating the current bone data by using the bone data of the filtering buffer area, and if not, updating the bone data of the filtering buffer area by using the current bone data;
the angle feature extraction module is used for extracting features of the filtered bone data according to the converted three-dimensional coordinates and a preset angle calculation method to obtain angle features formed by angles of all joint points;
the training module is used for training a training sample set acquired in advance based on a logistic regression algorithm and the angle features to obtain a classifier;
the gesture recognition module is used for recognizing the motion of the human body through the classifier to obtain a static gesture recognition result;
and the action recognition module is used for judging whether two preset static gestures are recognized in five frames or not by adopting a reverse order recognition method based on the static gesture recognition result, and if so, recognizing a dynamic action as an action recognition result.
Further, the depth sensor further acquires depth information, and the bone data acquisition module is specifically configured to:
obtaining the actual distance from the depth sensor to the human body according to the depth information;
converting the three-dimensional coordinates of the depth image into actual coordinates in a world coordinate system according to the actual distance and a coordinate conversion formula, wherein the coordinate conversion formula is as follows:
x = (x_d - w/2)(z_d + D)F(w/h)
y = (y_d - h/2)(z_d + D)F
z = z_d
wherein (x, y) is the actual coordinate, (x_d, y_d, z_d) is the three-dimensional coordinate in the depth information, w × h is the resolution of the depth sensor, and D and F are constants, where D = -10 and F = 0.0021.
Based on the same inventive concept, a third aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed, performs the method of the first aspect.
Based on the same inventive concept, a fourth aspect of the present invention provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of the first aspect when executing the program.
One or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
In the method provided by the invention, the acquired bone data is filtered by the improved amplitude limiting filtering algorithm, yielding stable bone data and providing a basis for subsequent recognition. Feature extraction is performed on the filtered bone data according to the converted three-dimensional coordinates and a preset angle calculation method to obtain the angle features formed by the angles of the joint points, and a pre-acquired training sample set is trained with the logistic regression algorithm and the angle features to obtain a classifier. Because motion recognition is realized from the angle features through logistic regression, the complexity of the recognition method is reduced and the recognition speed can be improved; and because the trained classifier can accurately define and describe each action, human motions can be accurately identified by the classifier, which improves the recognition accuracy and solves the technical problem of low recognition accuracy in the prior art.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flowchart of a method for recognizing human body gesture-based actions according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the lifting of two hands according to the embodiment of the present invention;
FIG. 3 is a schematic diagram of a skeleton model of a human body represented by 25 joint position information of the human body acquired by the method shown in FIG. 1;
FIG. 4 is a block diagram of an apparatus for recognizing human body gesture-based actions according to an embodiment of the present invention;
FIG. 5 is a block diagram of a computer-readable storage medium according to an embodiment of the present invention;
fig. 6 is a block diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a method and a device for recognizing actions based on human body gestures, which can recognize the actions of a human body in real time, have higher accuracy rate for recognizing the actions, and reduce the complexity of an algorithm to make the algorithm easier to use.
In order to achieve the technical effects, the general idea of the invention is as follows:
a motion recognition method based on human body posture includes obtaining filtered bone data through an improved amplitude limiting filtering algorithm, obtaining angle features through an improved angle calculation method, training the well-classified angle features based on logistic regression to obtain a trained classifier, obtaining a static posture recognition result of human body motion through the classifier, and finally recognizing the human body motion through a reverse order method according to the static posture recognition result.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
The embodiment provides a motion recognition method based on human body gestures, please refer to fig. 1, and the method includes:
step S1 is first executed: the method comprises the steps of obtaining skeleton data of a human body through a skeleton tracking technology of a depth sensor, wherein the skeleton data comprise three-dimensional coordinates of human body joint points, and converting the three-dimensional coordinates into a world coordinate system where the human body is located.
Specifically, the depth sensor may be an existing sensor, such as Microsoft's Kinect sensor or Apple's PrimeSense sensor, and the number of human body joint points obtained corresponds to the technology of the depth sensor. In this embodiment, KinectV2, a human-computer interaction device developed by Microsoft, may be used; its skeletal tracking technology, the core technology of KinectV2, can accurately calibrate 25 key nodes of the human body and track the positions of these 25 nodes in real time, with a resolution of 1920 × 1080. KinectV1, which obtains 20 joint points of the human body, can also be used. Referring to fig. 3, a schematic diagram of a human skeleton model represented by the position information of 25 joint points: the 25 joint points specifically include the head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hand, right hand, left finger, right finger, left thumb, right thumb, cervical vertebra, mid-spine, spine base, left hip, right hip, left knee, right knee, left ankle, right ankle, left foot, and right foot. Since the acquired three-dimensional coordinates of the 25 human body joint points are given in the KinectV2 coordinate system, they must be converted into the actual coordinate system, i.e., the world coordinate system in which the human body is located. A minimal data structure for these joints is sketched below.
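For illustration only (the type and field names below are assumptions, not identifiers from the Kinect SDK), the tracked joints of one person could be held in a structure like this:

```cpp
#include <array>

// One tracked joint: a position in the world coordinate system (meters).
struct Joint {
    float x = 0.0f;
    float y = 0.0f;
    float z = 0.0f;
};

// KinectV2 tracks 25 joints per person (KinectV1 tracks 20).
constexpr int kJointCount = 25;
using Skeleton = std::array<Joint, kJointCount>;
```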
In one embodiment, the depth sensor further acquires depth information, and step S1 specifically includes:
step S1.1: obtaining the actual distance from the depth sensor to the human body according to the depth information;
step S1.2: converting the three-dimensional coordinates of the depth image into actual coordinates in a world coordinate system according to an actual distance and a coordinate conversion formula, wherein the coordinate conversion formula is as follows:
x = (x_d - w/2)(z_d + D)F(w/h)
y = (y_d - h/2)(z_d + D)F
z = z_d
wherein (x, y) is the actual coordinate, (x_d, y_d, z_d) is the three-dimensional coordinate in the depth information, w × h is the resolution of the depth sensor, and D and F are constants, where D = -10 and F = 0.0021.
Then, step S2 is executed: filtering the bone data by using an improved amplitude limiting filtering algorithm to obtain the filtered bone data, wherein the improved amplitude limiting filtering algorithm specifically comprises the following steps: firstly, judging whether the jitter degree of the current bone data exceeds a threshold value, if the jitter degree of the current bone data does not exceed the threshold value, updating the current bone data by adopting the bone data of a filtering buffer area, otherwise, continuously judging whether the jitter degree of the last bone data exceeds the threshold value, and if the jitter degree of the last bone data does not exceed the threshold value, updating the bone data of the filtering buffer area by adopting the current bone data; if the jitter degree of the previous skeleton data exceeds the threshold value, judging whether the current skeleton data is in the filtering range, if so, updating the current skeleton data by adopting the skeleton data in the filtering buffer area, otherwise, updating the skeleton data in the filtering buffer area by adopting the current skeleton data.
Specifically, the improved amplitude limiting filtering algorithm adds the idea of dynamic programming to the existing amplitude limiting filtering algorithm: it first judges whether the jitter degree of the current bone data (the data of the current frame) exceeds the threshold, then judges whether the jitter degree of the previous bone data exceeds the threshold, and on that basis determines how the current bone data is processed, which makes the bone data more stable.
In a specific implementation process, the jitter degree of the bone data is represented by the jitter radius of the bone data. Bone data obtained by the depth sensor sometimes exhibits small-amplitude jitter, which generally fluctuates slightly around the actual joint coordinates of the human body; sometimes the jitter even prevents joint points from being detected. A filtering algorithm therefore needs to be selected to process the bone data accordingly and keep the previous state of a joint point with small-amplitude fluctuation unchanged, which ensures the stability of the data. In this embodiment the jitter degree of the bone data is also referred to as the confirmation degree of the bone data; hereinafter the two terms have the same meaning. More specifically, the jitter degree of the bone data in this embodiment can be represented by the jitter radius of the bone data, and the threshold can be set according to the actual situation, for example 0.02 m, 0.03 m, or 0.04 m. When the jitter radius exceeds the set threshold, errors are corrected to within this range by the improved amplitude limiting filtering algorithm.
When the jitter degree of the previous bone data also exceeds the threshold, whether the current bone data lies in the filtering range can be judged as follows: check whether the differences between jointFilter[id].position.X and joint.position.X, between jointFilter[id].position.Y and joint.position.Y, and between jointFilter[id].position.Z and joint.position.Z are each smaller than the threshold, where jointFilter[id].position.X/Y/Z are the coordinates of the joint point after the previous filtering and joint.position.X/Y/Z are the coordinates of the current joint point; that is, the judgment is made by computing the distance between the current joint coordinates and the previously filtered joint coordinates.
Step S3 is executed next: and performing feature extraction on the filtered bone data according to the converted three-dimensional coordinates and a preset angle calculation method to obtain angle features formed by angles of all joint points.
In one embodiment, step S3 specifically includes:
step S3.1: adopting a distance calculation formula to calculate the distance information between the joint points, wherein the distance calculation formula is as follows:
a = √((x_2 - x_3)² + (y_2 - y_3)²)
b = √((x_1 - x_3)² + (y_1 - y_3)²)
c = √((x_1 - x_2)² + (y_1 - y_2)²)
wherein the joint points comprise A, B and C, the actual coordinate of joint point A being (x_1, y_1), the actual coordinate of joint point B being (x_2, y_2), and the actual coordinate of joint point C being (x_3, y_3);
Step S3.2: obtaining the angle of a connecting line between all the joint points according to the distance information, and taking the angle as an angle characteristic, specifically:
θ = arccos((a² + b² - c²) / (2ab))
wherein a represents the distance of the connecting line between joint point B and joint point C, b represents the distance of the connecting line between joint point A and joint point C, c represents the distance of the connecting line between joint point A and joint point B, and θ is the included angle between AC and BC.
Specifically, in the above manner, the angles between the connecting lines of the respective joint points can be calculated, and thus a plurality of obtained angles are taken as the angle features.
Step S4 is executed again: training a pre-acquired training sample set based on a logistic regression algorithm and angle features to obtain a classifier.
In one embodiment, step S4 specifically includes:
step S4.1: training the pre-acquired training sample set by using a logistic regression algorithm based on the angle features to obtain a classification model, wherein the pre-acquired training sample set is the posture data of each frame;
step S4.2: and verifying the effect of the classification model through the data of the test set, and adjusting the hyper-parameters to obtain the adjusted classifier.
Specifically, the pre-acquired training sample set is labeled in advance; logistic regression is supervised learning, that is, the training set consists of data whose static postures are known. Training through logistic regression yields a model, i.e., the classifier; the effect of the model is then verified on the test-set data, and the hyperparameters are adjusted to obtain a final classifier with better performance, i.e., the adjusted classifier.
The motion information of each frame is classified using a logistic regression algorithm. Suppose there is an N-dimensional feature vector x = [x_0, x_1, x_2, …, x_{N-1}]^T and a parameter vector θ = [θ_0, θ_1, θ_2, …, θ_{N-1}]^T. In one-to-many logistic regression classification, a model h_θ^{(i)}(x) is trained for each class, and at prediction time the class whose h_θ^{(i)}(x) value is largest is selected as the classification result. In the present embodiment, a one-to-many classifier θ = [θ_0, θ_1, θ_2, …, θ_{N-1}]^T is trained for each static posture; for an incoming new sample x^{(i)}, the probability vector p_{1×j} = g(x^{(i)}θ) is calculated, and the subscript of the largest element is the number of the recognized static posture. The function model h_θ^{(i)}(x) of the logistic regression algorithm is specifically:
h_θ(x) = g(θ^T x)
where g is the kernel (sigmoid) function:
g(z) = 1 / (1 + e^(-z))
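As a concrete illustration of the above, a minimal C++ sketch of one-to-many logistic regression training and prediction follows. It is not the patent's implementation; the function names, the learning rate, and the iteration count are assumptions standing in for the hyperparameters mentioned earlier:

```cpp
#include <cmath>
#include <vector>

using Vec = std::vector<double>;

static double sigmoid(double z) { return 1.0 / (1.0 + std::exp(-z)); }

static double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (size_t k = 0; k < a.size(); ++k) s += a[k] * b[k];
    return s;
}

// Train one binary one-vs-rest model by batch gradient descent.
// X holds angle-feature samples; y[i] is 1 if sample i belongs to this
// posture class, else 0. lr and iters stand in for the hyperparameters
// tuned on the test set.
Vec trainOneVsRest(const std::vector<Vec>& X, const std::vector<int>& y,
                   double lr = 0.1, int iters = 1000) {
    Vec theta(X[0].size(), 0.0);
    for (int it = 0; it < iters; ++it) {
        Vec grad(theta.size(), 0.0);
        for (size_t i = 0; i < X.size(); ++i) {
            const double err = sigmoid(dot(theta, X[i])) - y[i];
            for (size_t k = 0; k < theta.size(); ++k)
                grad[k] += err * X[i][k];
        }
        for (size_t k = 0; k < theta.size(); ++k)
            theta[k] -= lr * grad[k] / X.size();
    }
    return theta;
}

// Prediction: thetas[c] is the trained vector for posture class c.
// The index of the largest g(theta_c^T x) is the recognized posture
// number, as described above.
int predictPosture(const std::vector<Vec>& thetas, const Vec& x) {
    int best = 0;
    double bestP = -1.0;
    for (size_t c = 0; c < thetas.size(); ++c) {
        const double p = sigmoid(dot(thetas[c], x));
        if (p > bestP) { bestP = p; best = static_cast<int>(c); }
    }
    return best;
}
```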
step S5 is executed again: and identifying the motion of the human body through the classifier to obtain a static gesture identification result.
In one embodiment, the classifier includes N parameter vectors of the form θ = [θ_0, θ_1, θ_2, …, θ_{N-1}]^T, and the classifier includes N preset static gestures and corresponding gesture numbers; step S5 specifically includes:
step S5.1: taking the human body action to be detected as a sample x^{(i)} and calculating the probability vector p_{1×j} = g(x^{(i)}θ), wherein i represents the sample number, j represents the number of static posture classes, and g is the kernel function of the logistic regression algorithm;
step S5.2: the subscript corresponding to the element with the largest probability vector is the recognized gesture number, and the gesture corresponding to the recognized gesture number is taken as the static gesture recognition result.
Step S6: and based on the static gesture recognition result, judging whether two preset static gestures are recognized in five frames by adopting a reverse order recognition method, if so, recognizing a dynamic action, and taking the dynamic action as an action recognition result.
Specifically, after the static gesture recognition results are obtained from the logistic regression, this embodiment recognizes the motion by a reverse order recognition method. First, the static gestures within five frames are classified and judged through logistic regression. Five frames are taken as one period: data older than five frames is automatically deleted, and the data of the current frame is compared with the data of the previous five frames. When the two specified static gestures are both recognized within the five frames, they are recognized as one action.
The motion recognition method based on human body posture is not affected by illumination or by the person performing the action, and achieves good results in tests with users under different illumination and of different heights and builds. The improved amplitude limiting filtering algorithm guarantees the stability of each frame of data, and realizing motion recognition from angle features through logistic regression reduces the complexity of the algorithm and improves recognition speed and accuracy; experiments show an average recognition time of 35 ms. Judging different static gestures within five frames also greatly enhances the diversity and complexity of the recognizable motions.
To illustrate the implementation process of the recognition method more clearly, a specific example is described below. Please refer to fig. 2, a schematic diagram of the action of lifting both hands according to the embodiment of the present invention. The action of lifting both hands consists of two static postures: first both hands are extended horizontally in a T shape, and then both hands are raised above the top of the head; performing these two postures in sequence is judged as the action of lifting both hands.
When judging whether the target human body has the action of lifting both hands, the specific implementation of the embodiment comprises the following steps:
step S101: the method comprises the steps of obtaining skeleton data of a human body through a skeleton tracking technology of a depth sensor, wherein the skeleton data comprise three-dimensional coordinates of human body joint points, and converting the three-dimensional coordinates into a world coordinate system where the human body is located, and the method specifically comprises the following steps:
step S11: and obtaining the actual distance from the depth sensor to the human body according to the depth information.
d = K·tan(H·d_d + L) - O
wherein d_d is the depth value obtained from the sensor, and the constants are O = 3.7 cm, L = 1.18 rad, K = 12.36 cm, and H = 3.5 × 10⁻⁴ rad.
Step S12: converting the three-dimensional coordinates of the depth image into actual coordinates in a world coordinate system according to an actual distance and a coordinate conversion formula, wherein the coordinate conversion formula is as follows:
x = (x_d - w/2)(z_d + D)F(w/h)
y = (y_d - h/2)(z_d + D)F
z = z_d
wherein (x, y) is the actual coordinate, (x_d, y_d, z_d) is the three-dimensional coordinate in the depth information, w × h is the resolution of the depth sensor, and D and F are constants, where D = -10 and F = 0.0021.
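The two conversions above can be expressed compactly in code. The following C++ sketch is illustrative only; the function names and the exact arrangement of the formulas are assumptions reconstructed from the constants given in the text:

```cpp
#include <cmath>

// Raw depth reading -> actual distance (cm), per the calibration formula
// d = K*tan(H*d_d + L) - O used in this embodiment.
double rawDepthToDistance(double rawDepth) {
    const double K = 12.36;   // cm
    const double H = 3.5e-4;  // rad
    const double L = 1.18;    // rad
    const double O = 3.7;     // cm
    return K * std::tan(H * rawDepth + L) - O;
}

// Depth-image coordinate (xd, yd, zd) -> world coordinate (x, y, z),
// with sensor resolution w x h and constants D = -10, F = 0.0021.
void depthToWorld(double xd, double yd, double zd, int w, int h,
                  double& x, double& y, double& z) {
    const double D = -10.0;
    const double F = 0.0021;
    x = (xd - w / 2.0) * (zd + D) * F * (static_cast<double>(w) / h);
    y = (yd - h / 2.0) * (zd + D) * F;
    z = zd;
}
```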
Step S201: the specific implementation scheme comprises the following sub-steps of:
step S21: the idea of dynamic programming is added to the amplitude limiting filtering algorithm. First, whether the confirmation degree of the bone data is smaller than the threshold JOINT_CONFIDENCE is judged;
if it is smaller than the threshold, the current bone data is updated with the bone data of the filtering buffer.
step S22: if the confirmation degree of the bone data is larger than the threshold, whether the confirmation degree of the last filtering result is smaller than the threshold is judged next; if it is smaller, the bone data in the buffer is updated with the current bone data.
step S23: if the confirmation degree of the last filtering result is also larger than the threshold, whether the current bone data is in the filtering range is judged, namely whether the differences between jointFilter[id].position.X and joint.position.X, between jointFilter[id].position.Y and joint.position.Y, and between jointFilter[id].position.Z and joint.position.Z are each smaller than the threshold; if so, the current bone data is updated with the data of the filtering buffer, and if not, the bone data of the filtering buffer is updated with the current bone data.
Step S301: the bone data in step S201 is subjected to feature extraction, and features constituted by the angles of the respective joint points are obtained by an angle calculation method. The specific embodiment comprises the following steps:
the angle features are obtained by the three-point method: first, the distance information between the joint points is obtained by the formulas
a = √((x_2 - x_3)² + (y_2 - y_3)²), b = √((x_1 - x_3)² + (y_1 - y_3)²), c = √((x_1 - x_2)² + (y_1 - y_2)²)
and then the angle of the connecting line between the joint points is calculated by
θ = arccos((a² + b² - c²) / (2ab))
Wherein, extract 10 angle characteristics that probably are correlated with the posture from 25 joints, all angles are between 0-180, including the contained angle of left shoulder left wrist and Y axle, the contained angle of right shoulder right wrist and Y axle, the contained angle of left shoulder left elbow and left elbow left wrist vector, the contained angle of right shoulder right elbow and right elbow right wrist, left knee, the contained angle of left buttockss and left knee left ankle, right knee, right buttockss and right knee, the contained angle of right ankle, left shoulder, the contained angle of right shoulder and X axle, left buttockss, the contained angle of right buttockss and X axle, the vertebra middle part, the contained angle of vertebra and Y axle under the neck, the head, the contained angle of vertebra basal portion and Y axle.
Step S401 is performed next: training a training sample set based on logistic regression by using the angle features obtained in step S301 to obtain a classifier, thereby classifying and identifying the static posture, wherein the specific implementation scheme of step S401 includes the following steps:
after extracting the angle features of the joints, the motion information of each frame is classified with a logistic regression algorithm. In this embodiment, 6 subjects (three male, three female) were recruited for the experiment, with the Kinect placed 1.8 m in front of the subject. Subjects 2 to 5 each performed the 20 set postures in sequence, each posture being sampled 50 times, for 4000 samples in total; for subject 1, each posture was sampled 250 times, for 5000 samples in total. 50% of subject 1's data was used for training and the remaining 50% for testing, while all the data from subjects 2 to 5 was used for testing. In this way a real-time static gesture recognition system (i.e., the classifier) is established through the logistic regression algorithm. Suppose there is an N-dimensional feature vector x = [x_0, x_1, x_2, …, x_{N-1}]^T and a parameter vector θ = [θ_0, θ_1, θ_2, …, θ_{N-1}]^T; the function model is as follows:
h_θ(x) = g(θ^T x)
wherein the kernel function g is defined as
g(z) = 1 / (1 + e^(-z))
In one-to-many logistic regression classification, a model h_θ^{(i)}(x) is trained for each class, and at prediction time the class whose h_θ^{(i)}(x) value is largest is selected as the classification result. That is, a one-to-many classifier θ = [θ_0, θ_1, θ_2, …, θ_{N-1}]^T is trained for each gesture; if a new sample x^{(i)} comes in, the probability vector p_{1×j} = g(x^{(i)}θ) is calculated, and the subscript of the largest element is the recognized gesture number.
For the action of lifting both hands there are two postures: both hands held horizontally in a T shape, and both hands raised. When the two hands are horizontal in a T shape, h_θ^{(i)}(x) takes its maximum value at the element index corresponding to the number of the hands-horizontal posture, so the posture is judged as both hands held horizontally; the hands-raised posture is recognized in the same way, i.e., the two static postures are both recognized.
Step S501: and identifying the motion of the human body through the classifier to obtain a static gesture identification result.
Step S601 uses a reverse order recognition method to determine whether two defined static gestures are recognized within five frames, thereby recognizing a dynamic motion, and the specific embodiment includes the following steps:
the static postures are determined first, and the correspondence between postures and numbers is defined and stored in a variable Static. When the action of lifting both hands is performed, the posture of both hands held horizontally is recognized first, and its posture number is stored in the variable Static, which holds the static posture recognition result for each frame of data. The Static values can be pushed into a container through the sequence container vector in the C++ standard library. Five frames are taken as one period: data older than five frames is deleted from the vector, and the current frame's data is compared with the previous frames. If the hands-raised posture is found in a later frame of the five, the judgment condition confirms that the two postures within the five frames are hands-horizontal and hands-raised, and the action of lifting both hands is thereby recognized (see the sketch below).
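A minimal C++ sketch of this five-frame reverse order check follows; the posture numbers and the container handling are assumptions based on the description above:

```cpp
#include <vector>

// Posture numbers are assumed for illustration; the patent stores the
// per-frame recognition result in a variable named Static.
constexpr int kHandsHorizontal = 1;  // both hands in a T shape
constexpr int kHandsRaised     = 2;  // both hands above the head

// Keep at most the last five frames of static-posture results and check,
// scanning in reverse order, whether the horizontal posture is followed
// by the raised posture within the window.
bool detectLiftBothHands(std::vector<int>& staticWindow, int currentPosture) {
    staticWindow.push_back(currentPosture);
    if (staticWindow.size() > 5)  // drop data older than five frames
        staticWindow.erase(staticWindow.begin(), staticWindow.end() - 5);

    bool sawRaised = false, sawBoth = false;
    // Reverse order scan: the raised posture must occur at or after the
    // horizontal posture within the five-frame period.
    for (auto it = staticWindow.rbegin(); it != staticWindow.rend(); ++it) {
        if (*it == kHandsRaised) sawRaised = true;
        else if (*it == kHandsHorizontal && sawRaised) sawBoth = true;
    }
    return sawBoth;
}
```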
Based on the same inventive concept, the application also provides a device corresponding to the human body posture-based action recognition method in the first embodiment, which is detailed in the second embodiment.
Example two
The present embodiment provides a motion recognition apparatus based on human body posture, please refer to fig. 4, the apparatus includes:
a bone data acquisition module 401, configured to acquire bone data of a human body through a bone tracking technology of a depth sensor, where the bone data includes three-dimensional coordinates of human body joint points, and the three-dimensional coordinates are converted into a world coordinate system in which the human body is located;
a bone data filtering module 402, configured to filter the bone data by using an improved clipping filtering algorithm to obtain filtered bone data, where the improved clipping filtering algorithm specifically includes: firstly, judging whether the jitter degree of the current bone data exceeds a threshold value, if the jitter degree of the current bone data does not exceed the threshold value, updating the current bone data by adopting the bone data of a filtering buffer area, otherwise, continuously judging whether the jitter degree of the last bone data exceeds the threshold value, and if the jitter degree of the last bone data does not exceed the threshold value, updating the bone data of the filtering buffer area by adopting the current bone data; if the jitter degree of the previous skeleton data exceeds a threshold value, judging whether the current skeleton data is in a filtering range, if so, updating the current skeleton data by adopting the skeleton data of a filtering buffer area, otherwise, updating the skeleton data of the filtering buffer area by adopting the current skeleton data;
an angle feature extraction module 403, configured to perform feature extraction on the filtered bone data according to the converted three-dimensional coordinates and a preset angle calculation method, to obtain an angle feature formed by angles of each joint point;
a training module 404, configured to train a pre-obtained training sample set based on a logistic regression algorithm and an angle feature to obtain a classifier;
a gesture recognition module 405, configured to recognize a motion of a human body through a classifier to obtain a static gesture recognition result;
and the action recognition module 406 is configured to determine whether two preset static gestures are recognized in five frames by using a reverse order recognition method based on the static gesture recognition result, and if so, recognize a dynamic action as an action recognition result.
In one embodiment, the depth sensor further obtains depth information, and the bone data obtaining module 401 is specifically configured to:
obtaining the actual distance from the depth sensor to the human body according to the depth information;
converting the three-dimensional coordinates of the depth image into actual coordinates in a world coordinate system according to an actual distance and a coordinate conversion formula, wherein the coordinate conversion formula is as follows:
x = (x_d - w/2)(z_d + D)F(w/h)
y = (y_d - h/2)(z_d + D)F
z = z_d
wherein (x, y) is the actual coordinate, (x_d, y_d, z_d) is the three-dimensional coordinate in the depth information, w × h is the resolution of the depth sensor, and D and F are constants, where D = -10 and F = 0.0021.
In one embodiment, the degree of jitter of the bone data is expressed in terms of a jitter radius of the bone data.
In one embodiment, the angular feature extraction module 403 is specifically configured to:
adopting a distance calculation formula to calculate the distance information between the joint points, wherein the distance calculation formula is as follows:
a = √((x_2 - x_3)² + (y_2 - y_3)²)
b = √((x_1 - x_3)² + (y_1 - y_3)²)
c = √((x_1 - x_2)² + (y_1 - y_2)²)
wherein the joint points comprise A, B and C, the actual coordinate of joint point A being (x_1, y_1), the actual coordinate of joint point B being (x_2, y_2), and the actual coordinate of joint point C being (x_3, y_3);
Obtaining the angle of a connecting line between all the joint points according to the distance information, and taking the angle as an angle characteristic, specifically:
θ = arccos((a² + b² - c²) / (2ab))
wherein a represents the distance of the connecting line between joint point B and joint point C, b represents the distance of the connecting line between joint point A and joint point C, c represents the distance of the connecting line between joint point A and joint point B, and θ is the included angle between AC and BC.
In one embodiment, the training module 404 is specifically configured to:
training a pre-acquired training sample set by using a logistic regression algorithm based on the angle characteristics to obtain a classification model, wherein the pre-acquired training sample set is the posture data of each frame;
and verifying the effect of the classification model through the data of the test set, and adjusting the hyper-parameters to obtain the adjusted classifier.
In one embodiment, the classifier includes N parameter vectors of the form θ = [θ_0, θ_1, θ_2, …, θ_{N-1}]^T, and the classifier includes N preset gestures and corresponding gesture numbers; the gesture recognition module 405 is specifically configured to:
take the human body action to be detected as a sample x^{(i)} and calculate the probability vector p_{1×j} = g(x^{(i)}θ), wherein i represents the sample number, j represents the number of static posture classes, and g is the kernel function of the logistic regression algorithm;
the subscript corresponding to the element with the largest probability vector is the recognized gesture number, and the gesture corresponding to the recognized gesture number is taken as the static gesture recognition result.
Since the device described in the second embodiment of the present invention is a device used for implementing the method for recognizing a motion based on a human body posture in the first embodiment of the present invention, a person skilled in the art can understand the specific structure and deformation of the device based on the method described in the first embodiment of the present invention, and thus the details thereof are not described herein. All the devices adopted in the method of the first embodiment of the present invention belong to the protection scope of the present invention.
EXAMPLE III
Based on the same inventive concept, the present application further provides a computer-readable storage medium 500, please refer to fig. 5, on which a computer program 511 is stored, which when executed implements the method in the first embodiment.
Because the computer-readable storage medium introduced in the third embodiment of the present invention is a computer-readable storage medium used for implementing the motion recognition method based on human body gestures in the first embodiment of the present invention, based on the method introduced in the first embodiment of the present invention, persons skilled in the art can understand the specific structure and deformation of the computer-readable storage medium, and therefore details are not described here. Any computer readable storage medium used in the method of the first embodiment of the present invention falls within the intended scope of the present invention.
Example four
Based on the same inventive concept, the present application further provides a computer apparatus, please refer to fig. 6, which includes a memory 601, a processor 602, and a computer program 603 stored in the memory and running on the processor, and when the processor executes the program, the method of the first embodiment is implemented.
Since the computer device introduced in the fourth embodiment of the present invention is a device used for implementing the motion recognition method based on human body gestures in the first embodiment of the present invention, a person skilled in the art can understand the specific structure and deformation of the computer device based on the method introduced in the first embodiment of the present invention, and thus the detailed description thereof is omitted here. All the computer devices adopted in the method of the first embodiment of the present invention are within the scope of the present invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the embodiments of the present invention without departing from the spirit or scope of the embodiments of the invention. Thus, if such modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass such modifications and variations.

Claims (10)

1. A motion recognition method based on human body gestures is characterized by comprising the following steps:
step S1: acquiring skeleton data of a human body through a skeleton tracking technology of a depth sensor, wherein the skeleton data comprises three-dimensional coordinates of human body joint points, and converting the three-dimensional coordinates into a world coordinate system in which the human body is located;
step S2: filtering the bone data by using an improved amplitude limiting filtering algorithm to obtain filtered bone data, wherein the improved amplitude limiting filtering algorithm specifically comprises: firstly, judging whether the jitter degree of the current bone data exceeds a threshold value, if the jitter degree of the current bone data does not exceed the threshold value, updating the current bone data by adopting the bone data of a filtering buffer area, otherwise, continuously judging whether the jitter degree of the last bone data exceeds the threshold value, and if the jitter degree of the last bone data does not exceed the threshold value, updating the bone data of the filtering buffer area by adopting the current bone data; if the jitter degree of the previous bone data exceeds the threshold value, judging whether the current bone data is in a filtering range, if so, updating the current bone data by using the bone data of the filtering buffer area, and if not, updating the bone data of the filtering buffer area by using the current bone data;
step S3: performing feature extraction on the filtered bone data according to the converted three-dimensional coordinates and a preset angle calculation method to obtain angle features formed by angles of all joint points;
step S4: training a pre-obtained training sample set based on a logistic regression algorithm and the angle features to obtain a classifier;
step S5: identifying the motion of the human body through the classifier to obtain a static gesture identification result;
step S6: and judging whether two preset static gestures are recognized in five frames or not by adopting a reverse order recognition method based on the static gesture recognition result, if so, recognizing a dynamic action, and taking the dynamic action as an action recognition result.
2. The method according to claim 1, wherein the depth sensor further obtains depth information, and the step S1 specifically includes:
step S1.1: acquiring the actual distance from the Kinect2 to the human body according to the depth information;
step S1.2: converting the three-dimensional coordinates into actual coordinates in a world coordinate system according to the actual distance and a coordinate conversion formula, wherein the coordinate conversion formula is as follows:
x = (x_d - w/2)(z_d + D)F(w/h)
y = (y_d - h/2)(z_d + D)F
z = z_d
wherein (x, y) is the actual coordinate, (x_d, y_d, z_d) is the three-dimensional coordinate in the depth information, w × h is the resolution of the depth sensor, and D and F are constants, where D = -10 and F = 0.0021.
3. The method as claimed in claim 1, wherein in step S2, the dithering degree of the bone data is expressed by a dithering radius of the bone data.
4. The method according to claim 1, wherein step S3 specifically comprises:
step S3.1: adopting a distance calculation formula to calculate the distance information between the joint points, wherein the distance calculation formula is as follows:
a = √((x_2 - x_3)² + (y_2 - y_3)²)
b = √((x_1 - x_3)² + (y_1 - y_3)²)
c = √((x_1 - x_2)² + (y_1 - y_2)²)
wherein the joint points comprise A, B and C, the actual coordinate of joint point A being (x_1, y_1), the actual coordinate of joint point B being (x_2, y_2), and the actual coordinate of joint point C being (x_3, y_3);
Step S3.2: obtaining the angle of the connecting line between the joint points according to the distance information, and taking the angle as the angle characteristic, specifically:
θ = arccos((a² + b² - c²) / (2ab))
wherein a represents the distance of the connecting line between joint point B and joint point C, b represents the distance of the connecting line between joint point A and joint point C, c represents the distance of the connecting line between joint point A and joint point B, and θ is the included angle between AC and BC.
5. The method according to claim 1, wherein step S4 specifically comprises:
step S4.1: training the pre-acquired training sample set by using a logistic regression algorithm based on the angle features to obtain a classification model, wherein the pre-acquired training sample set is the posture data of each frame;
step S4.2: and verifying the effect of the classification model through the data of the test set, and adjusting the hyper-parameters to obtain the adjusted classifier.
6. The method of claim 1, wherein the classifier includes N parameter vectors of the form θ = [θ_0, θ_1, θ_2, …, θ_{N-1}]^T, and the classifier includes N preset gestures and corresponding gesture numbers, and step S5 specifically includes:
step S5.1: taking the human body action to be detected as a sample x^{(i)} and calculating the probability vector p_{1×j} = g(x^{(i)}θ), wherein i represents the sample number, j represents the number of static posture classes, and g is the kernel function of the logistic regression algorithm;
step S5.2: the subscript corresponding to the element with the largest probability vector is the recognized gesture number, and the gesture corresponding to the recognized gesture number is taken as the static gesture recognition result.
7. A motion recognition device based on human body gestures, comprising:
the bone data acquisition module is used for acquiring bone data of a human body through a Kinect2 bone tracking technology, wherein the bone data comprises three-dimensional coordinates of human body joint points, and the three-dimensional coordinates are converted into a world coordinate system in which the human body is located;
a bone data filtering module, configured to filter the bone data by using an improved clipping filtering algorithm to obtain filtered bone data, where the improved clipping filtering algorithm specifically includes: first judging whether the jitter degree of the current bone data exceeds a threshold value; if the jitter degree of the current bone data does not exceed the threshold value, updating the current bone data with the bone data of a filtering buffer; otherwise, further judging whether the jitter degree of the previous bone data exceeds the threshold value, and if the jitter degree of the previous bone data does not exceed the threshold value, updating the bone data of the filtering buffer with the current bone data; if the jitter degree of the previous bone data exceeds the threshold value, judging whether the current bone data is within the filtering range: if so, updating the current bone data with the bone data of the filtering buffer, and if not, updating the bone data of the filtering buffer with the current bone data;
the angle feature extraction module is used for extracting features of the filtered bone data according to the converted three-dimensional coordinates and a preset angle calculation method to obtain angle features formed by angles of all joint points;
the training module is used for training a training sample set acquired in advance based on a logistic regression algorithm and the angle features to obtain a classifier;
the gesture recognition module is used for recognizing the motion of the human body through the classifier to obtain a static gesture recognition result;
and the action recognition module is used for judging, based on the static gesture recognition result and by adopting a reverse-order recognition method, whether two preset static gestures are recognized within five frames, and if so, recognizing a dynamic action and taking it as the action recognition result.
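The decision sequence of the improved clipping filter in the bone data filtering module above can be transcribed almost literally. In this sketch, `jitter` and `in_filter_range` are assumed callables, since the patent does not pin down their exact formulas (claim 3 only ties the jitter degree to a jitter radius):

```python
def clipping_filter_step(current, previous, buffer, jitter, in_filter_range, threshold):
    """One frame of the improved clipping filter, following the recited branch
    order: small current jitter -> smooth with the buffered frame; large
    current jitter but small previous jitter -> accept current into the
    buffer; both large -> fall back to the filtering-range test."""
    if jitter(current) <= threshold:
        current = buffer
    elif jitter(previous) <= threshold:
        buffer = current
    elif in_filter_range(current):
        current = buffer
    else:
        buffer = current
    return current, buffer
```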
8. The apparatus of claim 7, wherein the depth sensor further acquires depth information, the bone data acquisition module being specifically configured to:
acquiring the actual distance from the Kinect2 to the human body according to the depth information;
converting the three-dimensional coordinates of the depth image into actual coordinates in the world coordinate system according to the actual distance and a coordinate conversion formula, wherein the coordinate conversion formula is as follows:

x = (x_d − w/2) × (z_d + D) × F × (w/h)
y = (y_d − h/2) × (z_d + D) × F

wherein (x, y) is the actual coordinate, (x_d, y_d, z_d) is the three-dimensional coordinate of the depth image in the depth information, w × h is the resolution of the Kinect2, and D and F are constants, where D = −10 and F = 0.0021.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed, implements the method of any one of claims 1 to 6.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 6 when executing the program.
CN201810988873.6A 2018-08-28 2018-08-28 A kind of action identification method and device based on human body attitude Pending CN109117893A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810988873.6A CN109117893A (en) 2018-08-28 2018-08-28 A kind of action identification method and device based on human body attitude

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810988873.6A CN109117893A (en) 2018-08-28 2018-08-28 A kind of action identification method and device based on human body attitude

Publications (1)

Publication Number Publication Date
CN109117893A true CN109117893A (en) 2019-01-01

Family

ID=64861058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810988873.6A Pending CN109117893A (en) 2018-08-28 2018-08-28 A kind of action identification method and device based on human body attitude

Country Status (1)

Country Link
CN (1) CN109117893A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170161563A1 (en) * 2008-09-18 2017-06-08 Grandeye, Ltd. Unusual Event Detection in Wide-Angle Video (Based on Moving Object Trajectories)
CN106056035A (en) * 2016-04-06 2016-10-26 南京华捷艾米软件科技有限公司 Motion-sensing technology based kindergarten intelligent monitoring method
CN107180235A (en) * 2017-06-01 2017-09-19 陕西科技大学 Human action recognizer based on Kinect
CN107943276A (en) * 2017-10-09 2018-04-20 广东工业大学 Based on the human body behavioral value of big data platform and early warning
CN107832713A (en) * 2017-11-13 2018-03-23 南京邮电大学 A kind of human posture recognition method based on OptiTrack

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
朱宇辉 (Zhu Yuhui): China Master's Theses Full-text Database, Information Science and Technology series (《中国优秀硕士学位论文全文数据库信息科技辑》), 15 March 2017 *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635783A (en) * 2019-01-02 2019-04-16 上海数迹智能科技有限公司 Video monitoring method, device, terminal and medium
CN110309743A (en) * 2019-06-21 2019-10-08 新疆铁道职业技术学院 Human body attitude judgment method and device based on professional standard movement
CN110327053A (en) * 2019-07-12 2019-10-15 广东工业大学 A kind of human body behavior safety monitoring method, equipment and system based on lift space
CN111067597A (en) * 2019-12-10 2020-04-28 山东大学 System and method for determining puncture path according to human body posture in tumor puncture
CN111067597B (en) * 2019-12-10 2021-04-16 山东大学 System for determining puncture path according to human body posture in tumor puncture
CN111142663B (en) * 2019-12-27 2024-02-02 恒信东方文化股份有限公司 Gesture recognition method and gesture recognition system
CN111142663A (en) * 2019-12-27 2020-05-12 恒信东方文化股份有限公司 Gesture recognition method and gesture recognition system
CN111341040A (en) * 2020-03-28 2020-06-26 江西财经职业学院 Financial self-service equipment and management system thereof
CN111754619A (en) * 2020-06-29 2020-10-09 武汉市东旅科技有限公司 Bone space data acquisition method, acquisition device, electronic device and storage medium
CN111840920A (en) * 2020-07-06 2020-10-30 暨南大学 Upper limb intelligent rehabilitation system based on virtual reality
CN111860243A (en) * 2020-07-07 2020-10-30 华中师范大学 Robot action sequence generation method
CN112233769A (en) * 2020-10-12 2021-01-15 安徽动感智能科技有限公司 Recovery system after suffering from illness based on data acquisition
CN112434741A (en) * 2020-11-25 2021-03-02 杭州盛世传奇标识系统有限公司 Method, system, device and storage medium for using interactive introduction identifier
CN112711332B (en) * 2020-12-29 2022-07-15 上海交通大学宁波人工智能研究院 Human body motion capture method based on attitude coordinates
CN112711332A (en) * 2020-12-29 2021-04-27 上海交通大学宁波人工智能研究院 Human body motion capture method based on attitude coordinates
CN112801061A (en) * 2021-04-07 2021-05-14 南京百伦斯智能科技有限公司 Posture recognition method and system
CN113627369A (en) * 2021-08-16 2021-11-09 南通大学 Action recognition and tracking method in auction scene
CN114677625A (en) * 2022-03-18 2022-06-28 北京百度网讯科技有限公司 Object detection method, device, apparatus, storage medium and program product
CN114677625B (en) * 2022-03-18 2023-09-08 北京百度网讯科技有限公司 Object detection method, device, apparatus, storage medium, and program product
CN116392798A (en) * 2023-03-09 2023-07-07 恒鸿达(福建)体育科技有限公司 Automatic test method, device, equipment and medium for parallel lever arm bending and stretching
CN116719417A (en) * 2023-08-07 2023-09-08 海马云(天津)信息技术有限公司 Motion constraint method and device for virtual digital person, electronic equipment and storage medium
CN116719417B (en) * 2023-08-07 2024-01-26 海马云(天津)信息技术有限公司 Motion constraint method and device for virtual digital person, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109117893A (en) A kind of action identification method and device based on human body attitude
Hasan et al. RETRACTED ARTICLE: Static hand gesture recognition using neural networks
US20130335318A1 (en) Method and apparatus for doing hand and face gesture recognition using 3d sensors and hardware non-linear classifiers
Ding et al. STFC: Spatio-temporal feature chain for skeleton-based human action recognition
US9734435B2 (en) Recognition of hand poses by classification using discrete values
Maisto et al. An accurate algorithm for the identification of fingertips using an RGB-D camera
WO2008007471A1 (en) Walker tracking method and walker tracking device
JP2009514109A (en) Discriminant motion modeling for tracking human body motion
EP2980728A1 (en) Procedure for identifying a hand gesture
CN109766782B (en) SVM-based real-time limb action recognition method
CN111680550B (en) Emotion information identification method and device, storage medium and computer equipment
CN108875586B (en) Functional limb rehabilitation training detection method based on depth image and skeleton data multi-feature fusion
Xue et al. A Chinese sign language recognition system using leap motion
Manikandan et al. Hand gesture detection and conversion to speech and text
JP2015195020A (en) Gesture recognition device, system, and program for the same
Ong et al. Investigation of feature extraction for unsupervised learning in human activity detection
CN110910426A (en) Action process and action trend identification method, storage medium and electronic device
CN110991292A (en) Action identification comparison method and system, computer storage medium and electronic device
CN110598647A (en) Head posture recognition method based on image recognition
Xu et al. A novel method for hand posture recognition based on depth information descriptor
CN111274854B (en) Human body action recognition method and vision enhancement processing system
CN116543452A (en) Gesture recognition and gesture interaction method and device
KR101868520B1 (en) Method for hand-gesture recognition and apparatus thereof
KR102237131B1 (en) Appratus and method for processing image including at least one object
Periyanayaki et al. An Efficient way of Emotion and Gesture Recognition using Deep Learning Algorithms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190101