CN112800892A - Human body posture recognition method based on OpenPose - Google Patents


Info

Publication number
CN112800892A
CN112800892A
Authority
CN
China
Prior art keywords
action
data
characteristic
sequence
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110060938.2A
Other languages
Chinese (zh)
Other versions
CN112800892B (en)
Inventor
徐佳
王子沁
骆健
徐力杰
李宾
胡洋
蒋凌云
鲁蔚锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Yiqian Information Technology Co ltd
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202110060938.2A priority Critical patent/CN112800892B/en
Publication of CN112800892A publication Critical patent/CN112800892A/en
Application granted granted Critical
Publication of CN112800892B publication Critical patent/CN112800892B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a human body posture recognition method based on OpenPose, built on a recognition system consisting of five modules: OpenPose data acquisition, data preprocessing, feature value construction, deep network training and action judgment, and OpenCV drawing. The method comprises the following steps. Step one: acquire a human skeleton data frame sequence of a person performing a target action using the open-source project OpenPose. Step two: screen out the main key point data that represent the action from the skeleton data. Step three: extract and calculate action feature values from the screened skeleton joint point data and construct the feature vector sequence of the action. Step four: preprocess the feature vector sequence. Step five: store the feature vector sequence of the action sample set as a standard action template. Step six: acquire the action feature sequence in real time and feed it to a pre-trained neural network. Step seven: obtain the prediction result from the network and give the action standard degree. The invention is simple and reliable to implement and is suitable for real-time action recognition systems.

Description

Human body posture recognition method based on OpenPose
Technical Field
The invention relates to pattern recognition and human-computer interaction, and in particular to a human body posture recognition method based on OpenPose.
Background
In recent years, human-computer interaction applications related to computer vision, such as behavior monitoring, electronic games and healthcare, have become increasingly popular. The key technology behind these applications is making a machine understand the actions of a human body, i.e. human behavior recognition. Existing skeleton-based behavior recognition methods can be roughly divided into two categories: joint-based methods and body-part-based methods. Joint-based methods regard the human skeleton as a set of points and describe it with position-related features of the joint points in that set, including joint point position features, pairwise relative joint position features, and joint orientation features in a fixed coordinate system. Body-part-based methods, on the other hand, regard the human skeleton as a series of connected rigid segments and represent the three-dimensional skeleton with joint angle features, biomimetic three-dimensional features, three-dimensional geometric relation features between different parts, and the like. These studies combine intra-frame spatial features and inter-frame temporal features to represent the skeleton sequence, but they neglect the varying weight of different postures and joint points, so the feature representation is redundant: not all joint points and postures are equally important, and the important ones should carry greater weight in deciding which behavior category an action belongs to. How to overcome these defects of the prior art has become one of the key problems to be solved urgently in the technical field of pattern recognition and human-computer interaction.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to provide an OpenPose-based action recognition method that is stable and reliable, resists environmental interference, and is suitable for real-time action recognition systems.
The technical scheme is as follows: a human body posture recognition method based on OpenPose runs on a human body posture recognition system consisting of five modules: OpenPose data acquisition, data preprocessing, feature value construction, deep network training and action judgment, and OpenCV drawing. The recognition method of the system comprises the following steps:
Step one: acquire, through the OpenPose data acquisition module, a continuous skeleton data frame sequence of the person performing the target action from the WrapperPython interface provided by OpenPose. OpenPose is an open-source system developed by Carnegie Mellon University that can acquire the spatial and positional information of each joint point of the human skeleton; human skeleton data refers to the human joint point data provided by that project.
Step two: through the data preprocessing module, screen out the main key point data that represent the action from the skeleton data. The main key point data representing the action are the joint point data that play a key role in action recognition. For example, if gesture actions are to be detected and recognized, the joint point data of the upper limbs can be selected, including the right hand, right wrist, right shoulder, left hand, left wrist and left shoulder joint points; other actions are treated analogously.
Step three: through the feature value construction module, extract and calculate action feature values from the screened skeleton joint point data and construct the feature vector sequence of the action. The action features include joint point positions and angles; the feature vector sequence is the sequence formed by the feature vectors composed of these feature values.
Step four: preprocess the feature vector sequence. The preprocessing refers to the normalization of the joint point coordinates in the feature vectors.
Step five: through the deep network training and action judgment module, store the feature vector sequence of the action sample set as a standard action template, later used to calculate the standard degree.
Step six: acquire the action feature sequence in real time and feed it to a pre-trained neural network. The pre-trained neural network is a seven-layer structure built with Keras: three ReLU activation layers, three BatchNormalization layers, and a final softmax output layer.
Step seven: obtain the prediction result from the network and give the action standard degree. According to the prediction result obtained in step six and the standard action sample set obtained in step five, the action standard degree is calculated by comparing sequence similarity, the recognition of the action is completed, and the action standard degree is presented through the OpenCV drawing module.
Further, the feature extraction in step three extracts suitable features from the skeleton data, including the joint position P and the inter-joint angle θ. The feature vector sequence R is composed of the feature vectors calculated from each frame of skeleton data and can be represented as:
R = {R_1, R_2, …, R_n, …, R_N}
where N is the number of video frames and R_n is the feature vector of the n-th frame of skeleton data. R_n can be expressed as:
R_n = (r_n^1, r_n^2, …, r_n^i, …, r_n^I)
where r_n^i is the i-th feature value of the feature vector and I is the dimension of the feature vector.
Further, the normalization of the joint point coordinates in step four means that the position of one joint point is selected as the center origin, with position coordinates:
C = (C_X, C_Y)
The spatial coordinate value of the j-th joint point then becomes:
P_j = (x_j − C_X, y_j − C_Y)
where j ∈ {0, 1, …, m−1} and m is the number of joint points.
Furthermore, the standard action sample template library of step five is composed of the optimal action samples acquired and stored in advance. Let M_g be the feature vector of action g in the template library; it can be expressed as:
M_g = (m_g^1, m_g^2, …, m_g^i, …, m_g^I)
where I is the dimension of the feature vector. The standard action sample template library can then be represented as {M_g}, g ∈ {0, 1, …, K−1}, where K is the number of actions and M_g is the standard action feature vector of the g-th action.
Further, in step six, result prediction is carried out by a neural network trained in advance on self-collected data, whose output layer is softmax. Softmax is used in the multi-classification process: it maps the outputs of multiple neurons into the interval (0, 1), which can be regarded as probabilities, so that multi-class classification can be performed. The specific formula is:
S_g = e^{V_g} / Σ_{k=0}^{K−1} e^{V_k}
where V is the array of action result values output by the network, V_g is the value for action g, g ∈ {0, 1, …, K−1}, K is the number of actions, S_g is the probability value of the g-th action, and e is the base of the natural logarithm.
Further, step seven calculates the standard degree of the action by a sequence similarity method that essentially uses cosine similarity. The cosine of 0 degrees is 1, meaning that the smaller the angle between two vectors, i.e. the more similar they are, the closer the value is to 1. Let the obtained feature vector be R and the standard template vector be M_g; then:
cos(R, M_g) = (R · M_g) / (‖R‖ ‖M_g‖)
= Σ_{i=1}^{I} R_i · m_g^i / ( sqrt(Σ_{i=1}^{I} R_i^2) · sqrt(Σ_{i=1}^{I} (m_g^i)^2) )
where R_i is the i-th value in the vector R, m_g^i is the i-th value in the vector M_g, and I is the dimension of the feature vector.
The realization principle of the invention is as follows: given skeleton data of a human body action, the features of the action are extracted, a pre-trained neural network is then used to predict the result and complete classification and recognition, and the standard degree is given by means of cosine similarity. Normalizing the feature values reduces the influence of different body sizes and of the relative position between the person and the camera, which strengthens the robustness of the algorithm; using the neural network improves the practicality and accuracy of the system, making it suitable for real-time action recognition systems.
Advantageous effects: compared with the prior art, the invention has the following remarkable advantages:
(1) the invention performs action recognition based on skeleton data; compared with action recognition methods based on depth maps, skeleton data are less affected by the environment, no complex image processing algorithm is needed for preprocessing, and the action features are easier to extract and calculate;
(2) the invention normalizes the feature vectors, which enhances the robustness and practicality of the system;
(3) the invention uses a trained neural network model to predict the result, which improves the real-time performance and accuracy of action recognition.
Drawings
FIG. 1 is a block diagram of the OpenPose-based human body posture recognition system of the present invention;
FIG. 2 is a flow chart of the method of the present invention;
FIG. 3 is a schematic representation of the 18-joint human skeleton used by the present invention;
FIG. 4 is a schematic illustration of the human skeleton for the hugging and bowing actions in the present invention;
FIG. 5 is a schematic diagram of the human skeleton for the White Crane Spreads Its Wings action of the present invention.
Detailed Description
The following detailed description of embodiments of the present invention will be made with reference to the accompanying drawings and examples.
Fig. 1 is a block diagram of the system of the present invention, which is composed of five modules: OpenPose data acquisition, data preprocessing, feature value construction, deep network training and action judgment, and OpenCV drawing. The flow of the implementation method based on this system is shown in Fig. 2 and specifically includes the following steps:
the method comprises the following steps: acquiring a continuous framework data frame sequence of the person under the execution of the target action from an interface WrapperPython provided by openposition through an openposition data acquisition module: openposition refers to an open source system developed by Kanai Meilong university and capable of acquiring spatial information and position information of each joint point of a human skeleton, and human skeleton data refers to human joint point data provided by the project;
step two: through a data preprocessing module, screening out main key point data which can represent actions from skeleton data: the main key point data for representing the action is joint point data playing a key role in action identification; if the gesture motion is detected and identified, the joint point data of the upper limb can be selected: including right hand joint point, right wrist joint point, right shoulder joint point, left hand joint point, left wrist joint point, left shoulder joint point, etc., and so on for other actions.
Step three: through the feature value construction module, extract and calculate action feature values from the screened skeleton joint point data and construct the feature vector sequence of the action. The action features include joint point positions and angles; the feature vector sequence is the sequence formed by the feature vectors composed of these feature values.
Step four: preprocess the feature vector sequence. The preprocessing refers to the normalization of the joint point coordinates in the feature vectors.
Step five: through the deep network training and action judgment module, store the feature vector sequence of the action sample set as a standard action sample template library. The action sample set contains standard samples of the seven actions handled by the system, namely standing, saluting, bowing, hugging, smoking, falling, and White Crane Spreads Its Wings, and is used to calculate the standard degree.
Step six: acquire the action feature sequence in real time and feed it to a pre-trained neural network. The pre-trained neural network is a seven-layer structure built with Keras: three ReLU activation layers, three BatchNormalization layers, and a final softmax output layer.
Step seven: obtain the prediction result from the network and give the action standard degree. According to the prediction result obtained in step six and the standard action sample set obtained in step five, the action standard degree is calculated by comparing sequence similarity, the recognition of the action is completed, and the action standard degree is presented through the OpenCV drawing module.
In the following, an embodiment of the OpenPose-based human body posture recognition system according to the present invention is described in more detail. The recognition action randomly given by the system is White Crane Spreads Its Wings:
First: acquire a continuous skeleton data frame sequence of the person performing the target action from the WrapperPython interface provided by OpenPose. Run the program, acquire real-time video frame data through the computer camera, and then feed the video frames into the Python interface provided by OpenPose to obtain the coordinate information of the joint points of the human body in each video frame. Fig. 3 shows a schematic representation of a human skeleton containing 18 joints, including the left and right shoulders, nose, eyes, left and right ears, left and right elbows, left and right wrists, and so on. OpenPose acquires and records the corresponding skeleton data of the continuous action to form a skeleton frame sequence; these data are used for storing the subsequent action templates and for recognizing the actions.
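A minimal sketch of this acquisition step is given below, assuming the pyopenpose Python bindings built from the OpenPose project are importable and a webcam is available. The wrapper call names follow the examples shipped with OpenPose, but their exact signatures vary between OpenPose versions, and the model paths and clip length are illustrative assumptions rather than the patented implementation.

```python
# Illustrative sketch only: grab webcam frames and run OpenPose on each one.
# Assumes the pyopenpose bindings are on PYTHONPATH; call names follow the
# OpenPose Python examples and may differ between OpenPose versions.
import cv2
import pyopenpose as op

params = {"model_folder": "models/", "model_pose": "COCO"}  # 18-joint model (assumed paths)
wrapper = op.WrapperPython()
wrapper.configure(params)
wrapper.start()

cap = cv2.VideoCapture(0)
skeleton_frames = []                      # sequence of per-frame joint arrays
while len(skeleton_frames) < 100:         # collect a fixed-length clip (assumed length)
    ok, frame = cap.read()
    if not ok:
        break
    datum = op.Datum()
    datum.cvInputData = frame
    wrapper.emplaceAndPop(op.VectorDatum([datum]))
    if datum.poseKeypoints is not None:   # shape: (people, 18, 3) -> (x, y, confidence)
        skeleton_frames.append(datum.poseKeypoints[0])  # first detected person
cap.release()
```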
Second: screen out the main key point data that represent the action from the skeleton data, i.e. filter the obtained skeleton joint point data. In this example the recognized action is White Crane Spreads Its Wings, so the joint point information of the toes is removed, and the joint point position information of the nose, left and right wrists, left and right elbows, left and right shoulders, neck, left and right hips, and left and right knees is selected.
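Screening down to these joints amounts to index selection on the per-frame keypoint array. The index map below follows the usual 18-joint COCO layout output by OpenPose; the mapping and the exact set of dropped joints (eyes, ears and lower-leg extremities in this illustration) should be checked against the model actually configured.

```python
import numpy as np

# Usual 18-joint COCO index layout of OpenPose output (verify against your model).
COCO = {"nose": 0, "neck": 1, "r_shoulder": 2, "r_elbow": 3, "r_wrist": 4,
        "l_shoulder": 5, "l_elbow": 6, "l_wrist": 7, "r_hip": 8, "r_knee": 9,
        "r_ankle": 10, "l_hip": 11, "l_knee": 12, "l_ankle": 13}

# Joints kept for the White Crane Spreads Its Wings example.
KEEP = [COCO[k] for k in ("nose", "r_wrist", "l_wrist", "r_elbow", "l_elbow",
                          "r_shoulder", "l_shoulder", "neck", "r_hip", "l_hip",
                          "r_knee", "l_knee")]

def screen_keypoints(pose_keypoints):
    """pose_keypoints: (18, 3) array of (x, y, confidence); returns (12, 2) x/y only."""
    return np.asarray(pose_keypoints)[KEEP, :2]
```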
Third: extract and calculate action feature values from the screened skeleton joint point data and construct the feature vector sequence of the action. Fifteen groups of distance information are calculated (left and right hand to waist, left and right hand to shoulder, left and right hand to knee, left and right hand to head, the two hands, the two feet, and so on), together with fifteen groups of angle information among the hands, legs, shoulders, waist, knees, head and neck, giving thirty feature values in total. The feature vector sequence R is composed of the feature vectors calculated from each frame of skeleton data and can be expressed as:
R = {R_1, R_2, …, R_n, …, R_N}
where N is the number of video frames and R_n is the feature vector of the n-th frame of skeleton data. R_n can be expressed as:
R_n = (r_n^1, r_n^2, …, r_n^i, …, r_n^I)
where r_n^i is the i-th feature value of the feature vector and I is the dimension of the feature vector.
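The per-frame distance and angle features can be assembled as in the following sketch. The helper names and the particular joint pairs and triples passed in are illustrative assumptions; the patent only fixes that thirty distance and angle values are computed per frame.

```python
import numpy as np

def distance(a, b):
    """Euclidean distance between two (x, y) joint positions."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def angle(a, b, c):
    """Angle at joint b (radians) formed by the segments b->a and b->c."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.arccos(np.clip(cos_t, -1.0, 1.0)))

def frame_feature_vector(joints, dist_pairs, angle_triples):
    """Build one feature vector R_n from a single frame of (x, y) joints.

    dist_pairs / angle_triples are the (assumed) lists of joint index pairs
    and triples to measure; their union yields I = 30 values in this embodiment.
    """
    feats = [distance(joints[i], joints[j]) for i, j in dist_pairs]
    feats += [angle(joints[i], joints[j], joints[k]) for i, j, k in angle_triples]
    return np.array(feats)

# The full sequence R = {R_1, ..., R_N} is then one vector per frame:
# R = np.stack([frame_feature_vector(f, dist_pairs, angle_triples) for f in frames])
```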
Fourth: preprocess the feature vector sequence. The position of one joint point (the neck) is selected as the center origin, with position coordinates:
C = (C_X, C_Y)
The spatial coordinate value of the j-th joint point then becomes:
P_j = (x_j − C_X, y_j − C_Y)
where j ∈ {0, 1, …, m−1} and m is the number of joint points.
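A sketch of this normalization, assuming the joint coordinates of one frame are stored as an (m, 2) array and the neck is used as the center origin:

```python
import numpy as np

def normalize_joints(joints_xy, center_index):
    """Translate all joint coordinates so the chosen joint becomes the origin.

    joints_xy: (m, 2) array of (x, y) positions for one frame.
    center_index: index of the joint used as the center origin (e.g. the neck).
    """
    center = joints_xy[center_index]   # C = (C_X, C_Y)
    return joints_xy - center          # P_j = (x_j - C_X, y_j - C_Y)
```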
Fifth: store the feature vector sequence of the action sample set as a standard action sample template library. Let M_g be the feature vector of action g in the template library; it can be expressed as:
M_g = (m_g^1, m_g^2, …, m_g^i, …, m_g^I)
where I is the dimension of the feature vector. The standard action sample template library can then be represented as {M_g}, g ∈ {0, 1, …, K−1}, where K is the number of actions and M_g is the standard action feature vector of the g-th action.
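One simple way to persist such a template library is to store one feature vector per action with NumPy, as sketched below. Averaging the collected samples and the file name are assumptions made for illustration; the patent only states that optimal samples are acquired and stored in advance.

```python
import numpy as np

def build_template_library(samples_per_action):
    """samples_per_action: dict mapping action id g -> list of feature vectors.

    Returns a (K, I) array whose g-th row is M_g, here taken as the mean of
    the collected samples for action g (an assumed choice of "optimal" sample).
    """
    K = len(samples_per_action)
    return np.stack([np.mean(samples_per_action[g], axis=0) for g in range(K)])

# usage (assuming samples_per_action was collected beforehand):
# templates = build_template_library(samples_per_action)  # shape (K, I), I = 30 here
# np.save("standard_action_templates.npy", templates)     # assumed file name
```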
Sixth: acquire the action feature sequence in real time and feed it to the pre-trained neural network. The neural network predicts the result; it was trained in advance on 14000 self-collected samples covering the seven actions, 2000 samples per action. Its output layer is softmax, which is used in the multi-classification process: it maps the outputs of multiple neurons into the interval (0, 1), which can be understood as probabilities, so that multi-class classification can be performed. The specific formula is:
S_g = e^{V_g} / Σ_{k=0}^{K−1} e^{V_k}
where V is the array of action result values output by the network, V_g is the value for action g, g ∈ {0, 1, …, K−1}, K is the number of actions, S_g is the probability value of the g-th action, and e is the base of the natural logarithm.
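The seven-layer structure described above (three ReLU-activated layers, three BatchNormalization layers and a final softmax output) can be sketched in Keras as follows. The layer widths, optimizer and training settings are assumptions for illustration; the patent only fixes the layer types and counts, the 30-dimensional input and the seven output classes.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_action_classifier(input_dim=30, num_actions=7):
    """Seven-layer classifier: 3 x (Dense + ReLU) interleaved with 3 x BatchNorm,
    plus a softmax output layer. Widths and hyperparameters are illustrative."""
    model = keras.Sequential([
        keras.Input(shape=(input_dim,)),
        layers.Dense(64, activation="relu"),
        layers.BatchNormalization(),
        layers.Dense(64, activation="relu"),
        layers.BatchNormalization(),
        layers.Dense(32, activation="relu"),
        layers.BatchNormalization(),
        layers.Dense(num_actions, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training on the self-collected set (14000 samples, 2000 per action) would then be:
# model = build_action_classifier()
# model.fit(X_train, y_train, epochs=50, batch_size=64, validation_split=0.1)
```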
Seventh: obtain the prediction result from the network and give the action standard degree. The standard degree of the action is calculated by a sequence similarity method that essentially uses cosine similarity: the cosine of 0 degrees is 1, meaning that the smaller the angle between two vectors, i.e. the more similar they are, the closer the value is to 1. Let the obtained feature vector be R and the standard template vector be M_g; then:
cos(R, M_g) = (R · M_g) / (‖R‖ ‖M_g‖)
= Σ_{i=1}^{I} R_i · m_g^i / ( sqrt(Σ_{i=1}^{I} R_i^2) · sqrt(Σ_{i=1}^{I} (m_g^i)^2) )
where R_i is the i-th value in the vector R, m_g^i is the i-th value in the vector M_g, and I is the dimension of the feature vector. Since the system uses thirty feature values, the dimension I here takes the value 30.
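The standard-degree computation thus reduces to the cosine similarity between the live feature vector and the template row selected by the network's prediction, as in this sketch (function and variable names are illustrative):

```python
import numpy as np

def standard_degree(R, M_g):
    """Cosine similarity between the observed feature vector R and the
    standard template vector M_g, reported as a percentage."""
    sim = np.dot(R, M_g) / (np.linalg.norm(R) * np.linalg.norm(M_g) + 1e-8)
    return 100.0 * float(sim)

# usage (assuming `model`, `templates` and a live feature vector `R` exist):
# probs = model.predict(R.reshape(1, -1))[0]   # S_g values from the softmax layer
# g = int(np.argmax(probs))                    # predicted action index
# print(f"action {g}, standard degree {standard_degree(R, templates[g]):.0f}%")
```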
In this example, the action to be recognized is White Crane Spreads Its Wings. Fig. 4 shows hugging and bowing actions, which the system judges as wrong; Fig. 5 shows the target action, i.e. White Crane Spreads Its Wings. The calculated output is S = [8.1422404e-05, 4.5637302e-03, 3.7604365e-02, 4.4506835e-03, 6.3662082e-03, 1.1721434e-03, 9.4693369e-01], i.e. S_6, corresponding to the White Crane Spreads Its Wings action, is the highest, so the judgment is correct; at the same time the calculated standard degree is 97%.
Table 1 shows comparative test data between the system and the traditional template matching method. Three groups of actions, namely Crane, Bow and Salute, were selected, and 1000 samples were tested for each action. The test results show that the recognition accuracy of traditional template matching is basically around eighty percent, while the recognition accuracy of the system can reach more than ninety percent.
Table 1: Performance test data comparison
The embodiments of the present invention have been described in detail above with reference to the drawings and examples, but the description is not intended to be limiting. The invention has been verified by repeated experiments and achieves satisfactory results.

Claims (6)

1. An OpenPose-based human body posture recognition method, characterized in that: the method runs on a human body posture recognition system composed of five modules, namely OpenPose data acquisition, data preprocessing, feature value construction, deep network training and action judgment, and OpenCV drawing, and the recognition method of the system comprises the following steps:
step one: acquiring, through the OpenPose data acquisition module, a human skeleton data frame sequence of a person performing a target action from the WrapperPython interface provided by OpenPose, wherein OpenPose is an open-source system developed by Carnegie Mellon University that can acquire spatial and positional information of each joint point of the human skeleton, and the human skeleton data are the human joint point data provided by that project;
step two: screening out, through the data preprocessing module, the main key point data used to represent the action from the skeleton data, the main key point data representing the action being the joint point data that play a key role in action recognition;
step three: extracting and calculating, through the feature value construction module, action feature values from the screened skeleton joint point data and constructing a feature vector sequence of the action, the action features including joint point positions and angles, and the feature vector sequence being a sequence formed by the feature vectors composed of the feature values;
step four: preprocessing the feature vector sequence, the preprocessing referring to the normalization of the joint point coordinates in the feature vectors;
step five: storing, through the deep network training and action judgment module, the feature vector sequence of the action sample set as a standard action sample template library used for calculating the standard degree;
step six: acquiring the action feature sequence in real time and feeding it to a pre-trained neural network, the pre-trained neural network being a seven-layer structure built with Keras, comprising three ReLU activation layers, three BatchNormalization layers and a final softmax output layer;
step seven: obtaining the prediction result from the network and giving the action standard degree: according to the prediction result obtained in step six and the standard action sample set obtained in step five, the action standard degree is calculated by comparing sequence similarity, the recognition of the action is completed, and the action standard degree is presented through the OpenCV drawing module.
2. The OpenPose-based human body posture recognition method according to claim 1, characterized in that: the feature extraction in step three extracts the joint position P and the inter-joint angle θ from the skeleton data, the feature vector sequence R is composed of the feature vectors calculated from each frame of skeleton data, and the expression of R is:
R = {R_1, R_2, …, R_n, …, R_N}
where N is the number of video frames and R_n is the feature vector of the n-th frame of skeleton data, the expression of R_n being:
R_n = (r_n^1, r_n^2, …, r_n^i, …, r_n^I)
where r_n^i is the i-th feature value of the feature vector and I is the dimension of the feature vector.
3. The OpenPose-based human body posture recognition method according to claim 1, characterized in that: the normalization in step four selects the position of one joint point as the center origin, with position coordinates:
C = (C_X, C_Y)
and the spatial coordinate value of the j-th joint point is:
P_j = (x_j − C_X, y_j − C_Y)
where j ∈ {0, 1, …, m−1} and m is the number of joint points.
4. The OpenPose-based human body posture recognition method according to claim 1, characterized in that: the standard action sample template library in step five is composed of the optimal action samples acquired and stored in advance; let M_g be the feature vector of action g in the standard action sample template library, whose expression is:
M_g = (m_g^1, m_g^2, …, m_g^i, …, m_g^I)
where I is the dimension of the feature vector; the standard action sample template library is then expressed as {M_g}, g ∈ {0, 1, …, K−1}, where K is the number of actions and M_g is the standard action feature vector of the g-th action.
5. The OpenPose-based human body posture recognition method according to claim 1, characterized in that: in step six, result prediction is carried out by a neural network trained in advance on self-collected data, whose output layer is softmax; softmax maps the outputs of multiple neurons into the interval (0, 1) in the multi-classification process, and its expression is:
S_g = e^{V_g} / Σ_{k=0}^{K−1} e^{V_k}
where V is the array of action result values output by the network, V_g is the value for action g, g ∈ {0, 1, …, K−1}, K is the number of actions, S_g is the probability value of the g-th action, and e is the base of the natural logarithm.
6. The OpenPose-based human body posture recognition method according to claim 1, characterized in that: in step seven, the standard degree of the action is calculated using cosine similarity; let the obtained feature vector be R and the standard action sample template vector be M_g; then:
cos(R, M_g) = (R · M_g) / (‖R‖ ‖M_g‖) = Σ_{i=1}^{I} R_i · m_g^i / ( sqrt(Σ_{i=1}^{I} R_i^2) · sqrt(Σ_{i=1}^{I} (m_g^i)^2) )
where R_i is the i-th value in the vector R, m_g^i is the i-th value in the vector M_g, and I is the dimension of the feature vector.
CN202110060938.2A 2021-01-18 2021-01-18 Human body posture recognition method based on OpenPose Active CN112800892B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110060938.2A CN112800892B (en) 2021-01-18 2021-01-18 Human body posture recognition method based on OpenPose

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110060938.2A CN112800892B (en) 2021-01-18 2021-01-18 Human body posture recognition method based on OpenPose

Publications (2)

Publication Number Publication Date
CN112800892A true CN112800892A (en) 2021-05-14
CN112800892B CN112800892B (en) 2022-08-26

Family

ID=75809987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110060938.2A Active CN112800892B (en) 2021-01-18 2021-01-18 Human body posture recognition method based on OpenPose

Country Status (1)

Country Link
CN (1) CN112800892B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113962245A (en) * 2021-09-08 2022-01-21 华南理工大学 Human behavior recognition method and system
CN114067146A (en) * 2021-09-24 2022-02-18 北京字节跳动网络技术有限公司 Evaluation method, evaluation device, electronic device and computer-readable storage medium
CN114783059A (en) * 2022-04-20 2022-07-22 浙江东昊信息工程有限公司 Temple incense and worship participation management method and system based on depth camera
CN114973403A (en) * 2022-05-06 2022-08-30 广州紫为云科技有限公司 Efficient behavior prediction method based on space-time dual-dimension feature depth network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930767A (en) * 2016-04-06 2016-09-07 南京华捷艾米软件科技有限公司 Human body skeleton-based action recognition method
CN110096950A (en) * 2019-03-20 2019-08-06 西北大学 A kind of multiple features fusion Activity recognition method based on key frame
CN112164091A (en) * 2020-08-25 2021-01-01 南京邮电大学 Mobile device human body pose estimation method based on three-dimensional skeleton extraction

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930767A (en) * 2016-04-06 2016-09-07 南京华捷艾米软件科技有限公司 Human body skeleton-based action recognition method
CN110096950A (en) * 2019-03-20 2019-08-06 西北大学 A kind of multiple features fusion Activity recognition method based on key frame
CN112164091A (en) * 2020-08-25 2021-01-01 南京邮电大学 Mobile device human body pose estimation method based on three-dimensional skeleton extraction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王健 (Wang Jian): "A human body action recognition method based on a depth sensor", Science and Technology Innovation Herald (科技创新导报) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113962245A (en) * 2021-09-08 2022-01-21 华南理工大学 Human behavior recognition method and system
CN113962245B (en) * 2021-09-08 2024-05-28 华南理工大学 Human behavior recognition method and system
CN114067146A (en) * 2021-09-24 2022-02-18 北京字节跳动网络技术有限公司 Evaluation method, evaluation device, electronic device and computer-readable storage medium
CN114783059A (en) * 2022-04-20 2022-07-22 浙江东昊信息工程有限公司 Temple incense and worship participation management method and system based on depth camera
CN114973403A (en) * 2022-05-06 2022-08-30 广州紫为云科技有限公司 Efficient behavior prediction method based on space-time dual-dimension feature depth network
CN114973403B (en) * 2022-05-06 2023-11-03 广州紫为云科技有限公司 Behavior prediction method based on space-time double-dimension feature depth network

Also Published As

Publication number Publication date
CN112800892B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
CN112800892B (en) Human body posture recognition method based on OpenPose
CN105930767B (en) A kind of action identification method based on human skeleton
CN110135375B (en) Multi-person attitude estimation method based on global information integration
CN106650687B (en) Posture correction method based on depth information and skeleton information
CN106682598B (en) Multi-pose face feature point detection method based on cascade regression
Du et al. Skeleton based action recognition with convolutional neural network
CN108256421A (en) A kind of dynamic gesture sequence real-time identification method, system and device
CN106384093B (en) A kind of human motion recognition method based on noise reduction autocoder and particle filter
CN111414839B (en) Emotion recognition method and device based on gesture
CN108647663B (en) Human body posture estimation method based on deep learning and multi-level graph structure model
CN108182397B (en) Multi-pose multi-scale human face verification method
CN108875586B (en) Functional limb rehabilitation training detection method based on depth image and skeleton data multi-feature fusion
CN109086706A (en) Applied to the action identification method based on segmentation manikin in man-machine collaboration
Mapari et al. American static signs recognition using leap motion sensor
CN105469050B (en) Video behavior recognition methods based on local space time's feature description and pyramid words tree
CN112906520A (en) Gesture coding-based action recognition method and device
Tao et al. DGLFV: Deep generalized label algorithm for finger-vein recognition
CN105373810A (en) Method and system for building action recognition model
CN110956141A (en) Human body continuous action rapid analysis method based on local recognition
CN113065505A (en) Body action rapid identification method and system
CN109670401A (en) A kind of action identification method based on skeleton motion figure
CN109255293B (en) Model walking-show bench step evaluation method based on computer vision
CN115035037A (en) Limb rehabilitation training method and system based on image processing and multi-feature fusion
CN116884045B (en) Identity recognition method, identity recognition device, computer equipment and storage medium
CN114092863A (en) Human body motion evaluation method for multi-view video image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210514

Assignee: Aerospace Defense (Nanjing) Information Technology Co.,Ltd.

Assignor: NANJING University OF POSTS AND TELECOMMUNICATIONS

Contract record no.: X2023980049894

Denomination of invention: A human pose recognition method based on Openpose

Granted publication date: 20220826

License type: Common License

Record date: 20231205

EE01 Entry into force of recordation of patent licensing contract
TR01 Transfer of patent right

Effective date of registration: 20240617

Address after: 230000, Room 402, Building 3, Lianhaiyun Chuanggang, Intersection of Yanqiao Road and Chengxihu Road, Taohua Town, Feixi County, Hefei City, Anhui Province

Patentee after: Hefei Yiqian Information Technology Co.,Ltd.

Country or region after: China

Address before: 210000 No. 186 Software Avenue, Yuhuatai District, Nanjing, Jiangsu Province

Patentee before: NANJING University OF POSTS AND TELECOMMUNICATIONS

Country or region before: China