CN111539364A - Multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting - Google Patents

Multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting

Info

Publication number
CN111539364A
Authority
CN
China
Prior art keywords
human
classifier
joint
human body
limb
Prior art date
Legal status
Granted
Application number
CN202010354377.2A
Other languages
Chinese (zh)
Other versions
CN111539364B (en)
Inventor
杨忠
吴有龙
田小敏
宋爱国
徐宝国
Current Assignee
Jinling Institute of Technology
Original Assignee
Jinling Institute of Technology
Priority date
Filing date
Publication date
Application filed by Jinling Institute of Technology
Priority to CN202010354377.2A
Publication of CN111539364A
Application granted
Publication of CN111539364B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects

Abstract

The invention relates to a multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting. First, human motion data are collected with three-dimensional Kinect cameras to obtain three views of the human motion. Then, the positions of the human skeleton in the images are tracked using the skeleton tracking technology in the Kinect for Windows SDK to obtain data for the 20 joint points of the human skeleton, and the limb vector characteristic and the limb acceleration characteristic of each joint are calculated from the skeleton data. Next, the manually labeled image features are used as a training set, N initialization image frames are used to train N Knn classifiers in one-to-one correspondence, and the weight of each classifier is assigned and updated. Finally, the human behavior classes are identified using the N Knn classifiers with assigned weights.

Description

Multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting
Technical Field
The invention relates to the field of human behavior recognition and estimation, in particular to a multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting.
Background
With the continuous development of computer human-computer interaction, human action recognition technology is being applied ever more widely in intelligent monitoring, dance teaching, medical rehabilitation and similar areas, bringing great convenience to people's daily lives. Different application scenarios generally use different motion capture devices; human motion capture technologies are mainly optical or wearable. The Kinect 3D somatosensory camera released by Microsoft can acquire color images and depth information of the human body as well as its skeleton data.
Human behavior recognition is an important component of human motion analysis and belongs to high-level visual analysis; it mainly refers to using a computer to analyze and recognize the motion pattern of a moving target. Because an action is a behavior pattern of a person, analyzing human behavior with a computer requires visual analysis of the human body's actions. With the continuous development of vision analysis theory, more requirements are placed on traditional visual analysis, such as integrating multiple sensors and achieving high efficiency and high precision.
Disclosure of Invention
To solve the above problems, the invention provides a multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting that addresses the problem of human behavior recognition. To achieve this object:
The invention provides a multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting, which comprises the following specific steps:
Step 1: three-dimensional Kinect cameras are used simultaneously to collect human motion data, obtaining three views of the human motion;
Step 2: the positions of the human skeleton in the images are tracked using the skeleton tracking technology in the Kinect for Windows SDK to obtain data for the 20 joint points of the human skeleton, and the limb vector characteristics and the limb acceleration characteristics of each joint are calculated from the skeleton data;
Step 3: the manually labeled image features are taken as the training set, N Knn classifiers are trained in one-to-one correspondence with N initialization image frames, and the weight of each classifier is assigned and updated;
Step 4: for a multi-frame image of human behavior of unknown class, the limb vector characteristics and the limb acceleration characteristics of the multi-frame image are extracted according to step 2 and fed into the N weighted Knn classifiers to identify the class of the human behavior.
As a further improvement of the present invention, the human skeleton joint data information in step 2 is obtained as follows:
Human motion information is collected using the skeleton tracking technology in the Kinect for Windows SDK, finally yielding three-dimensional data for the 20 skeleton joint points of the human body; each joint is denoted by a letter A-T, and the joint angles of the human skeleton are calculated by the following formula 1:
θ(t) = arccos( u(t)·v(t) / ( |u(t)| |v(t)| ) )  (1)
where θ(t) is the joint angle at time t in each frame of skeleton data and u(t) and v(t) are the two joint vectors at time t; formula 1 finally yields 17 human joint angles.
As a further improvement of the present invention, the limb vector characteristics and the limb acceleration characteristics of each joint calculated in step 2 are as follows:
according to human body structure, a human body can be divided into five major parts, and the bone tracking technology in the Kinect for Windows SDK can obtain data information of all the joint points, including:
1. head T (t), neck C (t), spine D (t), and buttocks G (t);
2. left hand L (t), left wrist J (t), left elbow H (t), and left shoulder A (t);
3. right hand M (t), right wrist K (t), right elbow I (t), and right shoulder B (t);
4. left foot R (t), left ankle P (t), left knee N (t), and left hip E (t);
5. right foot S (t), right ankle Q (t), right knee O (t), and right hip F (t);
The joint vector characteristics of the five human body parts can be obtained by the following formula:
[Formula (2) is rendered as an image in the original; it computes the limb vector characteristic of each of the five body parts from the corresponding joint-point data.]
because each bone node has different contribution degrees to human body action expression, two main action joint angles are selected from each part, and the angular velocity characteristic of the human body limb joint is calculated by using the formula 3:
ω(t)=θ(t+1)-θ(t) (3)
θ(t) is the joint angle in frame t. The angular velocity characteristic of the torso is computed from the selected angles θ4 and θ9, the left arm from θ2 and θ3, the right arm from θ6 and θ7, the left leg from θ12 and θ13, and the right leg from θ15 and θ16; the angular velocity characteristics of the parts represent the overall motion of the limbs and trunk of the human body;
the bending of the limbs and the trunk of the human body can be embodied by the change of the distance between the joint points, namely the acceleration characteristics of the joint points can depict the bending degree of the limbs and the trunk of the human body:
v(t)=d(t+1)-d(t) (4)
where v (t) is the velocity characteristic of t frames, and d (t) is the Euclidean distance between the head and end joint points of five parts of the human body.
As a further improvement of the present invention, the weight of each classifier is updated in step 3 as follows:
To optimize the recognition of human behavior by the multiple classifiers and fully utilize the information of the three-dimensional Kinect sensors, the invention assigns N Knn classifiers to the N image frames:
First, the limb vector characteristics and the limb acceleration characteristics of each joint are extracted from a number of manually labeled multi-frame images through step 2 and split in a 4:1 ratio into training-set samples and test-set samples; assuming the total number of collected multi-frame images of human behaviors is N, N Knn classifiers are established, and the distance from a test sample to a training sample is calculated by the following formula 5:
d(x_i, x'_i) = sqrt( Σ_l ( x_i^(l) - x'_i^(l) )^2 )  (5)
where x_i is a test feature sample and x'_i is a training-set feature sample; the k training sample points nearest to the test sample are found by the Euclidean distance, and the class of the test sample in this classifier is then determined according to the classification decision rule:
y = arg max_(c_j) Σ_(x_i ∈ N_k(x)) I( y_i = c_j )  (6)
where y_i ∈ {c_1, c_2, …, c_k} is the class of the training sample x'_i, N_k(x) denotes the k training samples nearest to the test sample x, and I is the indicator function: I = 1 when y_i = c_j and I = 0 otherwise;
After the N Knn classifiers are first obtained, each classifier is assigned a weight W_m = {w_m1, w_m2, …, w_mN}, where n = 1, 2, …, N indexes the classifier and m = 1, 2, … counts the iterations in which the weights are assigned;
when m is 1, i.e. during the first iteration, each Knn classifier is assigned the same weight:
w_1n = 1/N,  n = 1, 2, …, N  (7)
The weight of each Knn classifier is then updated continuously over subsequent iterations on the test samples; the update adaptively adjusts the weight of each classifier:
[Formulas (8)-(11) are rendered as images in the original; they define the adaptive update of each classifier's weight in terms of the agreement term S(P'_mn = P_n) described below.]
where P'_mn is the classification result of the test sample given by classifier n at iteration m, P_n is the actual manually annotated class, and S(P'_mn = P_n) takes the value -1 when the test sample is correctly classified and 1 otherwise; in this way the multiple classifiers are optimized, so that interference in the images is reduced and the robustness of the classifiers is improved.
As a further improvement of the present invention, the human behavior categories are identified in step 4 as follows:
For a multi-frame image of human behavior of unknown class, the limb vector characteristics and the limb acceleration characteristics of the multi-frame image are extracted according to step 2 and fed into the N weighted Knn classifiers to identify the human behavior class, giving the classification result p_ij of each classifier, i = 1, 2, …, N, j = 1, 2, …, k, where N is the number of classifiers and j indexes the human behavior classes; the unknown human behavior class is determined by formula 12:
Class(A) = max( w_(m+1)i · p_ij ),  i = 1, 2, …, N,  j = 1, 2, …, k  (12)
where w_(m+1)i is the weight of the i-th Knn classifier after m iterations; finally, the class with the largest weighted classification result is taken as the human behavior category of the multi-frame image.
The multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting according to the invention
has the following advantages:
1. three-dimensional Kinect sensors are used to collect the human motion data, so the collected data contain richer human motion information;
2. the limb vector characteristics and the limb acceleration characteristics of each joint are extracted according to the human body structure, so that an unknown human skeleton can be tracked more reliably;
3. a Knn multi-classifier structure with updatable weights is designed, which improves the robustness and the recognition rate of the model;
4. the invention provides an important technical means for human behavior recognition.
Drawings
FIG. 1 is a flow chart of the overall algorithm;
FIG. 2 is a numbering diagram of the human skeleton joint points;
FIG. 3 is a diagram of the 17 human skeleton joint angles.
Detailed Description
The invention is described in further detail below with reference to the detailed description and the accompanying drawings:
The invention provides a multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting; the overall algorithm flow is shown in FIG. 1, and the steps are as follows:
Step 1: three-dimensional Kinect cameras are used simultaneously to collect human motion data, obtaining three views of the human motion;
Step 2: the positions of the human skeleton in the images are tracked using the skeleton tracking technology in the Kinect for Windows SDK to obtain data for the 20 joint points of the human skeleton, and the limb vector characteristics and the limb acceleration characteristics of each joint are calculated from the skeleton data;
the human body bone joint data information in the step 2 is specifically described as follows:
the human motion information is collected by using a bone tracking technology in Kinect for Windows SDK, and finally, 20 pieces of human bone joint point three-dimensional data information are obtained, each joint is represented by the number of A-T, the number of the human bone joint point is shown in figure 2, and the joint angle of the human bone is calculated by the following formula 1:
Figure RE-GDA0002518165000000061
wherein θ is the size of the joint angle at time t of each frame of bone data, u (t) and v (t) are two joint vectors at time t, and 17 pieces of human joint angle information can be finally obtained by formula 1, as shown in fig. 3.
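As an illustration of how formula 1 can be evaluated per frame, a minimal Python sketch follows. The function names and the angle_pairs layout are assumptions for illustration only; the patent defines the 17 angles through FIG. 3 but does not list the joint triples in the text.

```python
import numpy as np

def joint_angle(u, v):
    """Formula (1): angle between the two joint vectors u(t) and v(t)."""
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clamp to [-1, 1] so floating-point round-off cannot break arccos.
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def frame_joint_angles(joints, angle_pairs):
    """Compute the joint angles of one frame.

    joints      -- dict mapping the joint letters 'A'..'T' to 3-D coordinates
    angle_pairs -- sequence of (center, end1, end2) letter triples, one per angle;
                   the two joint vectors run from the center joint to each end joint
                   (hypothetical layout; the text only states that 17 angles result)
    """
    angles = []
    for center, end1, end2 in angle_pairs:
        u = np.asarray(joints[end1], dtype=float) - np.asarray(joints[center], dtype=float)
        v = np.asarray(joints[end2], dtype=float) - np.asarray(joints[center], dtype=float)
        angles.append(joint_angle(u, v))
    return np.array(angles)
```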
The calculation of the limb vector characteristics and the limb acceleration characteristics of each joint in step 2 is specifically described as follows:
According to the human body structure, a human body can be divided into five major parts, and the skeleton tracking technology in the Kinect for Windows SDK can obtain the data of all the corresponding joint points, including:
1. head T (t), neck C (t), spine D (t), and buttocks G (t);
2. left hand L (t), left wrist J (t), left elbow H (t), and left shoulder A (t);
3. right hand M (t), right wrist K (t), right elbow I (t), and right shoulder B (t);
4. left foot R (t), left ankle P (t), left knee N (t), and left hip E (t);
5. right foot S (t), right ankle Q (t), right knee O (t), and right hip F (t);
The joint vector characteristics of the five human body parts can be obtained by the following formula:
[Formula (2) is rendered as an image in the original; it computes the limb vector characteristic of each of the five body parts from the corresponding joint-point data.]
because each bone node has different contribution degrees to human body action expression, two main action joint angles are selected from each part, and the angular velocity characteristic of the human body limb joint is calculated by using the formula 3:
ω(t)=θ(t+1)-θ(t) (3)
θ(t) is the joint angle in frame t. Using the angles numbered in FIG. 3, the angular velocity characteristic of the torso is computed from θ4 and θ9, the left arm from θ2 and θ3, the right arm from θ6 and θ7, the left leg from θ12 and θ13, and the right leg from θ15 and θ16; the angular velocity characteristics of the parts represent the overall motion of the limbs and trunk of the human body;
the bending of the limbs and the trunk of the human body can be embodied by the change of the distance between the joint points, namely the acceleration characteristics of the joint points can depict the bending degree of the limbs and the trunk of the human body:
v(t)=d(t+1)-d(t) (4)
where v (t) is the velocity characteristic of t frames, and d (t) is the Euclidean distance between the head and end joint points of five parts of the human body.
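The frame-difference features of formulas 3 and 4 can be sketched as follows; the array shapes and function names are illustrative assumptions.

```python
import numpy as np

def angular_velocity(theta):
    """Formula (3): omega(t) = theta(t+1) - theta(t).

    theta -- array of shape (frames, angles) holding the selected joint angles
    """
    return np.diff(theta, axis=0)

def distance_velocity(head_joint, end_joint):
    """Formula (4): v(t) = d(t+1) - d(t), where d(t) is the Euclidean distance
    between the head joint and the end joint of one body part in frame t.

    head_joint, end_joint -- arrays of shape (frames, 3) with joint coordinates
    """
    head = np.asarray(head_joint, dtype=float)
    end = np.asarray(end_joint, dtype=float)
    d = np.linalg.norm(end - head, axis=1)
    return np.diff(d)
```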
Step 3: the manually labeled image features are taken as the training set, N Knn classifiers are trained in one-to-one correspondence with N initialization image frames, and the weight of each classifier is assigned and updated;
the updating of the weight of each classifier in step 3 is described in detail as follows:
in order to optimize the recognition of the multi-classifier to the human behavior and fully utilize the information of the three-dimensional Kinect sensor, the invention distributes N Knn classifiers to N image frames:
firstly, extracting the limb vector characteristics and the limb acceleration characteristics of each joint from a plurality of multiframe images with artificial labels through the step 2, and carrying out the following steps according to the ratio of 4: the proportion of 1 is respectively used as a training set sample and a test set sample; assuming that the total number of collected multi-frame images of human body behaviors is N, N Knn classifiers are established, and the distances from the test samples to the training samples are calculated by the following formula 5:
d(x_i, x'_i) = sqrt( Σ_l ( x_i^(l) - x'_i^(l) )^2 )  (5)
where x_i is a test feature sample and x'_i is a training-set feature sample; the k training sample points nearest to the test sample are found by the Euclidean distance, and the class of the test sample in this classifier is then determined according to the classification decision rule:
y = arg max_(c_j) Σ_(x_i ∈ N_k(x)) I( y_i = c_j )  (6)
where y_i ∈ {c_1, c_2, …, c_k} is the class of the training sample x'_i, N_k(x) denotes the k training samples nearest to the test sample x, and I is the indicator function: I = 1 when y_i = c_j and I = 0 otherwise;
After the N Knn classifiers are first obtained, each classifier is assigned a weight W_m = {w_m1, w_m2, …, w_mN}, where n = 1, 2, …, N indexes the classifier and m = 1, 2, … counts the iterations in which the weights are assigned;
when m is 1, i.e. during the first iteration, each Knn classifier is assigned the same weight:
w_1n = 1/N,  n = 1, 2, …, N  (7)
The weight of each Knn classifier is then updated continuously over subsequent iterations on the test samples; the update adaptively adjusts the weight of each classifier:
[Formulas (8)-(11) are rendered as images in the original; they define the adaptive update of each classifier's weight in terms of the agreement term S(P'_mn = P_n) described below.]
where P'_mn is the classification result of the test sample given by classifier n at iteration m, P_n is the actual manually annotated class, and S(P'_mn = P_n) takes the value -1 when the test sample is correctly classified and 1 otherwise; in this way the multiple classifiers are optimized, so that interference in the images is reduced and the robustness of the classifiers is improved.
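As a concrete sketch of the base classifier described above, the following Python class implements the Euclidean distance of formula 5 and the majority-vote decision rule of formula 6. The class name and interface are illustrative assumptions; the adaptive weight update of formulas 8-11 is not reproduced in this sketch.

```python
import numpy as np
from collections import Counter

class SimpleKnn:
    """Minimal Knn classifier: Euclidean distance (formula 5) plus majority vote (formula 6)."""

    def __init__(self, k=5):
        self.k = k

    def fit(self, X_train, y_train):
        self.X_train = np.asarray(X_train, dtype=float)
        self.y_train = np.asarray(y_train)
        return self

    def predict_one(self, x):
        # Formula (5): Euclidean distance from the test sample to every training sample.
        dists = np.linalg.norm(self.X_train - np.asarray(x, dtype=float), axis=1)
        nearest = np.argsort(dists)[: self.k]
        # Formula (6): the class receiving the most votes among the k nearest neighbours.
        return Counter(self.y_train[nearest]).most_common(1)[0][0]

    def predict(self, X):
        return np.array([self.predict_one(x) for x in X])
```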
Step 4: for a multi-frame image of human behavior of unknown class, the limb vector characteristics and the limb acceleration characteristics of the multi-frame image are extracted according to step 2 and fed into the N weighted Knn classifiers to identify the class of the human behavior.
The identification of the human behavior category in step 4 is specifically described as follows:
For a multi-frame image of human behavior of unknown class, the limb vector characteristics and the limb acceleration characteristics of the multi-frame image are extracted according to step 2 and fed into the N weighted Knn classifiers to identify the human behavior class, giving the classification result p_ij of each classifier, i = 1, 2, …, N, j = 1, 2, …, k, where N is the number of classifiers and j indexes the human behavior classes; the unknown human behavior class is determined by formula 12:
Class(A) = max( w_(m+1)i · p_ij ),  i = 1, 2, …, N,  j = 1, 2, …, k  (12)
where w_(m+1)i is the weight of the i-th Knn classifier after m iterations; finally, the class with the largest weighted classification result is taken as the human behavior category of the multi-frame image.
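A minimal sketch of the final weighted vote follows. It covers only the equal initialization of formula 7 and the decision of formula 12; p_ij is treated as a one-hot indicator of the class predicted by classifier i and integer class labels 0..k-1 are assumed, since the text does not spell out either detail, and the adaptive weight update of formulas 8-11 is omitted.

```python
import numpy as np

def init_weights(n_classifiers):
    """Formula (7): every Knn classifier starts with the same weight 1/N."""
    return np.full(n_classifiers, 1.0 / n_classifiers)

def vote_behavior_class(classifiers, weights, features, n_classes):
    """Formula (12): Class(A) = max over i, j of w_(m+1)i * p_ij.

    classifiers -- list of N fitted Knn classifiers (e.g. SimpleKnn above)
    weights     -- length-N vector of classifier weights w_(m+1)i
    features    -- feature vector of the unknown multi-frame behavior
    n_classes   -- number of behavior categories k (labels assumed to be 0..k-1)
    """
    scores = np.zeros((len(classifiers), n_classes))
    for i, (clf, w_i) in enumerate(zip(classifiers, weights)):
        j = int(clf.predict_one(features))   # class predicted by classifier i
        scores[i, j] = w_i                   # p_ij assumed one-hot for the predicted class
    # The behavior category is the class index j of the largest weighted entry.
    return int(np.unravel_index(np.argmax(scores), scores.shape)[1])
```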
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, but any modifications or equivalent variations made according to the technical spirit of the present invention are within the scope of the present invention as claimed.

Claims (5)

1. A multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting, characterized by comprising the following specific steps:
Step 1: three-dimensional Kinect cameras are used simultaneously to collect human motion data, obtaining three views of the human motion;
Step 2: the positions of the human skeleton in the images are tracked using the skeleton tracking technology in the Kinect for Windows SDK to obtain data for the 20 joint points of the human skeleton, and the limb vector characteristics and the limb acceleration characteristics of each joint are calculated from the skeleton data;
Step 3: the manually labeled image features are taken as the training set, N Knn classifiers are trained in one-to-one correspondence with N initialization image frames, and the weight of each classifier is assigned and updated;
Step 4: for a multi-frame image of human behavior of unknown class, the limb vector characteristics and the limb acceleration characteristics of the multi-frame image are extracted according to step 2 and fed into the N weighted Knn classifiers to identify the class of the human behavior.
2. The multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting according to claim 1, characterized in that the human skeleton joint data information in step 2 is obtained as follows:
human motion information is collected using the skeleton tracking technology in the Kinect for Windows SDK, finally yielding three-dimensional data for the 20 skeleton joint points of the human body; each joint is denoted by a letter A-T, and the joint angles of the human skeleton are calculated by the following formula 1:
θ(t) = arccos( u(t)·v(t) / ( |u(t)| |v(t)| ) )  (1)
where θ(t) is the joint angle at time t in each frame of skeleton data and u(t) and v(t) are the two joint vectors at time t; formula 1 finally yields 17 human joint angles.
3. The multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting according to claim 1, characterized in that: calculating the limb vector characteristics and the limb acceleration characteristics of each joint in the step 2 as follows:
according to human body structure, a human body can be divided into five major parts, and the bone tracking technology in the Kinect for Windows SDK can obtain data information of all the joint points, including:
1. head T (t), neck C (t), spine D (t), and buttocks G (t);
2. left hand L (t), left wrist J (t), left elbow H (t), and left shoulder A (t);
3. right hand M (t), right wrist K (t), right elbow I (t), and right shoulder B (t);
4. left foot R (t), left ankle P (t), left knee N (t), and left hip E (t);
5. right foot S (t), right ankle Q (t), right knee O (t), and right hip F (t);
the joint vector characteristics of the five human body parts can be obtained by the following formula:
[Formula (2) is rendered as an image in the original; it computes the limb vector characteristic of each of the five body parts from the corresponding joint-point data.]
because each bone node has different contribution degrees to human body action expression, two main action joint angles are selected from each part, and the angular velocity characteristic of the human body limb joint is calculated by using the formula 3:
ω(t)=θ(t+1)-θ(t) (3)
θ(t) is the joint angle in frame t. The angular velocity characteristic of the torso is computed from the selected angles θ4 and θ9, the left arm from θ2 and θ3, the right arm from θ6 and θ7, the left leg from θ12 and θ13, and the right leg from θ15 and θ16; the angular velocity characteristics of the parts represent the overall motion of the limbs and trunk of the human body;
the bending of the limbs and the trunk of the human body can be embodied by the change of the distance between the joint points, namely the acceleration characteristics of the joint points can depict the bending degree of the limbs and the trunk of the human body:
v(t)=d(t+1)-d(t) (4)
where v (t) is the velocity characteristic of t frames, and d (t) is the Euclidean distance between the head and end joint points of five parts of the human body.
4. The multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting according to claim 1, characterized in that the weight of each classifier is updated in step 3 as follows:
to optimize the recognition of human behavior by the multiple classifiers and fully utilize the information of the three-dimensional Kinect sensors, N Knn classifiers are assigned to the N image frames:
first, the limb vector characteristics and the limb acceleration characteristics of each joint are extracted from a number of manually labeled multi-frame images through step 2 and split in a 4:1 ratio into training-set samples and test-set samples; assuming the total number of collected multi-frame images of human behaviors is N, N Knn classifiers are established, and the distance from a test sample to a training sample is calculated by the following formula 5:
d(x_i, x'_i) = sqrt( Σ_l ( x_i^(l) - x'_i^(l) )^2 )  (5)
where x_i is a test feature sample and x'_i is a training-set feature sample; the k training sample points nearest to the test sample are found by the Euclidean distance, and the class of the test sample in this classifier is then determined according to the classification decision rule:
y = arg max_(c_j) Σ_(x_i ∈ N_k(x)) I( y_i = c_j )  (6)
where y_i ∈ {c_1, c_2, …, c_k} is the class of the training sample x'_i, N_k(x) denotes the k training samples nearest to the test sample x, and I is the indicator function: I = 1 when y_i = c_j and I = 0 otherwise;
after the N Knn classifiers are first obtained, each classifier is assigned a weight W_m = {w_m1, w_m2, …, w_mN}, where n = 1, 2, …, N indexes the classifier and m = 1, 2, … counts the iterations in which the weights are assigned;
when m is 1, i.e. during the first iteration, each Knn classifier is assigned the same weight:
w_1n = 1/N,  n = 1, 2, …, N  (7)
the weight of each Knn classifier is then updated continuously over subsequent iterations on the test samples; the update adaptively adjusts the weight of each classifier:
[Formulas (8)-(11) are rendered as images in the original; they define the adaptive update of each classifier's weight in terms of the agreement term S(P'_mn = P_n) described below.]
where P'_mn is the classification result of the test sample given by classifier n at iteration m, P_n is the actual manually annotated class of the test sample, and S(P'_mn = P_n) takes the value -1 when the test sample is correctly classified and 1 otherwise; in this way the multiple classifiers are optimized, so that interference in the images is reduced and the robustness of the classifiers is improved.
5. The multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting according to claim 1, characterized in that the human behavior categories are identified in step 4 as follows:
for a multi-frame image of human behavior of unknown class, the limb vector characteristics and the limb acceleration characteristics of the multi-frame image are extracted according to step 2 and fed into the N weighted Knn classifiers to identify the human behavior class, giving the classification result p_ij of each classifier, i = 1, 2, …, N, j = 1, 2, …, k, where N is the number of classifiers and j indexes the human behavior classes; the unknown human behavior class is determined by formula 12:
Class(A) = max( w_(m+1)i · p_ij ),  i = 1, 2, …, N,  j = 1, 2, …, k  (12)
where w_(m+1)i is the weight of the i-th Knn classifier after m iterations; finally, the class with the largest weighted classification result is taken as the human behavior category of the multi-frame image.
CN202010354377.2A 2020-04-29 2020-04-29 Multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting Active CN111539364B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010354377.2A CN111539364B (en) 2020-04-29 2020-04-29 Multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010354377.2A CN111539364B (en) 2020-04-29 2020-04-29 Multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting

Publications (2)

Publication Number Publication Date
CN111539364A (en) 2020-08-14
CN111539364B (en) 2021-07-23

Family

ID=71979015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010354377.2A Active CN111539364B (en) 2020-04-29 2020-04-29 Multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting

Country Status (1)

Country Link
CN (1) CN111539364B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104866860A (en) * 2015-03-20 2015-08-26 武汉工程大学 Indoor human body behavior recognition method
CN105551059A (en) * 2015-12-08 2016-05-04 国网山西省电力公司技能培训中心 Power transformation simulation human body motion capturing method based on optical and inertial body feeling data fusion
JP6534499B1 (en) * 2019-03-20 2019-06-26 アースアイズ株式会社 MONITORING DEVICE, MONITORING SYSTEM, AND MONITORING METHOD

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298013A (en) * 2021-06-08 2021-08-24 Tcl通讯(宁波)有限公司 Motion correction method, motion correction device, storage medium and electronic equipment
WO2024045208A1 (en) * 2022-08-31 2024-03-07 Hong Kong Applied Science And Technology Research Institute Co., Ltd Method and system for detecting short-term stress and generating alerts inside the indoor environment

Also Published As

Publication number Publication date
CN111539364B (en) 2021-07-23

Similar Documents

Publication Publication Date Title
CN111144217B (en) Motion evaluation method based on human body three-dimensional joint point detection
CN106650687B (en) Posture correction method based on depth information and skeleton information
CN107423729B (en) Remote brain-like three-dimensional gait recognition system oriented to complex visual scene and implementation method
CN111881887A (en) Multi-camera-based motion attitude monitoring and guiding method and device
CN107423730A (en) A kind of body gait behavior active detecting identifying system and method folded based on semanteme
CN101894278B (en) Human motion tracing method based on variable structure multi-model
CN112069933A (en) Skeletal muscle stress estimation method based on posture recognition and human body biomechanics
CN109325466B (en) Intelligent motion guidance system and method based on motion recognition technology
CN111008583B (en) Pedestrian and rider posture estimation method assisted by limb characteristics
CN105512621A (en) Kinect-based badminton motion guidance system
Singh et al. Human pose estimation using convolutional neural networks
CN111539364B (en) Multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting
CN111531537B (en) Mechanical arm control method based on multiple sensors
CN111914643A (en) Human body action recognition method based on skeleton key point detection
CN113516005A (en) Dance action evaluation system based on deep learning and attitude estimation
Ko et al. CNN and bi-LSTM based 3D golf swing analysis by frontal swing sequence images
CN113663312A (en) Micro-inertia-based non-apparatus body-building action quality evaluation method
CN113033501A (en) Human body classification method and device based on joint quaternion
CN111833439A (en) Artificial intelligence-based ammunition throwing analysis and mobile simulation training method
CN111310655A (en) Human body action recognition method and system based on key frame and combined attention model
CN116386137A (en) Mobile terminal design method for lightweight recognition of Taiji boxing
CN116030533A (en) High-speed motion capturing and identifying method and system for motion scene
Endres et al. Graph-based action models for human motion classification
CN207529395U (en) A kind of body gait behavior active detecting identifying system folded based on semanteme
CN117671738B (en) Human body posture recognition system based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant