CN109086698A - Human motion recognition method based on multi-sensor data fusion - Google Patents

Human motion recognition method based on multi-sensor data fusion Download PDF

Info

Publication number
CN109086698A
CN109086698A (application CN201810803749.8A)
Authority
CN
China
Prior art keywords
sensor node
matrix
indicate
data
carried out
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810803749.8A
Other languages
Chinese (zh)
Other versions
CN109086698B (en)
Inventor
王哲龙 (Wang Zhelong)
郭明 (Guo Ming)
王英睿 (Wang Yingrui)
赵红宇 (Zhao Hongyu)
仇森 (Qiu Sen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN201810803749.8A priority Critical patent/CN109086698B/en
Publication of CN109086698A publication Critical patent/CN109086698A/en
Application granted granted Critical
Publication of CN109086698B publication Critical patent/CN109086698B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention relates to the field of human action recognition and provides a human motion recognition method based on multi-sensor data fusion, comprising: collecting human action data using N inertial sensor nodes, each fixed to a different part of the human body; segmenting the data collected by each sensor node into windows with a sliding-window technique, obtaining multiple action data segments for each sensor node; extracting features from the action data segments of each sensor node to obtain the corresponding feature vectors; reducing the dimensionality of each sensor node's feature vectors with the RLDA algorithm; performing parameter training and modeling with the dimension-reduced feature vectors of each sensor node as training data to obtain the corresponding hierarchical fusion model; and performing human action recognition with the obtained hierarchical fusion model. The invention effectively overcomes the drawbacks of a single classifier in the recognition process and effectively improves the accuracy of human action recognition.

Description

Human motion recognition method based on multi-sensor data fusion
Technical field
The present invention relates to the field of human action recognition, and in particular to a human motion recognition method based on multi-sensor data fusion.
Background art
Human action recognition is a comparatively new mode of human-computer interaction that has emerged over recent decades and has increasingly become a hot topic for scholars worldwide. Human actions mainly refer to the movement patterns of the body and a person's reactions to the environment or to objects; the body describes or expresses complex actions through compound movements of the limbs. Since most human actions are embodied in the movement of the limbs, studying those movements has become a very effective approach to analyzing human actions. Human action recognition based on inertial sensors is an emerging research direction in the field of pattern recognition. In essence, the motion signals generated during movement are first acquired with one or more inertial sensors; the data are then preprocessed, and features are extracted and selected; finally, the actions are classified and recognized according to the extracted features.
When inertial sensors are used for human action recognition, no single classification algorithm is suitable for recognizing all human actions. Each individual classifier carries a certain decision error, so a single classifier certainly cannot solve every practical problem; moreover, movements under real conditions are random and variable, which considerably increases the difficulty of recognition. When recognizing compound actions, many researchers therefore monitor human activity in practical applications with multi-classifier combination techniques. Combining multiple classifiers for decision-making (also called decision fusion) has a great impact on recognition performance: decision fusion can effectively improve the classification performance of a recognition system and increase its robustness.
Summary of the invention
The present invention mainly addresses the technical problems that a single classification algorithm of the prior art cannot recognize all human actions and that a single classifier carries a certain decision error. It proposes a human motion recognition method based on multi-sensor data fusion that effectively overcomes the drawbacks of a single classifier in the recognition process; the recognition results obtained with the proposed hierarchical fusion model are substantially better than those of traditional recognition methods.
The present invention provides a human motion recognition method based on multi-sensor data fusion, comprising the following steps:
Step 100: collect human action data using N inertial sensor nodes, each fixed to a different part of the human body;
Step 200: segment the human action data collected by each sensor node into windows using a sliding-window technique, obtaining multiple action data segments for each sensor node;
Step 300: extract features from the action data segments of each sensor node to obtain the corresponding feature vectors;
Step 400: reduce the dimensionality of the feature vectors of each sensor node using the RLDA algorithm;
Step 500: perform parameter training and modeling with the dimension-reduced feature vectors of each sensor node as training data to obtain the corresponding hierarchical fusion model, comprising steps 501 to 506:
Step 501: validate the dimension-reduced feature vectors of each sensor node by k-fold cross-validation, obtaining the contribution rate of each action under each classifier;
Step 502: establish the evaluation matrix of the classifier fusion layer from the contribution rates, $Y^{i} = (y_{qj})_{c \times k}$, where $Y$ denotes the evaluation matrix, $c$ the number of action classes, $k$ the number of classifiers, $i$ the $i$-th inertial sensor node, $m_{ij}$ the $j$-th classifier of the $i$-th inertial sensor node, and $y_{qj}$ the contribution rate of the $q$-th action under the $j$-th classifier;
Step 503: from the evaluation matrix of step 502, obtain the Shannon entropy of each classifier, $e_{j} = -\eta \sum_{q=1}^{c} y_{qj} \log_{2} y_{qj}$ (with the contribution rates of each classifier normalized to sum to one), where $e_{j}$ denotes the Shannon entropy and $\eta = 1/\log_{2}(c)$ is a constant;
obtain the redundancy of each classifier from its Shannon entropy, $r_{j} = 1 - e_{j}$, where $r_{j}$ denotes the redundancy;
obtain the weight of the $j$-th classifier on the $i$-th sensor node, $w_{j}^{i} = r_{j} / \sum_{j=1}^{k} r_{j}$, where $w_{j}^{i}$ denotes that weight;
obtain the output result of the $i$-th sensor node as the weighted vote of its classifiers, where $\lambda_{i,q}$ indicates the support for assigning test sample $x$ to class $q$;
Step 504: for the dimension-reduced feature vectors of the $i$-th sensor node, obtain the recognition rate of the $q$-th action class, denoted $\beta_{i,q}$;
Step 505: establish the evaluation matrix of the sensor fusion layer from the recognition rates, $B = (\beta_{i,q})_{c \times N}$;
Step 506: from the evaluation matrix of step 505, obtain the Shannon entropy $e_{i}$ of each sensor in the same manner;
obtain the redundancy of each sensor from its Shannon entropy, $r_{i} = 1 - e_{i}$;
obtain the output weight of each sensor node, $w_{i} = r_{i} / \sum_{i=1}^{N} r_{i}$;
and obtain the hierarchical fusion model $\lambda_{q} = \sum_{i=1}^{N} w_{i} \lambda_{i,q}$, where $\lambda_{q}$ indicates the support for assigning the test sample to class $q$;
Step 600: perform human action recognition using the obtained hierarchical fusion model.
Preferably, the sliding-window segmentation of the human action data collected by each sensor node comprises:
for the $i$-th sensor node, let the window size be $l$; if the motion data matrix $A_{i}$ has length $L_{i}$, it is divided into $\lfloor 2L_{i}/l \rfloor - 1$ data windows, the segment matrix in each window being of dimension $l \times 6$, with a 50% overlap between every two adjacent data windows.
Preferably, features are extracted from the action data segments of each sensor node, the extracted features including: the root mean square, mean absolute deviation, kurtosis, covariance, zero-crossing rate, and energy of the three-axis acceleration data and the three-axis angular velocity data.
Preferably, reducing the dimensionality of the feature vectors of each sensor node with the RLDA algorithm comprises the following steps:
Step 401: for the feature space corresponding to the $i$-th sensor node, obtain the within-class scatter matrix and the between-class scatter matrix, where $S_{\omega}$ denotes the within-class scatter matrix, $S_{b}$ the between-class scatter matrix, $\mu_{a}$ the mean of all feature vectors in class $a$, and $\mu$ the mean of all feature vectors in the feature space $X_{i}$;
Step 402: solve for an invertible matrix according to the congruent-matrix theorem and elementary matrix transformations, obtaining $P^{T} S_{\omega} P = I_{n}$ and $P^{T} S_{b} P = \Lambda = \mathrm{diag}(\lambda_{1}, \ldots, \lambda_{n})$, where $P$ denotes the invertible matrix, the $\lambda_{i}$ are the eigenvalues of $S_{b}$ under this congruence, and $I_{n}$ denotes the $n$-dimensional identity matrix;
Step 403: from the results of step 402, obtain the optimal projection matrix via the Fisher decision criterion, $\varphi_{opt} = K P^{T}$, where $\varphi_{opt}$ denotes the optimal projection matrix, $K = \varphi (P^{T})^{-1}$, and $\varphi$ denotes the projection matrix to be solved;
Step 404: perform feature dimensionality reduction using the optimal projection matrix.
In the human motion recognition method based on multi-sensor data fusion provided by the invention, the congruence theorem of matrices is used to improve the traditional LDA algorithm. The improved dimensionality-reduction algorithm effectively suppresses the large disturbances produced when small, inaccurately estimated eigenvalues of the within-class scatter matrix are inverted, which benefits algorithm performance. At the action recognition level, the invention mainly proposes a new hierarchical fusion algorithm consisting of two layers: the first layer is a classifier fusion layer and the second a sensor fusion layer, with the output weight of each layer obtained mainly by the entropy method. Because the weights are determined by information entropy, the proposed algorithm effectively improves the robustness of the classification model, and the layered design effectively improves action recognition accuracy.
Brief description of the drawings
Fig. 1 is a flowchart of the human motion recognition method based on multi-sensor data fusion of the present invention;
Fig. 2 is a schematic diagram of the hierarchical fusion model of the invention.
Detailed description of the embodiments
To make the technical problems solved, the technical solutions adopted, and the technical effects achieved by the invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the whole.
Fig. 1 is a flowchart of the human motion recognition method based on multi-sensor data fusion of the present invention. As shown in Fig. 1, the method provided by this embodiment of the invention proceeds as follows:
Step 100: collect human action data using N inertial sensor nodes, each fixed to a different part of the human body.
Specifically, N inertial sensor nodes are first fixed to N positions on the human body. Each sensor node contains a three-axis accelerometer and a three-axis gyroscope, and the collected action data are uploaded to a host data-processing platform through a receiving node. The N sensor nodes then collect human action data, for example for standing, running, or climbing stairs. The human action data comprise the three-axis acceleration data and three-axis angular velocity data of each sensor node. For the $i$-th sensor node ($i \in \{1, 2, \ldots, N\}$), the collected data include the x-, y-, and z-axis acceleration $a_{i} = [a_{ix}, a_{iy}, a_{iz}]$ and the x-, y-, and z-axis angular velocity $ang_{i} = [ang_{ix}, ang_{iy}, ang_{iz}]$; together they form the raw motion data matrix with six columns, $A_{i} = [a_{i}, ang_{i}] = [a_{ix}, a_{iy}, a_{iz}, ang_{ix}, ang_{iy}, ang_{iz}]$.
Step 200: segment the human action data collected by each sensor node into windows using a sliding-window technique, obtaining multiple action data segments for each sensor node.
After the action data of step 100 are obtained, they are divided into windows. This embodiment mainly uses a sliding-window technique: a window of fixed length is selected and then slid over the action data, with a 50% overlap between adjacent windows.
Specifically, for the $i$-th sensor node, let the window size be $l$; if the motion data matrix $A_{i}$ has length $L_{i}$, it is divided into $\lfloor 2L_{i}/l \rfloor - 1$ data windows, the segment matrix in each window being of dimension $l \times 6$, with a 50% overlap between every two adjacent data windows. A minimal sketch of this windowing is given below.
Step 300: extract features from the action data segments of each sensor node to obtain the corresponding feature vectors.
After the raw data of each sensor have been divided into data windows, features must be extracted from the segment matrix in each window. Throughout, $T \in \{x, y, z\}$ denotes the axis, $i$ the $i$-th sensor, and $l$ the window length. Six feature types are extracted:
1. Root mean square (RMS) of the three-axis acceleration data and of the three-axis angular velocity data, $\mathrm{RMS}_{a,T}^{i} = \sqrt{\tfrac{1}{l} \sum_{j=1}^{l} a_{iT,j}^{2}}$ and $\mathrm{RMS}_{ang,T}^{i} = \sqrt{\tfrac{1}{l} \sum_{j=1}^{l} ang_{iT,j}^{2}}$;
2. Mean absolute deviation (MAD) of the three-axis acceleration data and of the three-axis angular velocity data, $\mathrm{MAD}_{a,T}^{i} = \tfrac{1}{l} \sum_{j=1}^{l} |a_{iT,j} - \bar{a}_{iT}|$ and $\mathrm{MAD}_{ang,T}^{i} = \tfrac{1}{l} \sum_{j=1}^{l} |ang_{iT,j} - \bar{ang}_{iT}|$;
3. Covariance of the three-axis acceleration data and of the three-axis angular velocity data, e.g. $\mathrm{Cov}_{a}^{i}(T_{1}, T_{2}) = \tfrac{1}{l} \sum_{j=1}^{l} (a_{iT_{1},j} - \bar{a}_{iT_{1}})(a_{iT_{2},j} - \bar{a}_{iT_{2}})$ with $T_{1}, T_{2} \in \{x, y, z\}$;
4. Kurtosis of the three-axis acceleration data and of the three-axis angular velocity data, $\mathrm{Kurt}_{a,T}^{i} = \tfrac{1}{l} \sum_{j=1}^{l} (a_{iT,j} - \mu_{a,T})^{4} / \sigma_{a,T}^{4}$, where $\mu_{a,T}$ and $\sigma_{a,T}^{2}$ denote the mean and variance of the acceleration data within the window ($\mu_{ang,T}$ and $\sigma_{ang,T}^{2}$ for the angular velocity data);
5. Zero-crossing rate of the three-axis acceleration data and of the three-axis angular velocity data, $\mathrm{ZCR}_{a,T}^{i} = \tfrac{1}{l-1} \sum_{j=1}^{l-1} \mathbf{1}\{a_{iT,j} \, a_{iT,j+1} < 0\}$, and likewise for the angular velocity data;
6. Energy of the three-axis acceleration data and of the three-axis angular velocity data, $E_{a,T}^{i} = \tfrac{1}{l} \sum_{j=1}^{l} x_{a,T,j}^{2}$ and $E_{ang,T}^{i} = \tfrac{1}{l} \sum_{j=1}^{l} x_{ang,T,j}^{2}$, where $x_{a,T,j}$ denotes the coefficients obtained from the fast Fourier transform of the raw acceleration data $a_{iT}$ and $x_{ang,T,j}$ those of the raw angular velocity data $ang_{iT}$.
After these six feature types have been extracted, the feature vector corresponding to the $j$-th data window of the $i$-th sensor node is obtained by concatenating them. A minimal sketch of such a feature extractor is given below.
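A minimal Python sketch of such a per-window feature extractor; the split into five per-axis statistics plus six cross-axis covariances is one plausible layout that reproduces the 36-dimensional vector of the example below, not necessarily the patent's exact ordering:

```python
import numpy as np

def window_features(W):
    """Features of step 300 for one (l x 6) window W = [acc_xyz, gyro_xyz]."""
    feats = []
    for ch in range(W.shape[1]):                       # per-channel statistics
        x = W[:, ch]
        mu, sigma = x.mean(), x.std()
        feats.append(np.sqrt(np.mean(x ** 2)))         # root mean square
        feats.append(np.mean(np.abs(x - mu)))          # mean absolute deviation
        feats.append(np.mean((x - mu) ** 4) / (sigma ** 4 + 1e-12))  # kurtosis
        feats.append(np.mean(x[:-1] * x[1:] < 0))      # zero-crossing rate
        feats.append(np.mean(np.abs(np.fft.fft(x)) ** 2))            # spectral energy
    for lo in (0, 3):                                  # covariances within each axis triad
        C = np.cov(W[:, lo:lo + 3], rowvar=False)
        feats.extend(C[np.triu_indices(3, k=1)])       # 3 axis pairs per triad
    return np.asarray(feats)                           # 6*5 + 2*3 = 36 values
```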
Step 400: reduce the dimensionality of the feature vectors of each sensor node using the RLDA algorithm.
Specifically, after features have been extracted from the action data collected by each sensor node, the proposed RLDA algorithm is used to reduce the dimensionality of the feature space. Its main steps are as follows:
Step 401: for the feature space $X_{i}$ corresponding to the $i$-th sensor node, compute the within-class scatter matrix $S_{\omega} = \sum_{a=1}^{c} \sum_{x \in X_{a}} (x - \mu_{a})(x - \mu_{a})^{T}$ and the between-class scatter matrix $S_{b} = \sum_{a=1}^{c} n_{a} (\mu_{a} - \mu)(\mu_{a} - \mu)^{T}$, where $\mu_{a}$ denotes the mean of all feature vectors in class $a$, $n_{a}$ the number of samples in class $a$, and $\mu$ the mean of all feature vectors in $X_{i}$.
Step 402: solve for an invertible matrix $P$ according to the congruent-matrix theorem and elementary matrix transformations, such that $P^{T} S_{\omega} P = I_{n}$ and $P^{T} S_{b} P = \Lambda = \mathrm{diag}(\lambda_{1}, \ldots, \lambda_{n})$, where $P$ denotes the invertible matrix, the $\lambda_{i}$ are the eigenvalues of $S_{b}$ under this congruence, and $I_{n}$ denotes the $n$-dimensional identity matrix.
Step 403: from the results of step 402, apply elementary transformations to the Fisher decision criterion to obtain the optimal projection matrix.
Specifically, the Fisher maximization criterion can be written as $J(\varphi) = |\varphi S_{b} \varphi^{T}| / |\varphi S_{\omega} \varphi^{T}|$, where $\varphi$ denotes the projection matrix to be solved. Since $S_{b}$ and $S_{\omega}$ are both positive semi-definite and, by linear discriminant analysis (LDA) theory, $S_{\omega}$ is positive definite, the congruent-matrix theorem guarantees the existence of an invertible matrix $P$ such that $P^{T} S_{\omega} P = I_{n}$ and $P^{T} S_{b} P = \Lambda$, where $\Lambda$ holds the eigenvalues $\lambda_{1}, \ldots, \lambda_{n}$ of $S_{b}$ under this congruence and $I_{n}$ denotes the $n$-dimensional identity matrix. Letting $K = \varphi (P^{T})^{-1}$, the Fisher criterion transforms into $J(K) = |K \Lambda K^{T}| / |K K^{T}|$, and the optimal projection matrix of linear discriminant analysis is then obtained as
$\varphi_{opt} = K P^{T}$,
where $\varphi_{opt}$ denotes the optimal projection matrix.
Step 404: perform feature dimensionality reduction using the optimal projection matrix. A minimal sketch of this procedure is given below.
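A minimal Python sketch of steps 401 to 404, assuming the congruence-based solution is computed via the equivalent generalized eigenproblem, with a small ridge on $S_{\omega}$ standing in for the re-estimation of small eigenvalues (the regularization constant is illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def rlda_projection(X, y, d):
    """Return an (n x d) projection for n-dimensional samples X with labels y."""
    n = X.shape[1]
    mu = X.mean(axis=0)
    S_w = np.zeros((n, n))
    S_b = np.zeros((n, n))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_w += (Xc - mc).T @ (Xc - mc)                 # within-class scatter
        S_b += len(Xc) * np.outer(mc - mu, mc - mu)    # between-class scatter
    S_w += 1e-3 * np.trace(S_w) / n * np.eye(n)        # damp small, noisy eigenvalues
    evals, evecs = eigh(S_b, S_w)                      # solves S_b v = lambda * S_w v
    order = np.argsort(evals)[::-1][:d]                # d most discriminative directions
    return evecs[:, order]

# Usage note: for c = 10 action classes at most c - 1 = 9 directions carry
# discriminative information, matching the reduction to 9 dimensions in the example below.
```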
Step 500: perform parameter training and modeling with the dimension-reduced feature vectors of each sensor node as training data to obtain the corresponding hierarchical fusion model.
Fig. 2 is a schematic diagram of the hierarchical fusion model of the invention. Referring to Fig. 2, the hierarchical fusion model of this embodiment, used to recognize various human actions, consists of two layers. The first layer, the classifier fusion layer, merges the classification results of multiple classifiers in the spirit of the majority voting method; the fusion strategy combines the individual decisions with weights, and the weight of each decision is obtained mainly by the information-entropy method. The second layer, the sensor fusion layer, fuses the outputs of the sensors attached to multiple body positions; its decisions likewise use weights obtained by the information-entropy method. The specific steps are as follows:
Step 501: validate the dimension-reduced feature vectors of each sensor node by k-fold cross-validation, obtaining the contribution rate of each action under each classifier.
For k-fold cross-validation, the training set obtained from one sensor after dimensionality reduction is randomly divided into k parts. From these k groups of data, the recognition accuracies of the k base classifiers with respect to the c action classes are obtained; $y_{qj}$ denotes the recognition accuracy of the $q$-th action under the $j$-th classifier and is here regarded as the contribution rate of the $q$-th action to the $j$-th classifier.
Step 502: establish the evaluation matrix $Y$ of the classifier fusion layer from the contribution rates, $Y^{i} = (y_{qj})_{c \times k}$, where $Y$ denotes the evaluation matrix, $c$ the number of action classes, $k$ the number of classifiers, $i$ ($1 \le i \le N$) the $i$-th inertial sensor node, $m_{ij}$ the $j$-th ($1 \le j \le k$) classifier of the $i$-th inertial sensor node, and $y_{qj}$ the contribution rate of the $q$-th ($1 \le q \le c$) action under the $j$-th classifier.
Step 503: from the evaluation matrix of step 502, obtain the Shannon entropy of each classifier, $e_{j} = -\eta \sum_{q=1}^{c} y_{qj} \log_{2} y_{qj}$ (with the contribution rates of each classifier normalized to sum to one), where $e_{j}$ denotes the Shannon entropy of the $j$-th classifier and $\eta = 1/\log_{2}(c)$ is a constant.
Then, from the Shannon entropy, the redundancy of each classifier is obtained as $r_{j} = 1 - e_{j}$.
The weight of the $j$-th classifier on the $i$-th sensor node is obtained as $w_{j}^{i} = r_{j} / \sum_{j=1}^{k} r_{j}$.
The output result of the $i$-th sensor node is then obtained as the weighted vote of its k classifiers, where $\lambda_{i,q}$ indicates the node's support for assigning test sample $x$ to class $q$. A sketch of this entropy weighting is given below.
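A minimal Python sketch of this entropy weighting; the column normalization and the weighted-vote combination are plausible readings of the fusion rule, since the patent's closed-form figures are not reproduced here:

```python
import numpy as np

def entropy_weights(E):
    """Entropy weights for the columns of an evaluation matrix E with c rows."""
    c = E.shape[0]
    eta = 1.0 / np.log2(c)
    P = E / E.sum(axis=0, keepdims=True)               # normalize each column to sum to one
    e = -eta * np.sum(P * np.log2(P + 1e-12), axis=0)  # Shannon entropy e_j
    r = 1.0 - e                                        # redundancy r_j = 1 - e_j
    return r / r.sum()                                 # weights w_j, summing to one

def node_scores(votes, w, c):
    """Weighted majority vote: votes[j] is classifier j's predicted class index."""
    scores = np.zeros(c)
    for j, q in enumerate(votes):
        scores[q] += w[j]                              # accumulate lambda_{i,q}
    return scores                                      # argmax gives the node's decision
```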
Step 504: for the dimension-reduced feature vectors (the training data) of the $i$-th sensor node, obtain the recognition rate of the $q$-th action class, denoted $\beta_{i,q}$.
Step 505: establish the evaluation matrix of the sensor fusion layer from the recognition rates, $B = (\beta_{i,q})_{c \times N}$.
Step 506: from the evaluation matrix of step 505, obtain the Shannon entropy $e_{i}$ of each sensor in the same manner.
Then, from the Shannon entropy, the redundancy of each sensor is obtained as $r_{i} = 1 - e_{i}$, and the output weight of each sensor node as $w_{i} = r_{i} / \sum_{i=1}^{N} r_{i}$.
This yields the hierarchical fusion model $\lambda_{q} = \sum_{i=1}^{N} w_{i} \lambda_{i,q}$, where $\lambda_{q}$ indicates the support for assigning the test sample to class $q$. For a test sample, the final decision is the class with the largest $\lambda_{q}$ given by the hierarchical fusion model, as sketched below.
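Continuing the sketch above, the sensor fusion layer can reuse the same entropy weighting on the sensor-layer evaluation matrix; the final combination rule shown is likewise a plausible reading rather than the patent's verbatim formula:

```python
import numpy as np
# entropy_weights() and node_scores() as defined in the previous sketch.

def hierarchical_decision(per_node_scores, B):
    """per_node_scores: list of N score vectors lambda_i from node_scores();
    B: (c x N) matrix of per-class recognition rates of each sensor node."""
    w = entropy_weights(B)                             # per-sensor weights w_i
    fused = sum(w_i * s for w_i, s in zip(w, per_node_scores))
    return int(np.argmax(fused))                       # class with the largest lambda_q
```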
Step 600: perform human action recognition using the obtained hierarchical fusion model.
When test data are fed into the corresponding hierarchical fusion model, the corresponding classification results are obtained, thereby realizing human action recognition.
The invention is further explained below by way of an example:
For example, human action data are collected by five sensor nodes, each containing a three-axis accelerometer and a three-axis gyroscope, at a sampling frequency of 50 Hz. There were 8 experimental subjects in total, aged between 24 and 34. The five sensor nodes were placed on each subject's right wrist, left upper arm, waist, right ankle, and left thigh. The actions designed for this experiment were: walking (performed on a gym treadmill at speeds of 3 km/h and 5 km/h, about 3 minutes per run); running (performed on a gym treadmill at speeds of 6 km/h, 8 km/h, and 12 km/h, about 3 minutes per run); rope skipping (performed in practice); cycling (performed on campus, 3 minutes per run); going upstairs (performed on campus); going downstairs (performed on campus); and gymnastics (performed in practice). The collected raw data were processed in MATLAB, in combination with the recognition algorithms written for this purpose, to obtain the final recognition results. In total, 400 action sequences were collected (8 subjects x 5 sensors x 10 actions), each about 10,000 samples long and each containing three-axis acceleration data and three-axis angular velocity data.
Then the collected action sequences are segmented into windows. For example, for the $i$-th ($i = 1, 2, 3, 4, 5$) sensor node, the window size is set to 256, i.e. every 256 samples form one data window. If the motion data matrix has length $L_{i}$, each motion data sequence is divided into $\lfloor 2L_{i}/256 \rfloor - 1$ data windows; the segment matrix in each window is of dimension $256 \times 6$, and every two adjacent data windows have a 50% overlap.
After the data windows are obtained, features are extracted from each window. As described above, the extracted features are: root mean square, mean absolute deviation, kurtosis, covariance, zero-crossing rate, and energy, computed for the three-axis acceleration data and the three-axis angular velocity data in each data window.
The resulting feature vector of each data window is of dimension $36 \times 1$; each feature vector is treated as one data sample, and the data samples are then recognized and classified.
After feature extraction, the data sets, both test data and training data, require dimensionality reduction, which is performed with the proposed RLDA algorithm. Each data sample obtained from each sensor has a feature dimensionality of 36; using the RLDA algorithm, the feature dimensionality of each sensor's data samples is reduced to 9 or fewer.
For the dimension-reduced motion data, the test data are recognized and classified with the proposed hierarchical fusion algorithm in order to assess its performance. The single classifiers used in the fusion algorithm are mainly: the k-nearest-neighbor classifier (KNN), the naive Bayes classifier (NB), the C4.5 decision-tree classifier (C4.5), the support vector machine (SVM), and the hidden Markov model classifier (HMM). The same classifiers are used throughout the fusion algorithm.
This example evaluates the algorithm mainly with leave-one-subject-out validation: the data of one experimental subject are held out as test data while the data of the remaining 7 subjects serve as training data, cycling through all subjects in turn. The final experimental result is the average over the 8 test sets. A sketch of this protocol is given below.
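A minimal Python sketch of this protocol; train_model and evaluate are hypothetical stand-ins for the training and scoring pipeline described above:

```python
import numpy as np

def loso_accuracy(data_by_subject, train_model, evaluate):
    """Leave-one-subject-out: hold out each subject, train on the rest, average."""
    accs = []
    for k, held_out in enumerate(data_by_subject):
        train = data_by_subject[:k] + data_by_subject[k + 1:]
        model = train_model(train)                     # hypothetical training routine
        accs.append(evaluate(model, held_out))         # accuracy on the held-out subject
    return float(np.mean(accs))                        # mean over the 8 subjects
```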
Table 1 gives the classification accuracies obtained with the different recognition methods: the results obtained when each single classifier is used directly to recognize the test samples, and the results of the classical classifier-fusion algorithm, the majority voting method (MV). The results show that the method proposed by the present invention achieves the highest recognition accuracy, 96.96%.
Table 1. Classification accuracy obtained with different recognition methods

Method | KNN | NB | SVM | C4.5 | HMM | MV | Present invention
Average recognition rate | 84.55% | 84.79% | 87.59% | 82.48% | 89.17% | 94.77% | 96.96%
In the human motion recognition method based on multi-sensor data fusion provided by this embodiment, the traditional linear discriminant analysis algorithm is first improved by means of a congruence transformation, yielding a new feature-selection algorithm, RLDA. RLDA mainly uses the congruent-matrix transformation to re-estimate the inverse of the eigenvalues of the within-class scatter matrix, thereby reducing the error disturbance produced by inaccurately estimated small eigenvalues and improving the accuracy of the algorithm. The invention also provides a hierarchical fusion model for recognizing a variety of human actions, consisting of two layers: the first is a classifier fusion layer and the second a sensor fusion layer, with each layer's decision weights obtained mainly by the information-entropy method. The proposed fusion algorithm has two advantages: the two-layer structure makes the final output more accurate and, to a certain extent, exploits the strengths of the stronger classifiers, and the use of the information-entropy method ensures that the classification algorithm is robust.
Finally, it should be noted that the above embodiments only illustrate, rather than limit, the technical solutions of the present invention. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, without causing the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (4)

1. A human motion recognition method based on multi-sensor data fusion, characterized by comprising the following steps:
Step 100: collecting human action data using N inertial sensor nodes, each fixed to a different part of the human body;
Step 200: segmenting the human action data collected by each sensor node into windows using a sliding-window technique, obtaining multiple action data segments for each sensor node;
Step 300: extracting features from the action data segments of each sensor node to obtain the corresponding feature vectors;
Step 400: reducing the dimensionality of the feature vectors of each sensor node using the RLDA algorithm;
Step 500: performing parameter training and modeling with the dimension-reduced feature vectors of each sensor node as training data to obtain the corresponding hierarchical fusion model, comprising steps 501 to 506:
Step 501: validating the dimension-reduced feature vectors of each sensor node by k-fold cross-validation, obtaining the contribution rate of each action under each classifier;
Step 502: establishing the evaluation matrix of the classifier fusion layer from the contribution rates, $Y^{i} = (y_{qj})_{c \times k}$, wherein $Y$ denotes the evaluation matrix, $c$ the number of action classes, $k$ the number of classifiers, $i$ the $i$-th inertial sensor node, $m_{ij}$ the $j$-th classifier of the $i$-th inertial sensor node, and $y_{qj}$ the contribution rate of the $q$-th action under the $j$-th classifier;
Step 503: obtaining, from the evaluation matrix of step 502, the Shannon entropy of each classifier, $e_{j} = -\eta \sum_{q=1}^{c} y_{qj} \log_{2} y_{qj}$, wherein $e_{j}$ denotes the Shannon entropy and $\eta = 1/\log_{2}(c)$ is a constant;
obtaining the redundancy of each classifier from its Shannon entropy, $r_{j} = 1 - e_{j}$, wherein $r_{j}$ denotes the redundancy;
obtaining the weight of the $j$-th classifier on the $i$-th sensor node, $w_{j}^{i} = r_{j} / \sum_{j=1}^{k} r_{j}$, wherein $w_{j}^{i}$ denotes that weight;
obtaining the output result of the $i$-th sensor node as the weighted vote of its classifiers, wherein $\lambda_{i,q}$ indicates the support for assigning test sample $x$ to class $q$;
Step 504: obtaining, for the dimension-reduced feature vectors of the $i$-th sensor node, the recognition rate of the $q$-th action class, denoted $\beta_{i,q}$;
Step 505: establishing the evaluation matrix of the sensor fusion layer from the recognition rates, $B = (\beta_{i,q})_{c \times N}$;
Step 506: obtaining, from the evaluation matrix of step 505, the Shannon entropy $e_{i}$ of each sensor in the same manner;
obtaining the redundancy of each sensor from its Shannon entropy, $r_{i} = 1 - e_{i}$;
obtaining the output weight of each sensor node, $w_{i} = r_{i} / \sum_{i=1}^{N} r_{i}$;
and obtaining the hierarchical fusion model $\lambda_{q} = \sum_{i=1}^{N} w_{i} \lambda_{i,q}$, wherein $\lambda_{q}$ indicates the support for assigning the test sample to class $q$;
Step 600: performing human action recognition using the obtained hierarchical fusion model.
2. The human motion recognition method based on multi-sensor data fusion according to claim 1, characterized in that the sliding-window segmentation of the human action data collected by each sensor node comprises:
for the $i$-th sensor node, letting the window size be $l$; if the motion data matrix $A_{i}$ has length $L_{i}$, it is divided into $\lfloor 2L_{i}/l \rfloor - 1$ data windows, the segment matrix in each window being of dimension $l \times 6$, with a 50% overlap between every two adjacent data windows.
3. The human motion recognition method based on multi-sensor data fusion according to claim 1, characterized in that features are extracted from the action data segments of each sensor node, the extracted features including: the root mean square, mean absolute deviation, kurtosis, covariance, zero-crossing rate, and energy of the three-axis acceleration data and the three-axis angular velocity data.
4. The human motion recognition method based on multi-sensor data fusion according to claim 1, characterized in that reducing the dimensionality of the feature vectors of each sensor node with the RLDA algorithm comprises the following steps:
Step 401: for the feature space corresponding to the $i$-th sensor node, obtaining the within-class scatter matrix and the between-class scatter matrix, wherein $S_{\omega}$ denotes the within-class scatter matrix, $S_{b}$ the between-class scatter matrix, $\mu_{a}$ the mean of all feature vectors in class $a$, and $\mu$ the mean of all feature vectors in the feature space $X_{i}$;
Step 402: solving for an invertible matrix according to the congruent-matrix theorem and elementary matrix transformations, obtaining $P^{T} S_{\omega} P = I_{n}$ and $P^{T} S_{b} P = \Lambda = \mathrm{diag}(\lambda_{1}, \ldots, \lambda_{n})$, wherein $P$ denotes the invertible matrix, the $\lambda_{i}$ are the eigenvalues of $S_{b}$ under this congruence, and $I_{n}$ denotes the $n$-dimensional identity matrix;
Step 403: obtaining, from the results of step 402, the optimal projection matrix via the Fisher decision criterion, $\varphi_{opt} = K P^{T}$, wherein $\varphi_{opt}$ denotes the optimal projection matrix, $K = \varphi (P^{T})^{-1}$, and $\varphi$ denotes the projection matrix to be solved;
Step 404: performing feature dimensionality reduction using the optimal projection matrix.
CN201810803749.8A 2018-07-20 2018-07-20 Human body action recognition method based on multi-sensor data fusion Active CN109086698B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810803749.8A CN109086698B (en) 2018-07-20 2018-07-20 Human body action recognition method based on multi-sensor data fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810803749.8A CN109086698B (en) 2018-07-20 2018-07-20 Human body action recognition method based on multi-sensor data fusion

Publications (2)

Publication Number Publication Date
CN109086698A true CN109086698A (en) 2018-12-25
CN109086698B CN109086698B (en) 2021-06-25

Family

ID=64838379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810803749.8A Active CN109086698B (en) 2018-07-20 2018-07-20 Human body action recognition method based on multi-sensor data fusion

Country Status (1)

Country Link
CN (1) CN109086698B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784418A (en) * 2019-01-28 2019-05-21 东莞理工学院 A kind of Human bodys' response method and system based on feature recombination
CN109902565A (en) * 2019-01-21 2019-06-18 深圳市烨嘉为技术有限公司 The Human bodys' response method of multiple features fusion
CN109919034A (en) * 2019-01-31 2019-06-21 厦门大学 A kind of identification of limb action with correct auxiliary training system and method
CN110058699A (en) * 2019-04-28 2019-07-26 电子科技大学 A kind of user behavior recognition method based on Intelligent mobile equipment sensor
CN110377159A (en) * 2019-07-24 2019-10-25 张洋 Action identification method and device
CN110796188A (en) * 2019-10-23 2020-02-14 华侨大学 Multi-type inertial sensor collaborative construction worker work efficiency monitoring method
CN112016430A (en) * 2020-08-24 2020-12-01 郑州轻工业大学 Hierarchical action identification method for multi-mobile-phone wearing positions
CN112434669A (en) * 2020-12-14 2021-03-02 武汉纺织大学 Multi-information fusion human behavior detection method and system
CN113057628A (en) * 2021-04-04 2021-07-02 北京泽桥传媒科技股份有限公司 Inertial sensor based motion capture method
CN113065581A (en) * 2021-03-18 2021-07-02 重庆大学 Vibration fault migration diagnosis method for reactance domain adaptive network based on parameter sharing
CN114241603A (en) * 2021-12-17 2022-03-25 中南民族大学 Shuttlecock action recognition and level grade evaluation method and system based on wearable equipment
CN114832277A (en) * 2022-05-20 2022-08-02 广东沃莱科技有限公司 Rope skipping mode identification method and rope skipping
CN115731602A (en) * 2021-08-24 2023-03-03 中国科学院深圳先进技术研究院 Human body activity recognition method, device, equipment and storage medium based on topological representation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268577A (en) * 2014-06-27 2015-01-07 大连理工大学 Human body behavior identification method based on inertial sensor
CN105868779A (en) * 2016-03-28 2016-08-17 浙江工业大学 Method for identifying behavior based on feature enhancement and decision fusion
CN106210269A (en) * 2016-06-22 2016-12-07 南京航空航天大学 A kind of human action identification system and method based on smart mobile phone

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268577A (en) * 2014-06-27 2015-01-07 大连理工大学 Human body behavior identification method based on inertial sensor
CN105868779A (en) * 2016-03-28 2016-08-17 浙江工业大学 Method for identifying behavior based on feature enhancement and decision fusion
CN106210269A (en) * 2016-06-22 2016-12-07 南京航空航天大学 A kind of human action identification system and method based on smart mobile phone

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ORESTI BANOS ET AL.: "Multi-sensor Fusion Based on Asymmetric Decision Weighting for Robust Activity Recognition", Neural Processing Letters *
姜鸣, 王哲龙 (Jiang Ming, Wang Zhelong): "Human action recognition system based on wireless sensor networks" (in Chinese), Proceedings of the 2009 China Automation Congress and Summit on the Integration of Informatization and Industrialization *
陈野 等 (Chen Ye et al.): "Human daily activity recognition method based on BSN and neural networks" (in Chinese), Journal of Dalian University of Technology *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902565A (en) * 2019-01-21 2019-06-18 深圳市烨嘉为技术有限公司 The Human bodys' response method of multiple features fusion
CN109784418B (en) * 2019-01-28 2020-11-17 东莞理工学院 Human behavior recognition method and system based on feature recombination
CN109784418A (en) * 2019-01-28 2019-05-21 东莞理工学院 A kind of Human bodys' response method and system based on feature recombination
CN109919034A (en) * 2019-01-31 2019-06-21 厦门大学 A kind of identification of limb action with correct auxiliary training system and method
CN110058699B (en) * 2019-04-28 2021-04-27 电子科技大学 User behavior identification method based on intelligent mobile device sensor
CN110058699A (en) * 2019-04-28 2019-07-26 电子科技大学 A kind of user behavior recognition method based on Intelligent mobile equipment sensor
CN110377159A (en) * 2019-07-24 2019-10-25 张洋 Action identification method and device
CN110796188A (en) * 2019-10-23 2020-02-14 华侨大学 Multi-type inertial sensor collaborative construction worker work efficiency monitoring method
CN110796188B (en) * 2019-10-23 2023-04-07 华侨大学 Multi-type inertial sensor collaborative construction worker work efficiency monitoring method
CN112016430B (en) * 2020-08-24 2022-10-11 郑州轻工业大学 Hierarchical action identification method for multi-mobile-phone wearing positions
CN112016430A (en) * 2020-08-24 2020-12-01 郑州轻工业大学 Hierarchical action identification method for multi-mobile-phone wearing positions
CN112434669A (en) * 2020-12-14 2021-03-02 武汉纺织大学 Multi-information fusion human behavior detection method and system
CN112434669B (en) * 2020-12-14 2023-09-26 武汉纺织大学 Human body behavior detection method and system based on multi-information fusion
CN113065581A (en) * 2021-03-18 2021-07-02 重庆大学 Vibration fault migration diagnosis method for reactance domain adaptive network based on parameter sharing
CN113065581B (en) * 2021-03-18 2022-09-16 重庆大学 Vibration fault migration diagnosis method for reactance domain self-adaptive network based on parameter sharing
CN113057628A (en) * 2021-04-04 2021-07-02 北京泽桥传媒科技股份有限公司 Inertial sensor based motion capture method
CN115731602A (en) * 2021-08-24 2023-03-03 中国科学院深圳先进技术研究院 Human body activity recognition method, device, equipment and storage medium based on topological representation
CN114241603A (en) * 2021-12-17 2022-03-25 中南民族大学 Shuttlecock action recognition and level grade evaluation method and system based on wearable equipment
CN114832277A (en) * 2022-05-20 2022-08-02 广东沃莱科技有限公司 Rope skipping mode identification method and rope skipping
CN114832277B (en) * 2022-05-20 2024-02-06 广东沃莱科技有限公司 Rope skipping mode identification method and rope skipping

Also Published As

Publication number Publication date
CN109086698B (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN109086698A (en) A kind of human motion recognition method based on Fusion
CN104268577B (en) Human body behavior identification method based on inertial sensor
CN110245718A (en) A kind of Human bodys' response method based on joint time-domain and frequency-domain feature
CN110334573B (en) Human motion state discrimination method based on dense connection convolutional neural network
Tao et al. Gait based biometric personal authentication by using MEMS inertial sensors
WO2023035093A1 (en) Inertial sensor-based human body behaviour recognition method
Whelan et al. Leveraging IMU data for accurate exercise performance classification and musculoskeletal injury risk screening
Wang et al. A2dio: Attention-driven deep inertial odometry for pedestrian localization based on 6d imu
CN108717548A (en) A kind of increased Activity recognition model update method of facing sensing device dynamic and system
Lu et al. MFE-HAR: multiscale feature engineering for human activity recognition using wearable sensors
Tu et al. A Review of Human Motion Monitoring Methods using Wearable Sensors.
CN111259956A (en) Rapid identification method for unconventional behaviors of people based on inertial sensor
Zhou et al. A self-supervised human activity recognition approach via body sensor networks in smart city
CN115273237B (en) Human body posture and action recognition method based on integrated random configuration neural network
Zhang et al. Multi-STMT: Multi-level Network for Human Activity Recognition Based on Wearable Sensors
Ambati et al. A comparative study of machine learning approaches for human activity recognition
Srivastava et al. Hierarchical human activity recognition using GMM
Zhang et al. PCA & HMM based arm gesture recognition using inertial measurement unit
Arshad et al. Gait-based human identification through minimum gait-phases and sensors
İsmail et al. Human activity recognition based on smartphone sensor data using cnn
Kusuma et al. Health Monitoring with Smartphone Sensors and Machine Learning Techniques
Zainudin et al. Hybrid relief-f differential evolution feature selection for accelerometer actions
Yang et al. Recognition of human activities based on decision optimization model
Raziff et al. Gait identification using one-vs-one classifier model
Wang et al. A smartphone location independent activity recognition method based on the angle feature

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant