CN112016430B - Hierarchical action identification method for multi-mobile-phone wearing positions

Hierarchical action identification method for multi-mobile-phone wearing positions

Info

Publication number
CN112016430B
CN112016430B (application CN202010855813.4A)
Authority
CN
China
Prior art keywords
action
motion
node
classification
hierarchical
Prior art date
Legal status
Active
Application number
CN202010855813.4A
Other languages
Chinese (zh)
Other versions
CN112016430A (en)
Inventor
王昌海 (Wang Changhai)
李敏 (Li Min)
梁辉 (Liang Hui)
崔建涛 (Cui Jiantao)
李玉华 (Li Yuhua)
张世征 (Zhang Shizheng)
Current Assignee
Zhengzhou University of Light Industry
Original Assignee
Zhengzhou University of Light Industry
Priority date
Filing date
Publication date
Application filed by Zhengzhou University of Light Industry filed Critical Zhengzhou University of Light Industry
Priority to CN202010855813.4A priority Critical patent/CN112016430B/en
Publication of CN112016430A publication Critical patent/CN112016430A/en
Application granted granted Critical
Publication of CN112016430B publication Critical patent/CN112016430B/en

Classifications

    • G06V 40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24323: Tree-organised classifiers

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hierarchical action recognition method for multiple mobile-phone wearing positions, comprising the following steps: defining the action set to be recognized as A = {a_1, a_2, ..., a_{n_1}} and the set of positions at which the mobile phone may be worn as P = {p_1, p_2, ..., p_{n_2}}, so that the collected motion samples follow n_1 × n_2 distributions, each distribution being called a modality; acquiring the motion sensing data and corresponding action type of each modality, converting the motion sensing data into action samples represented by feature vectors, and collecting the feature vectors of all action samples into an ordered training sample set X = {X_1, X_2, ..., X_{n_1 × n_2}}; setting an action label for each action sample according to its action type, forming an ordered label set Y = {y_1, y_2, ..., y_{n_1 × n_2}}; and constructing a hierarchical classification tree from X and Y and classifying the feature vector of an action sample of unknown action type with the tree to obtain the corresponding action label. The invention reduces the time consumed by action recognition, improves the action recognition accuracy of intermediate nodes, and fully ensures the overall stability of the action recognition method.

Description

Hierarchical action identification method for multi-mobile-phone wearing position
Technical Field
The invention relates to the technical field of human body action recognition, and in particular to a hierarchical action recognition method for multiple mobile-phone wearing positions.
Background
At present, smartphones are widely used in people's daily lives. Besides basic functions such as calls and text messages, they also integrate common inertial sensors such as accelerometers and gyroscopes. When a user carries a smartphone, these sensors produce serialized sensing data that varies with the user's body movements. Using these sensing data to identify which actions the user has performed is an important research topic in the field. Commonly recognized actions include walking, running, standing still, and going up and down stairs, and the recognition results serve applications such as estimating a user's daily amount of exercise, analyzing user behaviour, and indoor navigation. Accurately recognizing these daily actions is the core technology behind such applications.
The biggest difficulty in recognizing human actions with a smartphone is the diversity of its wearing positions. Unlike products with fixed wearing positions such as smart bracelets and smart glasses, a smartphone may be placed at various body positions: a jacket pocket, a trouser pocket, a bag, or held in the hand. For the same action, the sensing data captured by the smartphone at different body positions differ, so a single action produces several different distributions of sensing data. Moreover, data captured for the same action at different body positions can differ greatly; when walking, for example, the data captured by a phone in a trouser pocket differ markedly from those captured in a jacket pocket. Conversely, data captured for different actions at different body positions can be similar; for example, the data of a phone in a trouser pocket while walking may resemble the data of a phone in a jacket pocket while running. As a result, the accuracy of smartphone-based human action recognition is often low, and improving recognition accuracy under diverse phone wearing positions is a hard problem in the action recognition field.
For the low action recognition accuracy caused by the diversity of phone wearing positions, three main types of solutions currently exist:
1. Train a separate recognition model for each body position, determine the body position of the phone during recognition, and then recognize the action with the model for that position. The recognition accuracy of this approach depends heavily on the accuracy of phone-position recognition. When the user handles the phone casually, position recognition degrades and action recognition accuracy becomes low.
2. Ignore the phone position and, in the feature extraction stage, extract motion features that are only weakly affected by it, such as inclination-angle features, low-pass-filtered features, and deep neural network features. However, such methods cannot be used on their own; action recognition must ultimately be completed in combination with a classification model.
3. On the basis of scheme 2, improve the generalization of the recognition model across body positions by enlarging the training sample set and optimizing the recognition algorithm; representative methods include classifier combination and hierarchical classification. Classifier combination recognizes actions by training multiple classification models, and the growing number of models makes the method time-consuming, so it is unsuitable for devices with limited battery charge. Hierarchical recognition improves the result by splitting the recognition problem into several sub-problems and deciding, from the confidence of a result, whether to re-recognize with a lower-level model. Its use over the past two years has exposed an obvious defect: lower-level classification nodes built from easily confused samples have low recognition accuracy, and stacking several classification layers easily accumulates errors. The method is therefore unstable overall, and its recognition accuracy is very low when the phone position changes frequently.
In summary, although solutions exist for the diversity of phone wearing positions, they still have shortcomings in recognition accuracy, recognition time, and stability.
Disclosure of Invention
The object of the invention is to provide a hierarchical action recognition method for multiple mobile-phone wearing positions that improves action recognition accuracy, reduces the time consumed by recognition, and improves the stability of the recognition method.
To achieve this purpose, the invention adopts the following technical scheme:
A hierarchical action recognition method for multiple mobile-phone wearing positions, comprising the following steps:
defining the action set to be recognized as A = {a_1, a_2, ..., a_{n_1}} and the set of positions at which the mobile phone may be worn as P = {p_1, p_2, ..., p_{n_2}}, where n_1 ≥ 2 and n_2 ≥ 1; the collected motion samples then follow n_1 × n_2 distributions, each distribution being called a modality;
acquiring the motion sensing data and corresponding action type of each modality, converting the motion sensing data into action samples represented by feature vectors, and collecting the feature vectors of all action samples into an ordered training sample set X = {X_1, X_2, ..., X_{n_1 × n_2}}; setting an action label for each action sample according to its action type, forming an ordered label set Y = {y_1, y_2, ..., y_{n_1 × n_2}}; here X_r denotes the action sample set of modality r, whose number of samples is m_r, y_r denotes the action label of modality r, and r takes the values 1, 2, ..., n_1 × n_2;
constructing a hierarchical classification tree based on the similarity between modalities from the ordered training sample set X and the ordered label set Y, and classifying the feature vector x of an action sample of unknown action type with the hierarchical classification tree to obtain the action label y corresponding to x.
Constructing the hierarchical classification tree based on the similarity between modalities from the ordered training sample set X and the ordered label set Y comprises the following specific steps:
A. defining a classification node N = {P, y, left, right} in the hierarchical classification tree, where P is the logistic regression model of the classification node, y is the action label of the classification node, and left and right store the indexes of the node's left and right subtrees respectively;
B. defining k as the iteration index with initial value n_1 × n_2 + 1, and initializing the ordered leaf node set N' = {N_1, N_2, ..., N_{n_1 × n_2}}, where N_r denotes the leaf node corresponding to modality r; the logistic regression model in N_r is null, its action label is the action label of modality r, and the indexes of its left and right subtrees are null;
C. computing the mean vector x̄_t of each element in X according to

    x̄_t = (1 / m_t) · Σ_{h=1}^{m_t} x_{th}

where m_t denotes the number of samples of the t-th element in X and x_{th} denotes the feature vector of the h-th sample of the t-th element in X;
D. calculating the similarity d_{ij} of the mean vectors of any two elements in X as the distance

    d_{ij} = ||x̄_i - x̄_j||_2

where a smaller d_{ij} indicates a higher similarity;
E. selecting the two elements X_i and X_j whose mean vectors have the maximum similarity; if the corresponding labels satisfy y_i ≠ y_j, or y_i and y_j are both -1, proceeding to step F, otherwise proceeding to step G;
F. constructing a new classification node N_k: taking X_i as the positive class and X_j as the negative class, training the logistic regression model N_k.P of N_k; setting the left subtree index of N_k to N_k.left = N_i and the right subtree index to N_k.right = N_j; setting the action label of N_k to N_k.y = -1; defining a new category label y_k, y_k = -1; then proceeding to step H;
G. constructing a new classification node N_k whose logistic regression model N_k.P, left and right subtree indexes N_k.left and N_k.right, and action label N_k.y are all the same as those of N_i; defining a new category label y_k whose value is the same as y_i; then proceeding to step H;
H. removing X_i and X_j from X and putting their union X_k into X as a new element; removing y_i and y_j from Y and putting y_k into Y as a new element; removing N_i and N_j from N' and putting N_k into N' as a new element; adding 1 to the value of k; then proceeding to step I;
I. if the number of elements in N' is greater than 1, returning to step C; otherwise the algorithm ends, and the single remaining element of N' is taken as the root node of the hierarchical classification tree (an illustrative trace of these steps follows below).
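As an illustrative trace of steps C to I (the actions, positions, and merge order are assumed purely for illustration): with actions {walking, running} and positions {trouser pocket, jacket pocket} there are four modalities, with labels y_1 = y_2 = walking and y_3 = y_4 = running. If the most similar pair of mean vectors belongs to X_1 and X_2 (the same action at two positions), step G merges them into a single cluster without training a model, which shrinks the intra-class distance; if the most similar pair belongs to X_1 and X_3 (different actions), step F trains a walking-versus-running logistic regression node whose left subtree holds the positive class. Repeating until one element remains yields a binary tree with four leaves and at most three trained internal nodes.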
Classifying the feature vector x of an action sample of unknown action type with the hierarchical classification tree to obtain the action label y corresponding to x comprises the following specific steps:
J. taking the root node of the hierarchical classification tree as the current node T, then proceeding to step K;
K. judging whether the logistic regression model T.P of the current node is null; if so, the algorithm ends and the action label T.y of the current node is returned as the action label of x; otherwise, classifying x with the logistic regression model T.P of the current node: if the classification result is positive, proceeding to step L, and if it is negative, proceeding to step M;
L. taking the left subtree T.left of the current node as the new current node T, then returning to step K;
M. taking the right subtree T.right of the current node as the new current node T, then returning to step K.
The invention takes human body action recognition based on the inertial sensors of a smartphone as its application background and addresses the low action recognition accuracy found under diverse smartphone wearing positions. Considering that samples of the same action are distributed differently when the phone is at different body positions, a hierarchical action recognition method based on similarity computation between modalities is proposed. Compared with existing action recognition methods, it has the following advantages:
1. The method does not need to recognize the body position of the phone, avoiding the action recognition errors caused by phone-position recognition errors when the phone position changes frequently; the final action recognition accuracy is therefore higher than that of the first class of existing schemes;
2. The action recognition process of the invention traverses a binary classification tree; the number of classification nodes visited equals the depth of the tree, which is far smaller than the number of classifiers in the classifier combination methods of the third class of schemes, so the invention consumes far less time during recognition (a worked illustration follows after this list);
3. The method constructs hierarchical classification nodes by computing the similarity between modalities. Compared with the hierarchical classification methods of the third class of schemes, it fully follows the classifier design principle of reducing the intra-class distance while enlarging the inter-class distance. This design has two advantages: first, the action recognition accuracy of non-root nodes is far higher than in existing schemes, so the final recognition accuracy is higher; second, raising the recognition accuracy of intermediate nodes greatly reduces error accumulation, so the scheme is more stable than existing schemes when the phone position changes frequently.
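As a worked illustration of advantage 2 (all numbers are assumed for illustration only): with n_1 = 4 actions and n_2 = 3 wearing positions there are n_1 × n_2 = 12 modalities, so the hierarchical classification tree has 12 leaves and at most 11 internal classification nodes. A single recognition traverses only the nodes on one root-to-leaf path, on the order of ⌈log_2 12⌉ = 4 logistic regression evaluations for a roughly balanced tree, whereas a one-versus-one classifier combination over the same 12 modalities would evaluate up to 12 × 11 / 2 = 66 binary classifiers.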
Drawings
FIG. 1 is a schematic diagram illustrating a hierarchical classification tree construction process according to the present invention;
FIG. 2 is a schematic diagram of a hierarchical classification tree action recognition process according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
The hierarchical action recognition method for multiple mobile-phone wearing positions according to the invention comprises the following steps:
Assume the action set to be recognized is A = {a_1, a_2, ..., a_{n_1}} and the set of positions at which the mobile phone may be worn is P = {p_1, p_2, ..., p_{n_2}}, where n_1 ≥ 2 and n_2 ≥ 1. Since the phone may be at any wearing position while the human body performs each type of action, the collected motion samples follow a total of n_1 × n_2 distributions, each of which is called a modality.
The motion sensing data of each modality and the corresponding action types are acquired by methods such as manual collection and labeling. The raw motion sensing data are converted into action samples represented by one-dimensional feature vectors using existing techniques (such as denoising, windowing, and feature extraction), and the feature vectors of all action samples form an ordered training sample set X = {X_1, X_2, ..., X_{n_1 × n_2}}. An action label is set for each action sample according to its action type, forming an ordered label set Y = {y_1, y_2, ..., y_{n_1 × n_2}}, where X_r denotes the action sample set of modality r, whose number of samples is m_r; y_r denotes the action label of modality r; and r takes the values 1, 2, ..., n_1 × n_2. Since several action sample sets in the training sample set X may correspond to the same action type, the ordered label set Y may contain multiple identical action labels.
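A minimal sketch of this preprocessing stage follows. The window length, step, and statistical features below are assumptions chosen for illustration; the invention only requires that each sensing window become a one-dimensional feature vector.

    import numpy as np

    def extract_features(window):
        # window: (n_samples, 3) array of tri-axial accelerometer readings.
        # Per-axis statistics plus signal-magnitude statistics; the exact
        # feature set is an illustrative assumption.
        mag = np.linalg.norm(window, axis=1)
        feats = []
        for axis in range(3):
            a = window[:, axis]
            feats += [a.mean(), a.std(), np.abs(np.diff(a)).mean()]
        feats += [mag.mean(), mag.std()]
        return np.array(feats)

    def windows_to_samples(stream, win_len=128, step=64):
        # Slide a fixed-length window (50% overlap assumed) over the raw
        # sensing stream and turn each window into one feature vector.
        return np.array([extract_features(stream[s:s + win_len])
                         for s in range(0, len(stream) - win_len + 1, step)])

Passing the sensing stream of every modality through windows_to_samples yields the elements X_r of the ordered training sample set X.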
A hierarchical classification tree based on the similarity between modalities is constructed from the ordered training sample set X and the ordered label set Y, and the feature vector x of an action sample of unknown action type is classified with the hierarchical classification tree to obtain the action label y corresponding to x.
As shown in FIG. 1, the input of the hierarchical classification tree construction is the ordered training sample set X and the ordered label set Y, and the return value is the root node of the hierarchical classification tree. The specific steps are as follows:
A. for convenient description of the final hierarchical classification tree, defining a classification node N = {P, y, left, right} in the tree, where P is the logistic regression model of the classification node, obtained during tree construction by training on sample sets of different modalities; y is the action label of the classification node; and left and right store the indexes of the node's left and right subtrees respectively;
B. defining k as the iteration index with initial value n_1 × n_2 + 1, and initializing the ordered leaf node set N' = {N_1, N_2, ..., N_{n_1 × n_2}}, where N_r denotes the leaf node corresponding to modality r; the logistic regression model in N_r is null, its action label is the action label of modality r, and the indexes of its left and right subtrees are null;
C. computing the mean vector x̄_t of each element in X according to

    x̄_t = (1 / m_t) · Σ_{h=1}^{m_t} x_{th}

where m_t denotes the number of samples of the t-th element in X and x_{th} denotes the feature vector of the h-th sample of the t-th element in X;
D. calculating the similarity d_{ij} of the mean vectors of any two elements in X as the distance

    d_{ij} = ||x̄_i - x̄_j||_2

where a smaller d_{ij} indicates a higher similarity;
E. selecting the two elements X_i and X_j whose mean vectors have the maximum similarity; if the corresponding labels satisfy y_i ≠ y_j, or y_i and y_j are both -1, proceeding to step F, otherwise proceeding to step G;
F. constructing a new classification node N_k: taking X_i as the positive class and X_j as the negative class, training the logistic regression model N_k.P of N_k; setting the left subtree index of N_k to N_k.left = N_i and the right subtree index to N_k.right = N_j; setting the action label of N_k to N_k.y = -1; defining a new category label y_k, y_k = -1; then proceeding to step H;
G. constructing a new classification node N_k whose logistic regression model N_k.P, left and right subtree indexes N_k.left and N_k.right, and action label N_k.y are all the same as those of N_i; defining a new category label y_k whose value is the same as y_i; then proceeding to step H;
H. removing X_i and X_j from X and putting their union X_k into X as a new element; removing y_i and y_j from Y and putting y_k into Y as a new element; removing N_i and N_j from N' and putting N_k into N' as a new element; adding 1 to the value of k; then proceeding to step I;
I. if the number of elements in N' is greater than 1, returning to step C; otherwise the algorithm ends, and the single remaining element of N' is taken as the root node of the hierarchical classification tree (a code sketch of steps A to I follows below).
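A minimal, runnable sketch of steps A to I follows. It assumes scikit-learn's LogisticRegression as the node model and the Euclidean distance between mean vectors as the (inverse) similarity; both choices, like all identifier names, are illustrative assumptions rather than the prescribed implementation.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    class Node:
        # Classification node N = {P, y, left, right} from step A.
        def __init__(self, model=None, label=None, left=None, right=None):
            self.P, self.y, self.left, self.right = model, label, left, right

    def build_tree(X, Y):
        # X: list of (m_r, d) feature arrays, one per modality; Y: their labels.
        X, Y = list(X), list(Y)
        nodes = [Node(label=y) for y in Y]                   # step B: leaf set N'
        while len(nodes) > 1:                                # loop of step I
            means = [x.mean(axis=0) for x in X]              # step C
            _, (i, j) = min(                                 # steps D-E: closest pair
                ((np.linalg.norm(means[a] - means[b]), (a, b))
                 for a in range(len(X)) for b in range(a + 1, len(X))),
                key=lambda t: t[0])
            if Y[i] != Y[j] or (Y[i] == -1 and Y[j] == -1):  # step F
                data = np.vstack([X[i], X[j]])
                target = np.hstack([np.ones(len(X[i])), np.zeros(len(X[j]))])
                model = LogisticRegression(max_iter=1000).fit(data, target)
                new, y_k = Node(model, -1, nodes[i], nodes[j]), -1
            else:                                            # step G: same label
                new = Node(nodes[i].P, nodes[i].y, nodes[i].left, nodes[i].right)
                y_k = Y[i]
            merged = np.vstack([X[i], X[j]])                 # step H: union X_k
            for idx in sorted((i, j), reverse=True):
                X.pop(idx); Y.pop(idx); nodes.pop(idx)
            X.append(merged); Y.append(y_k); nodes.append(new)
        return nodes[0]                                      # step I: the root

Because a merge in step G reuses the fields of N_i, clusters of the same action at different positions collapse into a single node before any classifier separates them, which is exactly the intra-class-distance reduction described above.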
The hierarchical classification tree constructed in this way is a binary tree. A leaf node has a null logistic regression model and null left and right subtree indexes, and its action label is the action label of the corresponding modality. A non-leaf node has a non-null logistic regression model, obtained by training on the training sample set and the ordered label set, and non-null left and right subtree indexes; its action label is set to -1 or another value different from the action label of any leaf node.
As shown in FIG. 2, the hierarchical classification tree is used to classify the feature vector x of an action sample of unknown action type to obtain the action label y corresponding to x. The specific steps are as follows:
J. taking the root node of the hierarchical classification tree as the current node T, then proceeding to step K;
K. judging whether the logistic regression model T.P of the current node is null; if so, the algorithm ends and the action label T.y of the current node is returned as the action label of x; otherwise, classifying x with the logistic regression model T.P of the current node: if the classification result is positive, proceeding to step L, and if it is negative, proceeding to step M;
L. taking the left subtree T.left of the current node as the new current node T, then returning to step K;
M. taking the right subtree T.right of the current node as the new current node T, then returning to step K (a code sketch of steps J to M follows below).
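A matching sketch of steps J to M, under the same assumptions and reusing the Node class and build_tree from the previous sketch:

    def classify(root, x):
        # Steps J-M: walk from the root down to a leaf.
        T = root                                   # step J
        while T.P is not None:                     # step K: null model means leaf
            positive = T.P.predict(x.reshape(1, -1))[0] == 1
            T = T.left if positive else T.right    # step L / step M
        return T.y                                 # leaf action label as y

For example, root = build_tree(X, Y) followed by y = classify(root, extract_features(window)) would label one new sensing window; each recognition evaluates only the logistic regression models on a single root-to-leaf path.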
When a smartphone is used to recognize human actions, the invention does not need to recognize the body position of the phone, so the action recognition errors caused by phone-position recognition errors under frequently changing phone positions are avoided and the action recognition accuracy is effectively improved. The invention constructs hierarchical classification nodes by computing the similarity between modalities, and the action recognition process is a traversal of the binary classification tree, which greatly reduces the time consumed by action recognition, improves the action recognition accuracy of the intermediate nodes, and fully ensures the overall stability of the action recognition method.
The invention is not limited to the above preferred embodiments; any modifications, equivalent substitutions, and improvements made within the spirit and principle of the invention shall fall within its protection scope.

Claims (2)

1. A hierarchical action recognition method for multiple mobile-phone wearing positions, characterized by comprising the following steps:
defining the action set to be recognized as A = {a_1, a_2, ..., a_{n_1}} and the set of positions at which the mobile phone may be worn as P = {p_1, p_2, ..., p_{n_2}}, where n_1 ≥ 2 and n_2 ≥ 1; the collected motion samples then follow n_1 × n_2 distributions, each distribution being called a modality;
acquiring the motion sensing data and corresponding action type of each modality, converting the motion sensing data into action samples represented by feature vectors, and collecting the feature vectors of all action samples into an ordered training sample set X = {X_1, X_2, ..., X_{n_1 × n_2}}; setting an action label for each action sample according to its action type, forming an ordered label set Y = {y_1, y_2, ..., y_{n_1 × n_2}}; here X_r denotes the action sample set of modality r, whose number of samples is m_r, y_r denotes the action label of modality r, and r takes the values 1, 2, ..., n_1 × n_2;
constructing a hierarchical classification tree based on the similarity between modalities from the ordered training sample set X and the ordered label set Y, and classifying the feature vector x of an action sample of unknown action type with the hierarchical classification tree to obtain the action label y corresponding to x;
wherein constructing the hierarchical classification tree based on the similarity between modalities from the ordered training sample set X and the ordered label set Y comprises the following specific steps:
A. defining a classification node N = { P, y, left, right } in a hierarchical classification tree, wherein P is a logistic regression model of the classification node, y is an action label of the classification node, and left and right respectively store indexes of left and right subtrees of the classification node;
B. defining k as the iteration index with initial value n_1 × n_2 + 1, and initializing the ordered leaf node set N' = {N_1, N_2, ..., N_{n_1 × n_2}}, where N_r denotes the leaf node corresponding to modality r; the logistic regression model in N_r is null, its action label is the action label of modality r, and the indexes of its left and right subtrees are null;
C. computing the mean vector x̄_t of each element in X according to

    x̄_t = (1 / m_t) · Σ_{h=1}^{m_t} x_{th}

where m_t denotes the number of samples of the t-th element in X and x_{th} denotes the feature vector of the h-th sample of the t-th element in X;
D. calculating the similarity d_{ij} of the mean vectors of any two elements in X as the distance

    d_{ij} = ||x̄_i - x̄_j||_2

where a smaller d_{ij} indicates a higher similarity;
E. selecting the two elements X_i and X_j whose mean vectors have the maximum similarity; if the corresponding labels satisfy y_i ≠ y_j, or y_i and y_j are both -1, proceeding to step F, otherwise proceeding to step G;
F. constructing a new classification node N_k: taking X_i as the positive class and X_j as the negative class, training the logistic regression model N_k.P of N_k; setting the left subtree index of N_k to N_k.left = N_i and the right subtree index to N_k.right = N_j; setting the action label of N_k to N_k.y = -1; defining a new category label y_k, y_k = -1; then proceeding to step H;
G. constructing a new classification node N_k whose logistic regression model N_k.P, left and right subtree indexes N_k.left and N_k.right, and action label N_k.y are all the same as those of N_i; defining a new category label y_k whose value is the same as y_i; then proceeding to step H;
H. removing X_i and X_j from X and putting their union X_k into X as a new element; removing y_i and y_j from Y and putting y_k into Y as a new element; removing N_i and N_j from N' and putting N_k into N' as a new element; adding 1 to the value of k; then proceeding to step I;
I. if the number of elements in N' is greater than 1, returning to step C; otherwise the algorithm ends, and the single remaining element of N' is taken as the root node of the hierarchical classification tree.
2. The hierarchical action recognition method for multiple mobile-phone wearing positions according to claim 1, characterized in that classifying the feature vector x of an action sample of unknown action type with the hierarchical classification tree to obtain the action label y corresponding to x comprises the following specific steps:
J. taking the root node of the hierarchical classification tree as the current node T, then proceeding to step K;
K. judging whether the logistic regression model T.P of the current node is null; if so, the algorithm ends and the action label T.y of the current node is returned as the action label of x; otherwise, classifying x with the logistic regression model T.P of the current node: if the classification result is positive, proceeding to step L, and if it is negative, proceeding to step M;
L. taking the left subtree T.left of the current node as the new current node T, then returning to step K;
M. taking the right subtree T.right of the current node as the new current node T, then returning to step K.
CN202010855813.4A 2020-08-24 2020-08-24 Hierarchical action identification method for multi-mobile-phone wearing positions Active CN112016430B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010855813.4A CN112016430B (en) 2020-08-24 2020-08-24 Hierarchical action identification method for multi-mobile-phone wearing positions


Publications (2)

Publication Number / Publication Date
CN112016430A (en) - 2020-12-01
CN112016430B (en) - 2022-10-11

Family

ID=73505662

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010855813.4A Active CN112016430B (en) 2020-08-24 2020-08-24 Hierarchical action identification method for multi-mobile-phone wearing positions

Country Status (1)

Country Link
CN (1) CN112016430B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8484025B1 (en) * 2012-10-04 2013-07-09 Google Inc. Mapping an audio utterance to an action using a classifier
CN107122752A (en) * 2017-05-05 2017-09-01 北京工业大学 A kind of human action comparison method and device
CN107577785A (en) * 2017-09-15 2018-01-12 南京大学 A kind of level multi-tag sorting technique suitable for law identification
CN107948933A (en) * 2017-11-14 2018-04-20 中国矿业大学 A kind of shared bicycle localization method based on smart mobile phone action recognition
CN109086698A (en) * 2018-07-20 2018-12-25 大连理工大学 A kind of human motion recognition method based on Fusion
CN111310812A (en) * 2020-02-06 2020-06-19 佛山科学技术学院 Data-driven hierarchical human activity recognition method and system

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Tian Lan et al., "Action Recognition by Hierarchical Mid-level Action Elements," 2015 IEEE International Conference on Computer Vision, 2015, pp. 4552-4560. *
Yale Song et al., "Action Recognition by Hierarchical Sequence Summarization," CVPR 2013, 2013, pp. 3562-3569. *
Changhai Wang et al., "WOODY: A Post-Process Method for Smartphone-Based Activity Recognition," IEEE Access, 2018, pp. 49611-49625. *
Wang Changhai et al., "Phone-position-independent action recognition based on hierarchical classification" (基于层次分类的手机位置无关的动作识别), Journal of Electronics & Information Technology (电子与信息学报), vol. 39, no. 1, January 2017, pp. 191-197. *
Zheng Yu, "Research on hierarchical multi-task learning for large-scale image classification" (面向大规模图像分类的层次化多任务学习算法研究), China Doctoral Dissertations Full-text Database, Information Science and Technology, January 2019, I138-173. *

Also Published As

Publication number Publication date
CN112016430A (en) 2020-12-01

Similar Documents

Publication Publication Date Title
Tang et al. Multiscale deep feature learning for human activity recognition using wearable sensors
Feng et al. Few-shot learning-based human activity recognition
Ignatov et al. Human activity recognition using quasiperiodic time series collected from a single tri-axial accelerometer
CN103824051B (en) Local region matching-based face search method
CN105956560B (en) A kind of model recognizing method based on the multiple dimensioned depth convolution feature of pondization
CN102938065B (en) Face feature extraction method and face identification method based on large-scale image data
CN102938070B (en) A kind of behavior recognition methods based on action subspace and weight behavior model of cognition
Uddin et al. A guided random forest based feature selection approach for activity recognition
CN110674875A (en) Pedestrian motion mode identification method based on deep hybrid model
CN111428658B (en) Gait recognition method based on modal fusion
CN111401303B (en) Cross-visual angle gait recognition method with separated identity and visual angle characteristics
Huang et al. Clothing landmark detection using deep networks with prior of key point associations
CN104408405A (en) Face representation and similarity calculation method
Islam et al. A CNN based approach for garments texture design classification
CN110555463B (en) Gait feature-based identity recognition method
CN107145852A (en) A kind of character recognition method based on homologous cosine losses function
Huang et al. A study on multiple wearable sensors for activity recognition
CN106406516A (en) Local real-time movement trajectory characteristic extraction and identification method for smartphone
Sheng et al. Unsupervised embedding learning for human activity recognition using wearable sensor data
Sezavar et al. DCapsNet: Deep capsule network for human activity and gait recognition with smartphone sensors
Liu et al. Extreme learning machine for time sequence classification
Li et al. Multiresolution fusion convolutional network for open set human activity recognition
CN105550642B (en) Gender identification method and system based on multiple dimensioned linear Differential Characteristics low-rank representation
CN112016430B (en) Hierarchical action identification method for multi-mobile-phone wearing positions
CN106599988B (en) A kind of multistage semantic feature extraction method of intelligence wearable device behavioral data

Legal Events

Code / Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant