CN116578910B - Training action recognition method and system - Google Patents
Training action recognition method and system Download PDFInfo
- Publication number
- CN116578910B CN116578910B CN202310855833.5A CN202310855833A CN116578910B CN 116578910 B CN116578910 B CN 116578910B CN 202310855833 A CN202310855833 A CN 202310855833A CN 116578910 B CN116578910 B CN 116578910B
- Authority
- CN
- China
- Prior art keywords
- historical
- data
- angular velocity
- current
- reconstructed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 230000009471 action Effects 0.000 title claims abstract description 168
- 238000000034 method Methods 0.000 title claims abstract description 65
- 238000012549 training Methods 0.000 title claims abstract description 54
- 239000011159 matrix material Substances 0.000 claims abstract description 136
- 230000001133 acceleration Effects 0.000 claims abstract description 127
- 230000004927 fusion Effects 0.000 claims description 25
- 238000013507 mapping Methods 0.000 claims description 21
- 238000004364 calculation method Methods 0.000 claims description 20
- 230000008569 process Effects 0.000 claims description 19
- 238000013527 convolutional neural network Methods 0.000 claims description 6
- 238000000354 decomposition reaction Methods 0.000 claims description 3
- 238000010606 normalization Methods 0.000 claims description 3
- 238000012545 processing Methods 0.000 claims description 3
- 238000010845 search algorithm Methods 0.000 claims description 3
- 238000012163 sequencing technique Methods 0.000 claims description 3
- 230000008859 change Effects 0.000 description 3
- 238000012216 screening Methods 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 238000011176 pooling Methods 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 238000005096 rolling process Methods 0.000 description 1
- 238000012706 support-vector machine Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a training action recognition method and a training action recognition system, which belong to the technical field of action recognition, wherein the training action recognition method comprises the following steps: s1, collecting a current motion data set and a historical motion data set of a user; s2, reconstructing the historical acceleration data and the historical angular velocity data at each historical moment to generate corresponding reconstructed acceleration data and reconstructed angular velocity data; s3, generating a historical action matrix according to the reconstructed acceleration data and the reconstructed angular velocity data of each historical moment, and generating a current action matrix according to the current acceleration data and the current angular velocity data; s4, recognizing training actions according to the historical action matrix and the current action matrix. According to the training action recognition method, the motion data at the current moment and the historical moment are collected through the three-axis sensor, the motion data are effectively reconstructed, redundant parameters of the motion data can be removed, the influence of noise on training action recognition is avoided, and the method can be suitable for recognition of complex scenes.
Description
Technical Field
The invention belongs to the technical field of motion recognition, and particularly relates to a training motion recognition method and system.
Background
With the development of sensor technology, motion recognition applications have become a hotspot of interest. Motion recognition has been widely used in the fields of sports and motor skills training, etc. For example, when a user wears sports equipment with a multi-axis sensor, the sports equipment collects sports data and transmits the collected sports data to an action computing device, the action computing device can be arranged in an intelligent terminal such as a mobile phone and a tablet computer, and after the action computing device performs calculation analysis on the sports data transmitted by the data collecting device, position and posture information and the like when the user moves can be obtained, so that a basis is provided for the user to share data, obtain action guidance and the like.
However, the conventional motion recognition algorithm adopts a convolutional neural network algorithm, is often limited by the accuracy degree of motion capture, and is difficult to have higher accuracy.
Disclosure of Invention
The invention provides a training action recognition method and a training action recognition system for solving the problems.
The technical scheme of the invention is as follows: a training action recognition method comprises the following steps:
s1, acquiring a current motion data set and a historical motion data set of a user by using a three-axis sensor; the current motion data set comprises current acceleration data and current angular velocity data at the current moment, and the historical motion data set comprises historical acceleration data and historical angular velocity data at each historical moment;
s2, reconstructing the historical acceleration data and the historical angular velocity data at each historical moment to generate corresponding reconstructed acceleration data and reconstructed angular velocity data;
s3, generating a historical action matrix according to the reconstructed acceleration data and the reconstructed angular velocity data of each historical moment, and generating a current action matrix according to the current acceleration data and the current angular velocity data;
s4, recognizing training actions according to the historical action matrix and the current action matrix.
Further, S2 comprises the following sub-steps:
s21, extracting characteristic coefficients of historical acceleration data and characteristic coefficients of historical angular velocity data at each historical moment;
s22, generating a historical data dictionary matrix according to the characteristic coefficient of the historical acceleration data and the characteristic coefficient of the historical angular velocity data;
s23, performing sparse decomposition on the historical data dictionary matrix by using a characteristic symbol search algorithm to generate sparse coefficients;
s24, taking the product of the historical acceleration data and the sparse coefficient as fitting acceleration data, and taking the product of the historical angular velocity data and the sparse coefficient as fitting angular velocity data;
s25, respectively carrying out normalization processing on the fitting acceleration data and the fitting angular velocity data to generate reconstruction acceleration data and reconstruction angular velocity data.
Further, in S21, the characteristic coefficient of the historical acceleration dataa 0 The calculation formula of (2) is as follows:
;
in the method, in the process of the invention,Nrepresents the total number of historical time instants,a xn represent the firstnTime of historyxThe historical acceleration of the direction is used to determine,a x n(-1) represent the firstn-1 historic momentxThe historical acceleration of the direction is used to determine,a x n(+1) represent the firstn+1 historic momentsxThe historical acceleration of the direction is used to determine,a yn represent the firstnTime of historyyThe historical acceleration of the direction is used to determine,a y n(-1) represent the firstn-1 historic momentyThe historical acceleration of the direction is used to determine,a y n(+1) represent the firstn+1 historic momentsyThe historical acceleration of the direction is used to determine,a zn represent the firstnTime of historyzThe historical acceleration of the direction is used to determine,a z n(-1) represent the firstn-1 historic momentzThe historical acceleration of the direction is used to determine,a z n(+1) represent the firstn+1 historic momentszThe historical acceleration of the direction is used to determine,T 1 representing the period over which the historical acceleration data is collected,a xmax representation ofxThe maximum historical acceleration of the direction is,a ymax representation ofyThe maximum historical acceleration of the direction is,a zmax representation ofzMaximum historical acceleration of direction;
s21, characteristic coefficient of historical angular velocity dataw 0 The calculation formula of (2) is as follows:
;
in the method, in the process of the invention,w xn represent the firstnTime of historyxThe historical angular velocity of the direction is set,w x n(-1) represent the firstn-1 historic momentxThe historical angular velocity of the direction is set,w x n(+1) represent the firstn+1 historic momentsxThe historical angular velocity of the direction is set,w yn represent the firstnTime of historyyThe historical angular velocity of the direction is set,w y n(-1) represent the firstn-1 historic momentyThe historical angular velocity of the direction is set,w y n(+1) represent the firstn+1 historic momentsyThe historical angular velocity of the direction is set,w zn represent the firstnTime of historyzThe historical angular velocity of the direction is set,w z n(-1) represent the firstn-1 historic momentzThe historical angular velocity of the direction is set,w z n(+1) represent the firstn+1 historic momentszThe historical angular velocity of the direction is set,T 2 representing the period over which the user's historical angular velocity data is collected,w xmax representation ofxThe maximum historical angular velocity of the direction,w ymax representation ofyThe maximum historical angular velocity of the direction,w zmax representation ofzMaximum historical angular velocity of direction.
Further, in S3, the specific method for generating the history action matrix is as follows: sequentially sequencing the reconstructed acceleration data and the reconstructed angular velocity data at all historical moments from large to small to generate a reconstructed motion data set, equally dividing the reconstructed motion data set into a first reconstructed motion data subset and a second reconstructed motion data subset, respectively calculating a first mapping characteristic value of the first reconstructed motion data subset and a second mapping characteristic value of the second reconstructed motion data subset, removing data smaller than the second mapping characteristic value in the first reconstructed motion data subset, removing data smaller than the first mapping characteristic value in the second reconstructed motion data subset, generating a latest reconstructed motion data set, and generating a current historical motion matrix according to the latest reconstructed motion data.
Further, a first mapping characteristic value of the first reconstructed motion data subsetc 1 The calculation formula of (2) is as follows:
;
in the method, in the process of the invention,σ 1 representing the standard deviation of all data in the first reconstructed motion data subset,Cthe constant is represented by a value that is a function of,A m representing the first reconstructed motion data subsetmThe data of the plurality of data,A m-1 representing the first reconstructed motion data subsetmThe number of data to be processed is-1,Mthe number of data representing the first reconstructed motion data subset,μ ave-1 representing the mean of all reconstructed acceleration data in the first subset of reconstructed motion data,μ ave-2 representing a mean value of all reconstructed angular velocity data in the first subset of reconstructed motion data;
second mapping feature values of the second reconstructed motion data subsetc 2 The calculation formula of (2) is as follows:
;
in the method, in the process of the invention,σ 2 representing the standard deviation of all data in the second reconstructed motion data subset,B k representing the first in the second reconstructed motion data subsetkThe data of the plurality of data,B k-1 representing the first in the second reconstructed motion data subsetkThe number of data to be processed is-1,Kthe number of data representing the second subset of reconstructed motion data,μ ave-3 representing the mean of all reconstructed acceleration data in the second subset of reconstructed motion data,μ ave-4 representing the mean of all reconstructed angular velocity data in the second subset of reconstructed motion data.
Further, in S3, the specific method for generating the current action matrix is as follows: taking the average value between the current acceleration data and the current angular velocity data as an element of a current action matrix, filling the rest elements with 1, and generating the current action matrix.
Further, S4 includes the following steps:
s41, fusing the historical action matrix and the current action matrix to generate a fusion feature matrix;
s42, extracting fusion characteristic coefficients of the fusion characteristic matrix;
s43, extracting characteristic values of a historical action matrix and characteristic values of a current action matrix;
s44, judging whether the average value between the characteristic value of the historical action matrix and the characteristic value of the current action matrix is larger than the fusion characteristic coefficient of the fusion characteristic matrix, if so, entering S45, otherwise, entering S46;
s45, taking a historical action matrix and a current action matrix as input of a convolutional neural network to generate training actions;
s46, performing integral operation on the current acceleration data, the current angular velocity data and the period of collecting the current motion data set to generate training actions.
Further, in S41, feature matrices are fusedZThe calculation formula of (2) is as follows:
;
in the method, in the process of the invention,Xthe historical action matrix is represented and is displayed,Yrepresenting the current action matrix of the device,representing the hadamard product operation.
Further, in S42, the fusion feature coefficients of the fusion feature matrixdThe calculation formula of (2) is as follows:
;
in the method, in the process of the invention,Xthe historical action matrix is represented and is displayed,Yrepresenting the current action matrix of the device,λ 1 the eigenvalues of the historical action matrix are represented,λ 2 representing the eigenvalues of the current action matrix,Irepresenting the identity matrix of the cell, Z represents the fusion feature matrix, I.I 2 Representing a second order norm operation.
The beneficial effects of the invention are as follows:
(1) According to the training action recognition method, the motion data at the current moment and the historical moment are collected through the three-axis sensor, the motion data are effectively reconstructed, redundant parameters of the motion data can be removed, and the influence of noise on training action recognition is avoided;
(2) According to the training action recognition method, by constructing a historical action matrix and a current action matrix, carrying out fusion operation, eigenvalue extraction and other operations on the two action matrices, judging the training action to be an instant action or a continuous action through eigenvalue comparison, and adopting different processing methods aiming at different types of actions, the recognized training action is more accurate, and the recognition of complex scenes can be adapted.
Based on the method, the invention also provides a training action recognition system which comprises a data acquisition unit, a data reconstruction unit, an action matrix generation unit and a training action recognition unit;
the data acquisition unit is used for acquiring a current motion data set and a historical motion data set of a user by using the three-axis sensor; the current motion data set comprises current acceleration data and current angular velocity data at the current moment, and the historical motion data set comprises historical acceleration data and historical angular velocity data at each historical moment;
the data reconstruction unit is used for reconstructing the historical acceleration data and the historical angular velocity data at each historical moment to generate corresponding reconstructed acceleration data and reconstructed angular velocity data;
the action matrix generation unit is used for generating a historical action matrix according to the reconstructed acceleration data and the reconstructed angular velocity data of each historical moment and generating a current action matrix according to the current acceleration data and the current angular velocity data;
the training action recognition unit is used for recognizing training actions according to the historical action matrix and the current action matrix.
The beneficial effects of the invention are as follows: the training action recognition system can recognize different types of actions through the processes of data acquisition, reconstruction data, matrix generation, action recognition and the like, and the action recognition accuracy is improved.
Drawings
FIG. 1 is a flow chart of a training action recognition method;
fig. 2 is a block diagram of a training motion recognition system.
Detailed Description
Embodiments of the present invention are further described below with reference to the accompanying drawings.
As shown in fig. 1, the present invention provides a training action recognition method, which includes the following steps:
s1, acquiring a current motion data set and a historical motion data set of a user by using a three-axis sensor; the current motion data set comprises current acceleration data and current angular velocity data at the current moment, and the historical motion data set comprises historical acceleration data and historical angular velocity data at each historical moment;
s2, reconstructing the historical acceleration data and the historical angular velocity data at each historical moment to generate corresponding reconstructed acceleration data and reconstructed angular velocity data;
s3, generating a historical action matrix according to the reconstructed acceleration data and the reconstructed angular velocity data of each historical moment, and generating a current action matrix according to the current acceleration data and the current angular velocity data;
s4, recognizing training actions according to the historical action matrix and the current action matrix.
In an embodiment of the present invention, S2 comprises the following sub-steps:
s21, extracting characteristic coefficients of historical acceleration data and characteristic coefficients of historical angular velocity data at each historical moment;
s22, generating a historical data dictionary matrix according to the characteristic coefficient of the historical acceleration data and the characteristic coefficient of the historical angular velocity data;
s23, performing sparse decomposition on the historical data dictionary matrix by using a characteristic symbol search algorithm to generate sparse coefficients;
s24, taking the product of the historical acceleration data and the sparse coefficient as fitting acceleration data, and taking the product of the historical angular velocity data and the sparse coefficient as fitting angular velocity data;
s25, respectively carrying out normalization processing on the fitting acceleration data and the fitting angular velocity data to generate reconstruction acceleration data and reconstruction angular velocity data.
The method has the advantages that the historical acceleration data and the historical angular velocity data at the historical moment are subjected to fitting reconstruction, so that characteristic values (used for representing data characteristics) of the data can be extracted, the data change trend at the current moment can be predicted (used for constructing a historical action matrix in a subsequent step and analogizing with the current action matrix), and the dimension and noise of the data can be reduced.
In the embodiment of the present invention, in S21, the characteristic coefficient of the historical acceleration dataa 0 The calculation formula of (2) is as follows:
;
in the method, in the process of the invention,Nrepresents the total number of historical time instants,a xn represent the firstnTime of historyxThe historical acceleration of the direction is used to determine,a x n(-1) represent the firstn-1 historic momentxThe historical acceleration of the direction is used to determine,a x n(+1) represent the firstn+1 historic momentsxThe historical acceleration of the direction is used to determine,a yn represent the firstnTime of historyyThe historical acceleration of the direction is used to determine,a y n(-1) represent the firstn-1 historic momentyThe historical acceleration of the direction is used to determine,a y n(+1) represent the firstn+1 historic momentsyThe historical acceleration of the direction is used to determine,a zn represent the firstnTime of historyzThe historical acceleration of the direction is used to determine,a z n(-1) represent the firstn-1 historic momentzThe historical acceleration of the direction is used to determine,a z n(+1) represent the firstn+1 historic momentszThe historical acceleration of the direction is used to determine,T 1 representing the period over which the historical acceleration data is collected,a xmax representation ofxThe maximum historical acceleration of the direction is,a ymax representation ofyThe maximum historical acceleration of the direction is,a zmax representation ofzMaximum historical acceleration of direction;
s21, characteristic coefficient of historical angular velocity dataw 0 The calculation formula of (2) is as follows:
;
in the method, in the process of the invention,w xn represent the firstnTime of historyxThe historical angular velocity of the direction is set,w x n(-1) represent the firstn-1 historic momentxThe historical angular velocity of the direction is set,w x n(+1) represent the firstn+1 historic momentsxThe historical angular velocity of the direction is set,w yn represent the firstnTime of historyyThe historical angular velocity of the direction is set,w y n(-1) represent the firstn-1 historic momentyThe historical angular velocity of the direction is set,w y n(+1) represent the firstn+1 historic momentsyThe historical angular velocity of the direction is set,w zn represent the firstnTime of historyzThe historical angular velocity of the direction is set,w z n(-1) represent the firstn-1 historic momentzThe historical angular velocity of the direction is set,w z n(+1) represent the firstn+1 historic momentszThe historical angular velocity of the direction is set,T 2 representing the period over which the user's historical angular velocity data is collected,w xmax representation ofxThe maximum historical angular velocity of the direction,w ymax representation ofyThe maximum historical angular velocity of the direction,w zmax representation ofzMaximum historical angular velocity of direction.
And calculating the acceleration difference between the current historical moment and the last historical moment in three directions and the acceleration difference between the next historical moment and the current historical moment, wherein the acceleration difference is used as a first factor of a characteristic coefficient, and the first factor is obtained through the calculation of the acceleration values of the current historical moment and the adjacent historical moment and can characterize the acceleration data change of each historical moment. And calculating the maximum historical acceleration value in three directions as a second factor of the characteristic coefficient, wherein the second factor can represent the maximum amplitude change of the acceleration data at the historical moment. Taking the product of the first factor and the second factor as a characteristic coefficient, the amplitude variation of the acceleration value at each moment can be characterized. The characteristic coefficients of the historical angular velocity data are the same.
In the embodiment of the present invention, in S3, a specific method for generating the historical motion matrix is as follows: sequentially sequencing the reconstructed acceleration data and the reconstructed angular velocity data at all historical moments from large to small to generate a reconstructed motion data set, equally dividing the reconstructed motion data set into a first reconstructed motion data subset and a second reconstructed motion data subset, respectively calculating a first mapping characteristic value of the first reconstructed motion data subset and a second mapping characteristic value of the second reconstructed motion data subset, removing data smaller than the second mapping characteristic value in the first reconstructed motion data subset, removing data smaller than the first mapping characteristic value in the second reconstructed motion data subset, generating a latest reconstructed motion data set, and generating a current historical motion matrix according to the latest reconstructed motion data.
In an embodiment of the invention, the first mapping characteristic value of the first reconstructed motion data subsetc 1 The calculation formula of (2) is as follows:
;
in the method, in the process of the invention,σ 1 representing the standard deviation of all data in the first reconstructed motion data subset,Cthe constant is represented by a value that is a function of,A m representing the first reconstructed motion data subsetmThe data of the plurality of data,A m-1 representing the first reconstructed motion data subsetmThe number of data to be processed is-1,Mthe number of data representing the first reconstructed motion data subset,μ ave-1 representing the mean of all reconstructed acceleration data in the first subset of reconstructed motion data,μ ave-2 representing a mean value of all reconstructed angular velocity data in the first subset of reconstructed motion data;
second mapping feature values of the second reconstructed motion data subsetc 2 The calculation formula of (2) is as follows:
;
in the method, in the process of the invention,σ 2 representing the standard deviation of all data in the second reconstructed motion data subset,B k representing the first in the second reconstructed motion data subsetkThe data of the plurality of data,B k-1 representing the first in the second reconstructed motion data subsetkThe number of data to be processed is-1,Kthe number of data representing the second subset of reconstructed motion data,μ ave-3 representing the mean of all reconstructed acceleration data in the second subset of reconstructed motion data,μ ave-4 representing the mean of all reconstructed angular velocity data in the second subset of reconstructed motion data.
And when the reconstructed motion data set is constructed, the reconstructed acceleration data and the reconstructed angular velocity data are mixed and sequenced, so that the two data are more fused, and the generated historical action matrix can simultaneously characterize the characteristics of the reconstructed acceleration data and the reconstructed angular velocity data. And screening the first reconstructed motion data subset by using the second mapping characteristic value, screening the second reconstructed motion data subset by using the first mapping characteristic value, and performing cross screening on the two mapping characteristic values to remove redundant data to the greatest extent so as to avoid that the characteristic values of the respective subsets are greatly influenced by the elements of the respective subsets and noise data cannot be effectively removed. And arranging the data in the latest reconstructed motion data set to generate a current historical motion matrix.
In the embodiment of the present invention, in S3, a specific method for generating the current action matrix is as follows: taking the average value between the current acceleration data and the current angular velocity data as an element of a current action matrix, filling the rest elements with 1, and generating the current action matrix.
Only the current acceleration data and the current angular velocity data are contained in the current motion data set. In S4, the current action matrix needs to be multiplied by the historical action matrix, so that the number of columns of the current action matrix is equal to the number of rows of the historical action matrix, an average value of the two data is taken as one of the elements of the current action matrix, and the other elements are supplemented by 1.
In the embodiment of the present invention, S4 includes the following steps:
s41, fusing the historical action matrix and the current action matrix to generate a fusion feature matrix;
s42, extracting fusion characteristic coefficients of the fusion characteristic matrix;
s43, extracting characteristic values of a historical action matrix and characteristic values of a current action matrix;
s44, judging whether the average value between the characteristic value of the historical action matrix and the characteristic value of the current action matrix is larger than the fusion characteristic coefficient of the fusion characteristic matrix, if so, entering S45, otherwise, entering S46;
s45, taking a historical action matrix and a current action matrix as input of a convolutional neural network to generate training actions;
s46, performing integral operation on the current acceleration data, the current angular velocity data and the period of collecting the current motion data set to generate training actions.
The historical action matrix and the current action matrix are fused, so that the relevance of the fusion characteristic coefficients can be improved, and the situation that the fusion characteristic coefficients have obvious differences is avoided. S44, judging the average value of the characteristic values, and when the average value between the characteristic values of the historical action matrix and the characteristic values of the current action matrix is larger than the characteristic of the fusion characteristic matrix, indicating that the training action of the user is an instantaneous action; when the average value between the characteristic value of the historical action matrix and the characteristic value of the current action matrix is smaller than or equal to the characteristic of the fusion characteristic matrix, the training action of the user is represented as continuous action.
When the training action of the user is instantaneous action, the two action matrixes are input into the convolutional neural network to perform operations such as rolling and pooling, and the support vector machine classifier of the convolutional neural network can finish human body posture estimation. When the training action of the user is continuous action, the data is integrated to generate the training action.
In the embodiment of the present invention, in S41, the feature matrix is fusedZThe calculation formula of (2) is as follows:
;
in the method, in the process of the invention,Xthe historical action matrix is represented and is displayed,Yrepresenting the current action matrix of the device,representing the hadamard product operation.
In the embodiment of the present invention, in S42, the feature coefficients of the feature matrix are fuseddThe calculation formula of (2) is as follows:
;
in the method, in the process of the invention,Xthe historical action matrix is represented and is displayed,Yrepresenting the current action matrix of the device,λ 1 the eigenvalues of the historical action matrix are represented,λ 2 representing the eigenvalues of the current action matrix,Irepresenting the identity matrix of the cell, Z represents the fusion feature matrix, I.I 2 Representing a second order norm operation.
Based on the above method, the invention also provides a training action recognition system, as shown in fig. 2, comprising a data acquisition unit, a data reconstruction unit, an action matrix generation unit and a training action recognition unit;
the data acquisition unit is used for acquiring a current motion data set and a historical motion data set of a user by using the three-axis sensor; the current motion data set comprises current acceleration data and current angular velocity data at the current moment, and the historical motion data set comprises historical acceleration data and historical angular velocity data at each historical moment;
the data reconstruction unit is used for reconstructing the historical acceleration data and the historical angular velocity data at each historical moment to generate corresponding reconstructed acceleration data and reconstructed angular velocity data;
the action matrix generation unit is used for generating a historical action matrix according to the reconstructed acceleration data and the reconstructed angular velocity data of each historical moment and generating a current action matrix according to the current acceleration data and the current angular velocity data;
the training action recognition unit is used for recognizing training actions according to the historical action matrix and the current action matrix.
Those of ordinary skill in the art will recognize that the embodiments described herein are for the purpose of aiding the reader in understanding the principles of the present invention and should be understood that the scope of the invention is not limited to such specific statements and embodiments. Those of ordinary skill in the art can make various other specific modifications and combinations from the teachings of the present disclosure without departing from the spirit thereof, and such modifications and combinations remain within the scope of the present disclosure.
Claims (5)
1. A training motion recognition method, comprising the steps of:
s1, acquiring a current motion data set and a historical motion data set of a user by using a three-axis sensor; the current motion data set comprises current acceleration data and current angular velocity data at the current moment, and the historical motion data set comprises historical acceleration data and historical angular velocity data at each historical moment;
s2, reconstructing the historical acceleration data and the historical angular velocity data at each historical moment to generate corresponding reconstructed acceleration data and reconstructed angular velocity data;
s3, generating a historical action matrix according to the reconstructed acceleration data and the reconstructed angular velocity data of each historical moment, and generating a current action matrix according to the current acceleration data and the current angular velocity data;
s4, recognizing training actions according to the historical action matrix and the current action matrix;
the step S2 comprises the following substeps:
s21, extracting characteristic coefficients of historical acceleration data and characteristic coefficients of historical angular velocity data at each historical moment;
s22, generating a historical data dictionary matrix according to the characteristic coefficient of the historical acceleration data and the characteristic coefficient of the historical angular velocity data;
s23, performing sparse decomposition on the historical data dictionary matrix by using a characteristic symbol search algorithm to generate sparse coefficients;
s24, taking the product of the historical acceleration data and the sparse coefficient as fitting acceleration data, and taking the product of the historical angular velocity data and the sparse coefficient as fitting angular velocity data;
s25, respectively carrying out normalization processing on the fitting acceleration data and the fitting angular velocity data to generate reconstruction acceleration data and reconstruction angular velocity data;
in the S21, characteristic coefficients of the historical acceleration dataa 0 The calculation formula of (2) is as follows:
;
in the method, in the process of the invention,Nrepresents the total number of historical time instants,a xn represent the firstnTime of historyxThe historical acceleration of the direction is used to determine,a x n(-1) represent the firstn-1 historic momentxThe historical acceleration of the direction is used to determine,a x n(+1) represent the firstn+1 historic momentsxThe historical acceleration of the direction is used to determine,a yn represent the firstnTime of historyyThe historical acceleration of the direction is used to determine,a y n(-1) represent the firstn-1 historic momentyThe historical acceleration of the direction is used to determine,a y n(+1) represent the firstn+1 historic momentsyThe historical acceleration of the direction is used to determine,a zn represent the firstnTime of historyzThe historical acceleration of the direction is used to determine,a z n(-1) represent the firstn-1 historic momentzThe historical acceleration of the direction is used to determine,a z n(+1) represent the firstn+1 historic momentszThe historical acceleration of the direction is used to determine,T 1 representing the period over which the historical acceleration data is collected,a xmax representation ofxThe maximum historical acceleration of the direction is,a ymax representation ofyThe maximum historical acceleration of the direction is,a zmax representation ofzMaximum historical acceleration of direction;
in the S21, characteristic coefficients of the historical angular velocity dataw 0 The calculation formula of (2) is as follows:
;
in the method, in the process of the invention,w xn represent the firstnTime of historyxThe historical angular velocity of the direction is set,w x n(-1) represent the firstn-1 historic momentxThe historical angular velocity of the direction is set,w x n(+1) represent the firstn+1 historic momentsxThe historical angular velocity of the direction is set,w yn represent the firstnTime of historyyThe historical angular velocity of the direction is set,w y n(-1) represent the firstn-1 historic momentyThe historical angular velocity of the direction is set,w y n(+1) represent the firstn+1 historic momentsyThe historical angular velocity of the direction is set,w zn represent the firstnTime of historyzThe historical angular velocity of the direction is set,w z n(-1) represent the firstn-1 historic momentzThe historical angular velocity of the direction is set,w z n(+1) represent the firstn+1 historic momentszThe historical angular velocity of the direction is set,T 2 representing the period over which the user's historical angular velocity data is collected,w xmax representation ofxThe maximum historical angular velocity of the direction,w ymax representation ofyThe maximum historical angular velocity of the direction,w zmax representation ofzMaximum historical angular velocity of direction;
the step S4 comprises the following steps:
s41, fusing the historical action matrix and the current action matrix to generate a fusion feature matrix;
s42, extracting fusion characteristic coefficients of the fusion characteristic matrix;
s43, extracting characteristic values of a historical action matrix and characteristic values of a current action matrix;
s44, judging whether the average value between the characteristic value of the historical action matrix and the characteristic value of the current action matrix is larger than the fusion characteristic coefficient of the fusion characteristic matrix, if so, entering S45, otherwise, entering S46;
s45, taking a historical action matrix and a current action matrix as input of a convolutional neural network to generate training actions;
s46, performing integral operation on the current acceleration data, the current angular velocity data and the period of collecting the current motion data set to generate training actions;
the training action recognition method is realized by a training action recognition system, and the system comprises a data acquisition unit, a data reconstruction unit, an action matrix generation unit and a training action recognition unit;
the data acquisition unit is used for acquiring a current motion data set and a historical motion data set of a user by using the three-axis sensor; the current motion data set comprises current acceleration data and current angular velocity data at the current moment, and the historical motion data set comprises historical acceleration data and historical angular velocity data at each historical moment;
the data reconstruction unit is used for reconstructing the historical acceleration data and the historical angular velocity data at each historical moment to generate corresponding reconstructed acceleration data and reconstructed angular velocity data;
the action matrix generation unit is used for generating a historical action matrix according to the reconstructed acceleration data and the reconstructed angular velocity data of each historical moment and generating a current action matrix according to the current acceleration data and the current angular velocity data;
the training action recognition unit is used for recognizing training actions according to the historical action matrix and the current action matrix.
2. The training action recognition method according to claim 1, wherein in S3, the specific method for generating the historical action matrix is as follows: sequentially sequencing the reconstructed acceleration data and the reconstructed angular velocity data at all historical moments from large to small to generate a reconstructed motion data set, equally dividing the reconstructed motion data set into a first reconstructed motion data subset and a second reconstructed motion data subset, respectively calculating a first mapping characteristic value of the first reconstructed motion data subset and a second mapping characteristic value of the second reconstructed motion data subset, removing data smaller than the second mapping characteristic value in the first reconstructed motion data subset, removing data smaller than the first mapping characteristic value in the second reconstructed motion data subset, generating a latest reconstructed motion data set, and generating a current historical motion matrix according to the latest reconstructed motion data;
first mapping eigenvalues of the first reconstructed motion data subsetc 1 The calculation formula of (2) is as follows:
;
in the method, in the process of the invention,σ 1 representing the standard deviation of all data in the first reconstructed motion data subset,Cthe constant is represented by a value that is a function of,A m representing the first reconstructed motion data subsetmThe data of the plurality of data,A m-1 representing the first reconstructed motion data subsetmThe number of data to be processed is-1,Mthe number of data representing the first reconstructed motion data subset,μ ave-1 representing the mean of all reconstructed acceleration data in the first subset of reconstructed motion data,μ ave-2 representing a mean value of all reconstructed angular velocity data in the first subset of reconstructed motion data;
second mapping feature values of the second subset of reconstructed motion datac 2 The calculation formula of (2) is as follows:
;
in the method, in the process of the invention,σ 2 representing the standard deviation of all data in the second reconstructed motion data subset,B k representing the first in the second reconstructed motion data subsetkThe data of the plurality of data,B k-1 representing the first in the second reconstructed motion data subsetkThe number of data to be processed is-1,Kthe number of data representing the second subset of reconstructed motion data,μ ave-3 representing the mean of all reconstructed acceleration data in the second subset of reconstructed motion data,μ ave-4 representing the mean of all reconstructed angular velocity data in the second subset of reconstructed motion data.
3. The training action recognition method according to claim 1, wherein in S3, the specific method for generating the current action matrix is: taking the average value between the current acceleration data and the current angular velocity data as an element of a current action matrix, filling the rest elements with 1, and generating the current action matrix.
4. The training motion recognition method according to claim 1, wherein in S41, feature matrices are fusedZThe calculation formula of (2) is as follows:
;
in the method, in the process of the invention,Xthe historical action matrix is represented and is displayed,Yrepresenting the current action matrix of the device,representing the hadamard product operation.
5. The training motion recognition method according to claim 1, wherein in S42, the feature coefficients of the feature matrix are fuseddThe calculation formula of (2) is as follows:
;
in the method, in the process of the invention,Xthe historical action matrix is represented and is displayed,Yrepresenting the current action matrix of the device,λ 1 the eigenvalues of the historical action matrix are represented,λ 2 representing the eigenvalues of the current action matrix,Irepresenting the identity matrix of the cell, Z represents the fusion feature matrix, I.I 2 Representing a second order norm operation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310855833.5A CN116578910B (en) | 2023-07-13 | 2023-07-13 | Training action recognition method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310855833.5A CN116578910B (en) | 2023-07-13 | 2023-07-13 | Training action recognition method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116578910A CN116578910A (en) | 2023-08-11 |
CN116578910B true CN116578910B (en) | 2023-09-15 |
Family
ID=87534528
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310855833.5A Active CN116578910B (en) | 2023-07-13 | 2023-07-13 | Training action recognition method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116578910B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007334479A (en) * | 2006-06-13 | 2007-12-27 | Advanced Telecommunication Research Institute International | Driving motion analysis apparatus and driving motion analysis method |
CN103984416A (en) * | 2014-06-10 | 2014-08-13 | 北京邮电大学 | Gesture recognition method based on acceleration sensor |
CN105184325A (en) * | 2015-09-23 | 2015-12-23 | 歌尔声学股份有限公司 | Human body action recognition method and mobile intelligent terminal |
WO2017156835A1 (en) * | 2016-03-18 | 2017-09-21 | 深圳大学 | Smart method and system for body building posture identification, assessment, warning and intensity estimation |
CN109886068A (en) * | 2018-12-20 | 2019-06-14 | 上海至玄智能科技有限公司 | Action behavior recognition methods based on exercise data |
CN110096499A (en) * | 2019-04-10 | 2019-08-06 | 华南理工大学 | A kind of the user object recognition methods and system of Behavior-based control time series big data |
CN110245718A (en) * | 2019-06-21 | 2019-09-17 | 南京信息工程大学 | A kind of Human bodys' response method based on joint time-domain and frequency-domain feature |
CN110308795A (en) * | 2019-07-05 | 2019-10-08 | 济南大学 | A kind of dynamic gesture identification method and system |
WO2020192326A1 (en) * | 2019-03-22 | 2020-10-01 | 京东方科技集团股份有限公司 | Method and system for tracking head movement |
CN113671492A (en) * | 2021-07-29 | 2021-11-19 | 西安电子科技大学 | SAMP reconstruction method for forward-looking imaging of maneuvering platform |
CN114218586A (en) * | 2021-12-09 | 2022-03-22 | 杭州数鲲科技有限公司 | Business data intelligent management method and device, electronic equipment and storage medium |
CN114898275A (en) * | 2022-06-08 | 2022-08-12 | 成都航空职业技术学院 | Student activity track analysis method |
-
2023
- 2023-07-13 CN CN202310855833.5A patent/CN116578910B/en active Active
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007334479A (en) * | 2006-06-13 | 2007-12-27 | Advanced Telecommunication Research Institute International | Driving motion analysis apparatus and driving motion analysis method |
CN103984416A (en) * | 2014-06-10 | 2014-08-13 | 北京邮电大学 | Gesture recognition method based on acceleration sensor |
CN105184325A (en) * | 2015-09-23 | 2015-12-23 | 歌尔声学股份有限公司 | Human body action recognition method and mobile intelligent terminal |
WO2017156835A1 (en) * | 2016-03-18 | 2017-09-21 | 深圳大学 | Smart method and system for body building posture identification, assessment, warning and intensity estimation |
CN109886068A (en) * | 2018-12-20 | 2019-06-14 | 上海至玄智能科技有限公司 | Action behavior recognition methods based on exercise data |
WO2020192326A1 (en) * | 2019-03-22 | 2020-10-01 | 京东方科技集团股份有限公司 | Method and system for tracking head movement |
CN110096499A (en) * | 2019-04-10 | 2019-08-06 | 华南理工大学 | A kind of the user object recognition methods and system of Behavior-based control time series big data |
CN110245718A (en) * | 2019-06-21 | 2019-09-17 | 南京信息工程大学 | A kind of Human bodys' response method based on joint time-domain and frequency-domain feature |
CN110308795A (en) * | 2019-07-05 | 2019-10-08 | 济南大学 | A kind of dynamic gesture identification method and system |
CN113671492A (en) * | 2021-07-29 | 2021-11-19 | 西安电子科技大学 | SAMP reconstruction method for forward-looking imaging of maneuvering platform |
CN114218586A (en) * | 2021-12-09 | 2022-03-22 | 杭州数鲲科技有限公司 | Business data intelligent management method and device, electronic equipment and storage medium |
CN114898275A (en) * | 2022-06-08 | 2022-08-12 | 成都航空职业技术学院 | Student activity track analysis method |
Non-Patent Citations (4)
Title |
---|
A wearable-based posture recognition system with AI-assisted approach for healthcare IoT;Zhen Hong等;《Future Generation Computer Systems》;第127卷;第286-296页 * |
基于压缩感知的移动用户行为识别方法;宋辉 等;《计算机科学》;第44卷(第2期);第313-316页 * |
面向低功耗体域网的人体运动模式远程识别技术研究;徐海东;《中国优秀硕士学位论文全文数据库 信息科技辑》(第6期);第I140-83页 * |
高职院校足球队管理模式的实践研究——以成都航空职业技术学院为例;陈文正 等;《运动精品》;第37卷(第9期);第99-100页 * |
Also Published As
Publication number | Publication date |
---|---|
CN116578910A (en) | 2023-08-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Song et al. | Constructing stronger and faster baselines for skeleton-based action recognition | |
Batool et al. | Sensors technologies for human activity analysis based on SVM optimized by PSO algorithm | |
Huang et al. | Shallow convolutional neural networks for human activity recognition using wearable sensors | |
Frank et al. | Activity and gait recognition with time-delay embeddings | |
CN111666857A (en) | Human behavior recognition method and device based on environment semantic understanding and storage medium | |
CN111814719A (en) | Skeleton behavior identification method based on 3D space-time diagram convolution | |
CN110333783B (en) | Irrelevant gesture processing method and system for robust electromyography control | |
CN111291865B (en) | Gait recognition method based on convolutional neural network and isolated forest | |
CN109447128B (en) | Micro-inertia technology-based walking and stepping in-place movement classification method and system | |
Rao et al. | Neural network classifier for continuous sign language recognition with selfie video | |
CN111753683A (en) | Human body posture identification method based on multi-expert convolutional neural network | |
Ahmad et al. | Multidomain multimodal fusion for human action recognition using inertial sensors | |
CN115238796A (en) | Motor imagery electroencephalogram signal classification method based on parallel DAMSCN-LSTM | |
Zhang et al. | Human deep squat detection method based on MediaPipe combined with Yolov5 network | |
Han et al. | The effect of axis-wise triaxial acceleration data fusion in cnn-based human activity recognition | |
CN116578910B (en) | Training action recognition method and system | |
Jain et al. | Privacy-Preserving Human Activity Recognition System for Assisted Living Environments | |
Cai et al. | Construction worker ergonomic assessment via LSTM-based multi-task learning framework | |
Shah et al. | Real-time facial emotion recognition | |
CN117152845A (en) | Human behavior recognition method based on graph neural network and attention mechanism | |
Vo et al. | Dynamic gesture classification for Vietnamese sign language recognition | |
CN114916928B (en) | Human body posture multichannel convolutional neural network detection method | |
CN115205750B (en) | Motion real-time counting method and system based on deep learning model | |
Zhang et al. | An Improved Deep Convolutional LSTM for Human Activity Recognition Using Wearable Sensors | |
Dong et al. | Multi-sensor data fusion using the influence model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |