CN116578910A - Training action recognition method and system - Google Patents

Training action recognition method and system Download PDF

Info

Publication number
CN116578910A
Authority
CN
China
Prior art keywords
historical
data
angular velocity
current
reconstructed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310855833.5A
Other languages
Chinese (zh)
Other versions
CN116578910B (en)
Inventor
陈文正
门正兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Aeronautic Polytechnic
Original Assignee
Chengdu Aeronautic Polytechnic
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Aeronautic Polytechnic filed Critical Chengdu Aeronautic Polytechnic
Priority to CN202310855833.5A priority Critical patent/CN116578910B/en
Publication of CN116578910A publication Critical patent/CN116578910A/en
Application granted granted Critical
Publication of CN116578910B publication Critical patent/CN116578910B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques

Abstract

The invention discloses a training action recognition method and system, belonging to the technical field of action recognition. The training action recognition method comprises the following steps: S1, collecting a current motion data set and a historical motion data set of a user; S2, reconstructing the historical acceleration data and historical angular velocity data at each historical moment to generate corresponding reconstructed acceleration data and reconstructed angular velocity data; S3, generating a historical action matrix according to the reconstructed acceleration data and reconstructed angular velocity data of each historical moment, and generating a current action matrix according to the current acceleration data and current angular velocity data; S4, recognizing the training action according to the historical action matrix and the current action matrix. In this method, the motion data at the current moment and the historical moments are collected by a three-axis sensor and effectively reconstructed, which removes redundant parameters from the motion data, avoids the influence of noise on training action recognition, and makes the method suitable for recognition in complex scenes.

Description

Training action recognition method and system
Technical Field
The invention belongs to the technical field of action recognition, and particularly relates to a training action recognition method and system.
Background
With the development of sensor technology, action recognition has become a hotspot of interest and has been widely used in fields such as sports and motor skills training. For example, when a user wears sports equipment fitted with a multi-axis sensor, the equipment collects motion data and transmits it to an action computing device, which may be built into an intelligent terminal such as a mobile phone or tablet computer. After the action computing device analyses the motion data transmitted by the data collecting device, the position and posture information of the user during movement can be obtained, providing a basis for the user to share data, obtain action guidance and so on.
However, conventional action recognition algorithms based on convolutional neural networks are often limited by the precision of motion capture and find it difficult to achieve high accuracy.
Disclosure of Invention
To solve the above problems, the invention provides a training action recognition method and system.
The technical scheme of the invention is as follows: a training action recognition method comprises the following steps:
S1, acquiring a current motion data set and a historical motion data set of a user by using a three-axis sensor; the current motion data set comprises current acceleration data and current angular velocity data at the current moment, and the historical motion data set comprises historical acceleration data and historical angular velocity data at each historical moment;
S2, reconstructing the historical acceleration data and the historical angular velocity data at each historical moment to generate corresponding reconstructed acceleration data and reconstructed angular velocity data;
S3, generating a historical action matrix according to the reconstructed acceleration data and the reconstructed angular velocity data of each historical moment, and generating a current action matrix according to the current acceleration data and the current angular velocity data;
S4, recognizing training actions according to the historical action matrix and the current action matrix.
Further, S2 comprises the following sub-steps:
S21, extracting characteristic coefficients of historical acceleration data and characteristic coefficients of historical angular velocity data at each historical moment;
S22, generating a historical data dictionary matrix according to the characteristic coefficient of the historical acceleration data and the characteristic coefficient of the historical angular velocity data;
S23, performing sparse decomposition on the historical data dictionary matrix by using a feature-sign search algorithm to generate sparse coefficients;
S24, taking the product of the historical acceleration data and the sparse coefficient as fitting acceleration data, and taking the product of the historical angular velocity data and the sparse coefficient as fitting angular velocity data;
S25, respectively carrying out normalization processing on the fitting acceleration data and the fitting angular velocity data to generate reconstructed acceleration data and reconstructed angular velocity data.
Further, in S21, the characteristic coefficient a_0 of the historical acceleration data is calculated by the following formula:
where N represents the total number of historical moments; a_xn, a_yn and a_zn represent the historical accelerations in the x, y and z directions at the n-th historical moment; a_x(n-1), a_y(n-1) and a_z(n-1) represent the historical accelerations in the x, y and z directions at the (n-1)-th historical moment; a_x(n+1), a_y(n+1) and a_z(n+1) represent the historical accelerations in the x, y and z directions at the (n+1)-th historical moment; T_1 represents the period over which the historical acceleration data are collected; and a_xmax, a_ymax and a_zmax represent the maximum historical accelerations in the x, y and z directions.
In S21, the characteristic coefficient w_0 of the historical angular velocity data is calculated by the following formula:
where w_xn, w_yn and w_zn represent the historical angular velocities in the x, y and z directions at the n-th historical moment; w_x(n-1), w_y(n-1) and w_z(n-1) represent the historical angular velocities in the x, y and z directions at the (n-1)-th historical moment; w_x(n+1), w_y(n+1) and w_z(n+1) represent the historical angular velocities in the x, y and z directions at the (n+1)-th historical moment; T_2 represents the period over which the user's historical angular velocity data are collected; and w_xmax, w_ymax and w_zmax represent the maximum historical angular velocities in the x, y and z directions.
Further, in S3, the specific method for generating the historical action matrix is as follows: sorting the reconstructed acceleration data and reconstructed angular velocity data at all historical moments in descending order to generate a reconstructed motion data set; equally dividing the reconstructed motion data set into a first reconstructed motion data subset and a second reconstructed motion data subset; respectively calculating a first mapping characteristic value of the first reconstructed motion data subset and a second mapping characteristic value of the second reconstructed motion data subset; removing the data smaller than the second mapping characteristic value from the first reconstructed motion data subset and removing the data smaller than the first mapping characteristic value from the second reconstructed motion data subset to generate a latest reconstructed motion data set; and generating the historical action matrix from the latest reconstructed motion data set.
Further, the first mapping characteristic value c_1 of the first reconstructed motion data subset is calculated by the following formula:
where σ_1 represents the standard deviation of all data in the first reconstructed motion data subset; C represents a constant; A_m represents the m-th data in the first reconstructed motion data subset; A_(m-1) represents the (m-1)-th data in the first reconstructed motion data subset; M represents the number of data in the first reconstructed motion data subset; μ_ave-1 represents the mean of all reconstructed acceleration data in the first reconstructed motion data subset; and μ_ave-2 represents the mean of all reconstructed angular velocity data in the first reconstructed motion data subset.
The second mapping characteristic value c_2 of the second reconstructed motion data subset is calculated by the following formula:
where σ_2 represents the standard deviation of all data in the second reconstructed motion data subset; B_k represents the k-th data in the second reconstructed motion data subset; B_(k-1) represents the (k-1)-th data in the second reconstructed motion data subset; K represents the number of data in the second reconstructed motion data subset; μ_ave-3 represents the mean of all reconstructed acceleration data in the second reconstructed motion data subset; and μ_ave-4 represents the mean of all reconstructed angular velocity data in the second reconstructed motion data subset.
Further, in S3, the specific method for generating the current action matrix is as follows: taking the average value between the current acceleration data and the current angular velocity data as an element of a current action matrix, filling the rest elements with 1, and generating the current action matrix.
Further, S4 includes the following steps:
S41, fusing the historical action matrix and the current action matrix to generate a fusion feature matrix;
S42, extracting fusion characteristic coefficients of the fusion feature matrix;
S43, extracting characteristic values of a historical action matrix and characteristic values of a current action matrix;
S44, judging whether the average value between the characteristic value of the historical action matrix and the characteristic value of the current action matrix is larger than the fusion characteristic coefficient of the fusion feature matrix, if so, entering S45, otherwise, entering S46;
S45, taking a historical action matrix and a current action matrix as input of a convolutional neural network to generate training actions;
S46, performing integral operation on the current acceleration data, the current angular velocity data and the period of collecting the current motion data set to generate training actions.
Further, in S41, the fusion feature matrix Z is calculated by the following formula:
where X represents the historical action matrix, Y represents the current action matrix, and the operator in the formula denotes the Hadamard product operation.
Further, in S42, the fusion characteristic coefficient d of the fusion feature matrix is calculated by the following formula:
where X represents the historical action matrix, Y represents the current action matrix, λ_1 represents the eigenvalue of the historical action matrix, λ_2 represents the eigenvalue of the current action matrix, I represents the identity matrix, Z represents the fusion feature matrix, and ‖·‖_2 denotes the second-order norm operation.
The beneficial effects of the invention are as follows:
(1) In the training action recognition method, the motion data at the current moment and the historical moments are collected by a three-axis sensor and effectively reconstructed, which removes redundant parameters from the motion data and avoids the influence of noise on training action recognition;
(2) In the training action recognition method, a historical action matrix and a current action matrix are constructed, the two matrices undergo fusion, eigenvalue extraction and other operations, the training action is judged to be an instantaneous action or a continuous action through eigenvalue comparison, and different processing methods are adopted for different types of actions, so that the recognized training action is more accurate and the method can adapt to recognition in complex scenes.
Based on the method, the invention also provides a training action recognition system which comprises a data acquisition unit, a data reconstruction unit, an action matrix generation unit and a training action recognition unit;
the data acquisition unit is used for acquiring a current motion data set and a historical motion data set of a user by using the three-axis sensor; the current motion data set comprises current acceleration data and current angular velocity data at the current moment, and the historical motion data set comprises historical acceleration data and historical angular velocity data at each historical moment;
the data reconstruction unit is used for reconstructing the historical acceleration data and the historical angular velocity data at each historical moment to generate corresponding reconstructed acceleration data and reconstructed angular velocity data;
the action matrix generation unit is used for generating a historical action matrix according to the reconstructed acceleration data and the reconstructed angular velocity data of each historical moment and generating a current action matrix according to the current acceleration data and the current angular velocity data;
the training action recognition unit is used for recognizing training actions according to the historical action matrix and the current action matrix.
The beneficial effects of the invention are as follows: the training action recognition system can recognize different types of actions through the processes of data acquisition, data reconstruction, matrix generation and action recognition, and the accuracy of action recognition is improved.
Drawings
FIG. 1 is a flow chart of a training action recognition method;
fig. 2 is a block diagram of a training motion recognition system.
Detailed Description
Embodiments of the present invention are further described below with reference to the accompanying drawings.
As shown in fig. 1, the present invention provides a training action recognition method, which includes the following steps:
S1, acquiring a current motion data set and a historical motion data set of a user by using a three-axis sensor; the current motion data set comprises current acceleration data and current angular velocity data at the current moment, and the historical motion data set comprises historical acceleration data and historical angular velocity data at each historical moment;
S2, reconstructing the historical acceleration data and the historical angular velocity data at each historical moment to generate corresponding reconstructed acceleration data and reconstructed angular velocity data;
S3, generating a historical action matrix according to the reconstructed acceleration data and the reconstructed angular velocity data of each historical moment, and generating a current action matrix according to the current acceleration data and the current angular velocity data;
S4, recognizing training actions according to the historical action matrix and the current action matrix.
In an embodiment of the present invention, S2 comprises the following sub-steps:
S21, extracting characteristic coefficients of historical acceleration data and characteristic coefficients of historical angular velocity data at each historical moment;
S22, generating a historical data dictionary matrix according to the characteristic coefficient of the historical acceleration data and the characteristic coefficient of the historical angular velocity data;
S23, performing sparse decomposition on the historical data dictionary matrix by using a feature-sign search algorithm to generate sparse coefficients;
S24, taking the product of the historical acceleration data and the sparse coefficient as fitting acceleration data, and taking the product of the historical angular velocity data and the sparse coefficient as fitting angular velocity data;
S25, respectively carrying out normalization processing on the fitting acceleration data and the fitting angular velocity data to generate reconstructed acceleration data and reconstructed angular velocity data.
The advantage of fitting and reconstructing the historical acceleration data and historical angular velocity data at the historical moments is that characteristic values (which represent the data characteristics) can be extracted, the data change trend at the current moment can be predicted (used for constructing the historical action matrix in a subsequent step and comparing it with the current action matrix), and the dimension and noise of the data can be reduced.
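For illustration, one possible implementation of steps S21 to S25 is sketched below in Python; it is not the patent's exact procedure. In particular, the feature-sign search step is approximated by scikit-learn's lasso_lars sparse coder (which solves the same L1-regularised objective), and the dictionary layout, array shapes and function names are assumptions.

```python
# Illustrative sketch of S21-S25 (data reconstruction), not the patent's exact procedure.
# Assumptions: the dictionary is an (n_atoms, 6) array with one column per channel
# [ax, ay, az, wx, wy, wz], and normalisation is per-channel min-max scaling.
import numpy as np
from sklearn.decomposition import SparseCoder

def reconstruct_history(history, dictionary, alpha=0.1):
    """history: (N, 6) array of historical [ax, ay, az, wx, wy, wz] samples."""
    coder = SparseCoder(dictionary=dictionary,
                        transform_algorithm="lasso_lars",   # stand-in for feature-sign search (S23)
                        transform_alpha=alpha)
    sparse_codes = coder.transform(history)                  # sparse coefficients (S23)
    fitted = sparse_codes @ dictionary                       # fitted accel./ang. vel. data (S24)
    mins, maxs = fitted.min(axis=0), fitted.max(axis=0)      # per-channel normalisation (S25)
    return (fitted - mins) / np.where(maxs > mins, maxs - mins, 1.0)
```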
In the embodiment of the present invention, in S21, the characteristic coefficient a_0 of the historical acceleration data is calculated by the following formula:
where N represents the total number of historical moments; a_xn, a_yn and a_zn represent the historical accelerations in the x, y and z directions at the n-th historical moment; a_x(n-1), a_y(n-1) and a_z(n-1) represent the historical accelerations in the x, y and z directions at the (n-1)-th historical moment; a_x(n+1), a_y(n+1) and a_z(n+1) represent the historical accelerations in the x, y and z directions at the (n+1)-th historical moment; T_1 represents the period over which the historical acceleration data are collected; and a_xmax, a_ymax and a_zmax represent the maximum historical accelerations in the x, y and z directions.
In S21, the characteristic coefficient w_0 of the historical angular velocity data is calculated by the following formula:
where w_xn, w_yn and w_zn represent the historical angular velocities in the x, y and z directions at the n-th historical moment; w_x(n-1), w_y(n-1) and w_z(n-1) represent the historical angular velocities in the x, y and z directions at the (n-1)-th historical moment; w_x(n+1), w_y(n+1) and w_z(n+1) represent the historical angular velocities in the x, y and z directions at the (n+1)-th historical moment; T_2 represents the period over which the user's historical angular velocity data are collected; and w_xmax, w_ymax and w_zmax represent the maximum historical angular velocities in the x, y and z directions.
The acceleration differences in the three directions between the current historical moment and the previous historical moment and between the next historical moment and the current historical moment are calculated and used as the first factor of the characteristic coefficient; this factor is obtained from the acceleration values at the current historical moment and the adjacent historical moments and characterizes the change of the acceleration data at each historical moment. The maximum historical acceleration values in the three directions are calculated as the second factor of the characteristic coefficient; this factor characterizes the maximum amplitude change of the acceleration data over the historical moments. The product of the first factor and the second factor is taken as the characteristic coefficient, which characterizes the amplitude variation of the acceleration value at each moment. The characteristic coefficient of the historical angular velocity data is obtained in the same way.
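As an illustration of the two factors described above, the sketch below computes a characteristic coefficient from a window of tri-axial data. Because the patent's exact formula is not reproduced in this text, the way the factors are aggregated and combined (summed absolute differences divided by the collection period, multiplied by the product of per-axis maxima) is an assumption.

```python
# Illustrative computation of a characteristic coefficient; the aggregation below is an
# assumption, not the patent's exact formula.
import numpy as np

def characteristic_coefficient(signal, period):
    """signal: (N, 3) historical acceleration (or angular velocity) in x, y, z; period: collection period T."""
    diffs = np.diff(signal, axis=0)                    # differences between adjacent historical moments
    first_factor = np.abs(diffs).sum() / period        # first factor: aggregated change per unit time
    second_factor = np.abs(signal).max(axis=0).prod()  # second factor: product of per-axis maxima
    return first_factor * second_factor
```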
In the embodiment of the present invention, in S3, the specific method for generating the historical action matrix is as follows: sorting the reconstructed acceleration data and reconstructed angular velocity data at all historical moments in descending order to generate a reconstructed motion data set; equally dividing the reconstructed motion data set into a first reconstructed motion data subset and a second reconstructed motion data subset; respectively calculating a first mapping characteristic value of the first reconstructed motion data subset and a second mapping characteristic value of the second reconstructed motion data subset; removing the data smaller than the second mapping characteristic value from the first reconstructed motion data subset and removing the data smaller than the first mapping characteristic value from the second reconstructed motion data subset to generate a latest reconstructed motion data set; and generating the historical action matrix from the latest reconstructed motion data set.
In an embodiment of the invention, the first mapping characteristic value c_1 of the first reconstructed motion data subset is calculated by the following formula:
where σ_1 represents the standard deviation of all data in the first reconstructed motion data subset; C represents a constant; A_m represents the m-th data in the first reconstructed motion data subset; A_(m-1) represents the (m-1)-th data in the first reconstructed motion data subset; M represents the number of data in the first reconstructed motion data subset; μ_ave-1 represents the mean of all reconstructed acceleration data in the first reconstructed motion data subset; and μ_ave-2 represents the mean of all reconstructed angular velocity data in the first reconstructed motion data subset.
The second mapping characteristic value c_2 of the second reconstructed motion data subset is calculated by the following formula:
where σ_2 represents the standard deviation of all data in the second reconstructed motion data subset; B_k represents the k-th data in the second reconstructed motion data subset; B_(k-1) represents the (k-1)-th data in the second reconstructed motion data subset; K represents the number of data in the second reconstructed motion data subset; μ_ave-3 represents the mean of all reconstructed acceleration data in the second reconstructed motion data subset; and μ_ave-4 represents the mean of all reconstructed angular velocity data in the second reconstructed motion data subset.
When the reconstructed motion data set is constructed, the reconstructed acceleration data and the reconstructed angular velocity data are mixed and sorted together, so that the two kinds of data are better fused and the generated historical action matrix can simultaneously characterize the features of both. The first reconstructed motion data subset is screened with the second mapping characteristic value and the second reconstructed motion data subset is screened with the first mapping characteristic value; this cross-screening with the two mapping characteristic values removes redundant data to the greatest extent and avoids the situation in which the characteristic value of each subset is dominated by its own elements so that noise data cannot be effectively removed. The data in the latest reconstructed motion data set are then arranged to generate the historical action matrix.
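One possible reading of this construction is sketched below. The mapping characteristic values c_1 and c_2 are replaced by a placeholder statistic (mean plus scaled standard deviation) because their exact formulas are not reproduced in this text; the descending sort, the split into two subsets, the cross-screening and the padding of the matrix follow the description above, and the matrix row count is an assumption.

```python
# Sketch of the historical-action-matrix construction; mapping_value is a placeholder
# for the patent's c1/c2 formulas.
import numpy as np

def mapping_value(subset, c=1.0):
    return subset.mean() + c * subset.std()              # placeholder for c1 / c2

def historical_action_matrix(recon_acc, recon_ang, rows=6):
    data = np.sort(np.concatenate([recon_acc.ravel(), recon_ang.ravel()]))[::-1]  # descending order
    half = len(data) // 2
    first, second = data[:half], data[half:2 * half]      # equal split into two subsets
    c1, c2 = mapping_value(first), mapping_value(second)
    kept = np.concatenate([first[first >= c2], second[second >= c1]])  # cross-screening
    cols = max(1, int(np.ceil(len(kept) / rows)))
    padded = np.pad(kept, (0, rows * cols - len(kept)), constant_values=1.0)
    return padded.reshape(rows, cols)
```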
In the embodiment of the present invention, in S3, a specific method for generating the current action matrix is as follows: taking the average value between the current acceleration data and the current angular velocity data as an element of a current action matrix, filling the rest elements with 1, and generating the current action matrix.
Only the current acceleration data and the current angular velocity data are contained in the current motion data set. In S4, the current action matrix needs to be multiplied by the historical action matrix, so that the number of columns of the current action matrix is equal to the number of rows of the historical action matrix, an average value of the two data is taken as one of the elements of the current action matrix, and the other elements are supplemented by 1.
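A minimal sketch of this padding scheme follows; the target shape of the current action matrix is an assumption chosen so that it can be multiplied with the historical action matrix in S4.

```python
# Sketch of the current-action-matrix construction: the mean of the current readings
# fills one element and the remaining elements are padded with 1.
import numpy as np

def current_action_matrix(cur_acc, cur_ang, shape=(1, 6)):
    mat = np.ones(shape)
    mat[0, 0] = (np.mean(cur_acc) + np.mean(cur_ang)) / 2.0
    return mat
```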
In the embodiment of the present invention, S4 includes the following steps:
S41, fusing the historical action matrix and the current action matrix to generate a fusion feature matrix;
S42, extracting fusion characteristic coefficients of the fusion feature matrix;
S43, extracting characteristic values of a historical action matrix and characteristic values of a current action matrix;
S44, judging whether the average value between the characteristic value of the historical action matrix and the characteristic value of the current action matrix is larger than the fusion characteristic coefficient of the fusion feature matrix, if so, entering S45, otherwise, entering S46;
S45, taking a historical action matrix and a current action matrix as input of a convolutional neural network to generate training actions;
S46, performing integral operation on the current acceleration data, the current angular velocity data and the period of collecting the current motion data set to generate training actions.
Fusing the historical action matrix and the current action matrix improves the relevance of the fusion characteristic coefficient and avoids obvious differences in the fusion characteristic coefficient. S44 compares the average of the characteristic values: when the average value between the characteristic value of the historical action matrix and the characteristic value of the current action matrix is larger than the fusion characteristic coefficient of the fusion feature matrix, the training action of the user is an instantaneous action; when this average value is smaller than or equal to the fusion characteristic coefficient, the training action of the user is a continuous action.
When the training action of the user is an instantaneous action, the two action matrices are input into the convolutional neural network for operations such as convolution and pooling, and the support vector machine classifier of the convolutional neural network completes the human body posture estimation. When the training action of the user is a continuous action, the data are integrated to generate the training action.
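The decision logic of S44 to S46 could be organised as in the sketch below. Several points are assumptions: the characteristic value of each (possibly non-square) matrix is taken as its largest singular value, the fusion characteristic coefficient d is supplied by the caller, cnn_classifier stands for a trained convolutional network with a support-vector-machine head, and the integral operation for continuous actions is a simple rectangle-rule integration over a short window of current readings.

```python
# Sketch of the decision step S44-S46 under the assumptions stated above.
import numpy as np

def recognise(hist_mat, cur_mat, fusion_coeff, cnn_classifier, cur_acc, cur_ang, period):
    lam1 = np.linalg.svd(hist_mat, compute_uv=False)[0]   # characteristic value of historical matrix
    lam2 = np.linalg.svd(cur_mat, compute_uv=False)[0]    # characteristic value of current matrix
    if (lam1 + lam2) / 2.0 > fusion_coeff:
        return cnn_classifier(hist_mat, cur_mat)          # instantaneous action (S45)
    velocity = cur_acc.sum(axis=0) * period               # continuous action (S46): integrate readings
    angle = cur_ang.sum(axis=0) * period
    return {"velocity": velocity, "angle": angle}
```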
In the embodiment of the present invention, in S41, the fusion feature matrix Z is calculated by the following formula:
where X represents the historical action matrix, Y represents the current action matrix, and the operator in the formula denotes the Hadamard product operation.
In the embodiment of the present invention, in S42, the fusion characteristic coefficient d of the fusion feature matrix is calculated by the following formula:
where X represents the historical action matrix, Y represents the current action matrix, λ_1 represents the eigenvalue of the historical action matrix, λ_2 represents the eigenvalue of the current action matrix, I represents the identity matrix, Z represents the fusion feature matrix, and ‖·‖_2 denotes the second-order norm operation.
Based on the above method, the invention also provides a training action recognition system, as shown in fig. 2, comprising a data acquisition unit, a data reconstruction unit, an action matrix generation unit and a training action recognition unit;
the data acquisition unit is used for acquiring a current motion data set and a historical motion data set of a user by using the three-axis sensor; the current motion data set comprises current acceleration data and current angular velocity data at the current moment, and the historical motion data set comprises historical acceleration data and historical angular velocity data at each historical moment;
the data reconstruction unit is used for reconstructing the historical acceleration data and the historical angular velocity data at each historical moment to generate corresponding reconstructed acceleration data and reconstructed angular velocity data;
the action matrix generation unit is used for generating a historical action matrix according to the reconstructed acceleration data and the reconstructed angular velocity data of each historical moment and generating a current action matrix according to the current acceleration data and the current angular velocity data;
the training action recognition unit is used for recognizing training actions according to the historical action matrix and the current action matrix.
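Expressed as code, the four units could be composed as in the following sketch; the class and method names are illustrative assumptions, not part of the patent.

```python
# Minimal composition of the four units described above.
class TrainingActionRecognitionSystem:
    def __init__(self, acquisition, reconstruction, matrix_builder, recogniser):
        self.acquisition = acquisition        # data acquisition unit (three-axis sensor)
        self.reconstruction = reconstruction  # data reconstruction unit
        self.matrix_builder = matrix_builder  # action matrix generation unit
        self.recogniser = recogniser          # training action recognition unit

    def run(self):
        current, history = self.acquisition.collect()
        recon_acc, recon_ang = self.reconstruction.reconstruct(history)
        hist_mat, cur_mat = self.matrix_builder.build(recon_acc, recon_ang, current)
        return self.recogniser.recognise(hist_mat, cur_mat)
```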
Those of ordinary skill in the art will recognize that the embodiments described herein are intended to help the reader understand the principles of the present invention, and it should be understood that the scope of the invention is not limited to these specific statements and embodiments. Those of ordinary skill in the art can make various other specific modifications and combinations based on the teachings of the present disclosure without departing from its spirit, and such modifications and combinations remain within the scope of the present disclosure.

Claims (10)

1. A training motion recognition method, comprising the steps of:
S1, acquiring a current motion data set and a historical motion data set of a user by using a three-axis sensor; the current motion data set comprises current acceleration data and current angular velocity data at the current moment, and the historical motion data set comprises historical acceleration data and historical angular velocity data at each historical moment;
S2, reconstructing the historical acceleration data and the historical angular velocity data at each historical moment to generate corresponding reconstructed acceleration data and reconstructed angular velocity data;
S3, generating a historical action matrix according to the reconstructed acceleration data and the reconstructed angular velocity data of each historical moment, and generating a current action matrix according to the current acceleration data and the current angular velocity data;
S4, recognizing training actions according to the historical action matrix and the current action matrix.
2. The training action recognition method according to claim 1, wherein S2 comprises the sub-steps of:
S21, extracting characteristic coefficients of historical acceleration data and characteristic coefficients of historical angular velocity data at each historical moment;
S22, generating a historical data dictionary matrix according to the characteristic coefficient of the historical acceleration data and the characteristic coefficient of the historical angular velocity data;
S23, performing sparse decomposition on the historical data dictionary matrix by using a feature-sign search algorithm to generate sparse coefficients;
S24, taking the product of the historical acceleration data and the sparse coefficient as fitting acceleration data, and taking the product of the historical angular velocity data and the sparse coefficient as fitting angular velocity data;
S25, respectively carrying out normalization processing on the fitting acceleration data and the fitting angular velocity data to generate reconstructed acceleration data and reconstructed angular velocity data.
3. The training motion recognition method according to claim 2, wherein in S21, the characteristic coefficient a_0 of the historical acceleration data is calculated by the following formula:
where N represents the total number of historical moments; a_xn, a_yn and a_zn represent the historical accelerations in the x, y and z directions at the n-th historical moment; a_x(n-1), a_y(n-1) and a_z(n-1) represent the historical accelerations in the x, y and z directions at the (n-1)-th historical moment; a_x(n+1), a_y(n+1) and a_z(n+1) represent the historical accelerations in the x, y and z directions at the (n+1)-th historical moment; T_1 represents the period over which the historical acceleration data are collected; and a_xmax, a_ymax and a_zmax represent the maximum historical accelerations in the x, y and z directions.
In S21, the characteristic coefficient w_0 of the historical angular velocity data is calculated by the following formula:
where w_xn, w_yn and w_zn represent the historical angular velocities in the x, y and z directions at the n-th historical moment; w_x(n-1), w_y(n-1) and w_z(n-1) represent the historical angular velocities in the x, y and z directions at the (n-1)-th historical moment; w_x(n+1), w_y(n+1) and w_z(n+1) represent the historical angular velocities in the x, y and z directions at the (n+1)-th historical moment; T_2 represents the period over which the user's historical angular velocity data are collected; and w_xmax, w_ymax and w_zmax represent the maximum historical angular velocities in the x, y and z directions.
4. The training action recognition method according to claim 1, wherein in S3, the specific method for generating the historical action matrix is as follows: sorting the reconstructed acceleration data and reconstructed angular velocity data at all historical moments in descending order to generate a reconstructed motion data set; equally dividing the reconstructed motion data set into a first reconstructed motion data subset and a second reconstructed motion data subset; respectively calculating a first mapping characteristic value of the first reconstructed motion data subset and a second mapping characteristic value of the second reconstructed motion data subset; removing the data smaller than the second mapping characteristic value from the first reconstructed motion data subset and removing the data smaller than the first mapping characteristic value from the second reconstructed motion data subset to generate a latest reconstructed motion data set; and generating the historical action matrix from the latest reconstructed motion data set.
5. The training action recognition method of claim 4, wherein the first mapping characteristic value c_1 of the first reconstructed motion data subset is calculated by the following formula:
where σ_1 represents the standard deviation of all data in the first reconstructed motion data subset; C represents a constant; A_m represents the m-th data in the first reconstructed motion data subset; A_(m-1) represents the (m-1)-th data in the first reconstructed motion data subset; M represents the number of data in the first reconstructed motion data subset; μ_ave-1 represents the mean of all reconstructed acceleration data in the first reconstructed motion data subset; and μ_ave-2 represents the mean of all reconstructed angular velocity data in the first reconstructed motion data subset.
The second mapping characteristic value c_2 of the second reconstructed motion data subset is calculated by the following formula:
where σ_2 represents the standard deviation of all data in the second reconstructed motion data subset; B_k represents the k-th data in the second reconstructed motion data subset; B_(k-1) represents the (k-1)-th data in the second reconstructed motion data subset; K represents the number of data in the second reconstructed motion data subset; μ_ave-3 represents the mean of all reconstructed acceleration data in the second reconstructed motion data subset; and μ_ave-4 represents the mean of all reconstructed angular velocity data in the second reconstructed motion data subset.
6. The training action recognition method according to claim 1, wherein in S3, the specific method for generating the current action matrix is: taking the average value between the current acceleration data and the current angular velocity data as an element of a current action matrix, filling the rest elements with 1, and generating the current action matrix.
7. The training action recognition method according to claim 1, wherein S4 comprises the steps of:
S41, fusing the historical action matrix and the current action matrix to generate a fusion feature matrix;
S42, extracting fusion characteristic coefficients of the fusion feature matrix;
S43, extracting characteristic values of a historical action matrix and characteristic values of a current action matrix;
S44, judging whether the average value between the characteristic value of the historical action matrix and the characteristic value of the current action matrix is larger than the fusion characteristic coefficient of the fusion feature matrix, if so, entering S45, otherwise, entering S46;
S45, taking a historical action matrix and a current action matrix as input of a convolutional neural network to generate training actions;
S46, performing integral operation on the current acceleration data, the current angular velocity data and the period of collecting the current motion data set to generate training actions.
8. The training motion recognition method of claim 7, wherein in S41, the fusion feature matrix Z is calculated by the following formula:
where X represents the historical action matrix, Y represents the current action matrix, and the operator in the formula denotes the Hadamard product operation.
9. The training motion recognition method according to claim 7, wherein in S42, the fusion characteristic coefficient d of the fusion feature matrix is calculated by the following formula:
where X represents the historical action matrix, Y represents the current action matrix, λ_1 represents the eigenvalue of the historical action matrix, λ_2 represents the eigenvalue of the current action matrix, I represents the identity matrix, Z represents the fusion feature matrix, and ‖·‖_2 denotes the second-order norm operation.
10. A training action recognition system, characterized by comprising a data acquisition unit, a data reconstruction unit, an action matrix generation unit and a training action recognition unit;
the data acquisition unit is used for acquiring a current motion data set and a historical motion data set of a user by using the three-axis sensor; the current motion data set comprises current acceleration data and current angular velocity data at the current moment, and the historical motion data set comprises historical acceleration data and historical angular velocity data at each historical moment;
the data reconstruction unit is used for reconstructing the historical acceleration data and the historical angular velocity data at each historical moment to generate corresponding reconstructed acceleration data and reconstructed angular velocity data;
the action matrix generation unit is used for generating a historical action matrix according to the reconstructed acceleration data and the reconstructed angular velocity data of each historical moment and generating a current action matrix according to the current acceleration data and the current angular velocity data;
the training action recognition unit is used for recognizing training actions according to the historical action matrix and the current action matrix.
CN202310855833.5A 2023-07-13 2023-07-13 Training action recognition method and system Active CN116578910B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310855833.5A CN116578910B (en) 2023-07-13 2023-07-13 Training action recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310855833.5A CN116578910B (en) 2023-07-13 2023-07-13 Training action recognition method and system

Publications (2)

Publication Number Publication Date
CN116578910A true CN116578910A (en) 2023-08-11
CN116578910B CN116578910B (en) 2023-09-15

Family

ID=87534528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310855833.5A Active CN116578910B (en) 2023-07-13 2023-07-13 Training action recognition method and system

Country Status (1)

Country Link
CN (1) CN116578910B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007334479A (en) * 2006-06-13 2007-12-27 Advanced Telecommunication Research Institute International Driving motion analysis apparatus and driving motion analysis method
CN103984416A (en) * 2014-06-10 2014-08-13 北京邮电大学 Gesture recognition method based on acceleration sensor
CN105184325A (en) * 2015-09-23 2015-12-23 歌尔声学股份有限公司 Human body action recognition method and mobile intelligent terminal
WO2017156835A1 (en) * 2016-03-18 2017-09-21 深圳大学 Smart method and system for body building posture identification, assessment, warning and intensity estimation
CN109886068A (en) * 2018-12-20 2019-06-14 上海至玄智能科技有限公司 Action behavior recognition methods based on exercise data
WO2020192326A1 (en) * 2019-03-22 2020-10-01 京东方科技集团股份有限公司 Method and system for tracking head movement
CN110096499A (en) * 2019-04-10 2019-08-06 华南理工大学 A kind of the user object recognition methods and system of Behavior-based control time series big data
CN110245718A (en) * 2019-06-21 2019-09-17 南京信息工程大学 A kind of Human bodys' response method based on joint time-domain and frequency-domain feature
CN110308795A (en) * 2019-07-05 2019-10-08 济南大学 A kind of dynamic gesture identification method and system
CN113671492A (en) * 2021-07-29 2021-11-19 西安电子科技大学 SAMP reconstruction method for forward-looking imaging of maneuvering platform
CN114218586A (en) * 2021-12-09 2022-03-22 杭州数鲲科技有限公司 Business data intelligent management method and device, electronic equipment and storage medium
CN114898275A (en) * 2022-06-08 2022-08-12 成都航空职业技术学院 Student activity track analysis method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZHEN HONG et al.: "A wearable-based posture recognition system with AI-assisted approach for healthcare IoT", Future Generation Computer Systems, vol. 127, pages 286 - 296 *
宋辉 et al.: "Mobile user behavior recognition method based on compressed sensing", 《计算机科学》 (Computer Science), vol. 44, no. 2, pages 313 - 316 *
徐海东: "Research on remote recognition of human motion patterns for low-power body area networks", 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology), no. 6, pages 140 - 83 *
陈文正 et al.: "A practical study on the football team management model in higher vocational colleges: the case of Chengdu Aeronautic Polytechnic", 《运动精品》, vol. 37, no. 9, pages 99 - 100 *

Also Published As

Publication number Publication date
CN116578910B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
Tao et al. Worker activity recognition in smart manufacturing using IMU and sEMG signals with convolutional neural networks
Batool et al. Sensors technologies for human activity analysis based on SVM optimized by PSO algorithm
CN109472194B (en) Motor imagery electroencephalogram signal feature identification method based on CBLSTM algorithm model
Huang et al. Shallow convolutional neural networks for human activity recognition using wearable sensors
CN111814719B (en) Skeleton behavior recognition method based on 3D space-time diagram convolution
CN111814661B (en) Human body behavior recognition method based on residual error-circulating neural network
CN109447128B (en) Micro-inertia technology-based walking and stepping in-place movement classification method and system
CN111914643A (en) Human body action recognition method based on skeleton key point detection
CN111753683A (en) Human body posture identification method based on multi-expert convolutional neural network
Ahmad et al. Multidomain multimodal fusion for human action recognition using inertial sensors
CN111291865A (en) Gait recognition method based on convolutional neural network and isolated forest
Han et al. The effect of axis-wise triaxial acceleration data fusion in cnn-based human activity recognition
CN116578910B (en) Training action recognition method and system
Zhang et al. Human deep squat detection method based on MediaPipe combined with Yolov5 network
Cai et al. Construction worker ergonomic assessment via LSTM-based multi-task learning framework
CN115205750B (en) Motion real-time counting method and system based on deep learning model
Shah et al. Real-time facial emotion recognition
Dong et al. Multi-sensor data fusion using the influence model
Zhang et al. ATMLP: Attention and Time Series MLP for Fall Detection
CN115690902A (en) Abnormal posture early warning method for body building action
CN115563556A (en) Human body posture prediction method based on intelligent wearable equipment
CN112801283A (en) Neural network model, action recognition method, action recognition device and storage medium
CN114916928B (en) Human body posture multichannel convolutional neural network detection method
Ghimire et al. Classification of EEG Motor Imagery Tasks Utilizing 2D Temporal Patterns with Deep Learning.
Mendez et al. The effects of using a noise filter and feature selection in action recognition: an empirical study

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant