CN114237394B - Motion recognition method, device, equipment and medium - Google Patents

Motion recognition method, device, equipment and medium

Info

Publication number
CN114237394B
CN114237394B (application CN202111523186.5A)
Authority
CN
China
Prior art keywords
sample data
matrix
motion
motion recognition
training sample
Prior art date
Legal status
Active
Application number
CN202111523186.5A
Other languages
Chinese (zh)
Other versions
CN114237394A (en)
Inventor
Name withheld at the inventor's request
Current Assignee
Guangdong Transtek Medical Electronics Co Ltd
Original Assignee
Guangdong Transtek Medical Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Transtek Medical Electronics Co Ltd filed Critical Guangdong Transtek Medical Electronics Co Ltd
Priority to CN202111523186.5A priority Critical patent/CN114237394B/en
Publication of CN114237394A publication Critical patent/CN114237394A/en
Application granted granted Critical
Publication of CN114237394B publication Critical patent/CN114237394B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/011Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The application discloses a motion recognition method, device, equipment and medium, applied in the technical field of motion recognition equipment and used to solve the problem that prior-art motion recognition methods have low recognition accuracy for motions with insignificant limb swing. The method comprises the following steps: collecting PPG data and acceleration data at the same time during a user's motion; and obtaining the user's motion type from the PPG data and acceleration data by using a motion recognition model. Because motion type recognition is performed on PPG data and acceleration data collected at the same time during the user's motion, the accuracy of motion type recognition can be improved and motion types with insignificant limb swing can be recognized; and because recognition uses a motion recognition model obtained by classification training on the PPG data and acceleration data corresponding to each motion type, multiple motion types can be recognized and the efficiency of motion recognition can be improved.

Description

Motion recognition method, device, equipment and medium
Technical Field
The application relates to the technical field of intelligent wearable equipment, in particular to a motion recognition method, a motion recognition device, motion recognition equipment and motion recognition media.
Background
At present, most intelligent wearable equipment has a motion recognition function and can automatically recognize the motion state of a user. However, existing motion recognition methods have low recognition accuracy for motions with insignificant limb swing, and the number of motion categories they can recognize is small.
Disclosure of Invention
The embodiment of the application provides a motion recognition method, device, equipment and medium, which are used to solve the problems that prior-art motion recognition methods have low recognition accuracy for motions with insignificant limb swing and can recognize only a few motion types.
The technical scheme provided by the embodiment of the application is as follows:
In one aspect, an embodiment of the present application provides a motion recognition method, including:
Collecting photoplethysmography (PPG) data and acceleration data at the same time during the motion process of a user;
based on PPG data and acceleration data, a motion recognition model is adopted to obtain the motion type of a user; the motion recognition model is a classification model for recognizing the motion type, which is obtained by performing classification training based on PPG data and acceleration data corresponding to each motion type.
In another aspect, an embodiment of the present application provides a motion recognition apparatus, including:
The data acquisition unit is used for acquiring PPG data and acceleration data at the same time in the motion process of the user;
the motion recognition unit is used for obtaining the motion type of the user by adopting a motion recognition model based on the PPG data and the acceleration data; the motion recognition model is a classification model for recognizing the motion type, which is obtained by performing classification training based on PPG data and acceleration data corresponding to each motion type.
In another aspect, an embodiment of the present application provides a motion recognition apparatus, including: the motion recognition system comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the computer program to realize the motion recognition method provided by the embodiment of the application.
In another aspect, an embodiment of the present application provides a motion recognition apparatus, including: the motion recognition device provided by the embodiment of the application is integrated in the motion recognition device.
On the other hand, the embodiment of the application also provides a computer readable storage medium, and the computer readable storage medium stores computer instructions which are executed by a processor to realize the motion recognition method provided by the embodiment of the application.
The embodiment of the application has the following beneficial effects:
In the embodiment of the application, the motion type recognition is performed based on the PPG data and the acceleration data at the same time in the motion process of the user, so that the accuracy of the motion type recognition can be improved, the motion type with insignificant limb swing can be recognized, and the motion type recognition is performed by adopting the motion recognition model obtained by classifying and training based on the PPG data and the acceleration data corresponding to each motion type, so that various motion types can be recognized, and the efficiency of the motion recognition can be improved.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic flow chart of a training method of a motion recognition model according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of an overview of a motion recognition method according to an embodiment of the application;
FIG. 3 is a schematic diagram of a motion recognition method according to an embodiment of the present application;
FIG. 4 is a schematic functional structure of a motion recognition device according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a hardware structure of a motion recognition device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantageous effects of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In order to facilitate a better understanding of the present application, technical terms related to the present application will be briefly described below.
The motion recognition model is a classification model for recognizing motion types, which is obtained by performing classification training based on PPG data and acceleration data corresponding to each motion type.
The motion recognition device is a device for recognizing the motion type in the motion process of a user, and in the embodiment of the application, the motion recognition device can be a module device integrated in the intelligent wearing device to enable the intelligent wearing device to have model training and motion recognition capability, or can be a background server for providing various services such as motion recognition service, calculation service, database service and the like for the intelligent wearing device. In practical application, when the motion recognition device is a module device integrated in the intelligent wearable device, the intelligent wearable device can realize training of a motion recognition model and motion recognition based on the motion recognition model through the motion recognition device; when the motion recognition device is a background server, the background server can realize training of the motion recognition model, and when the acceleration data and the PPG data which are acquired by the intelligent wearable device and are in the same time in the motion process of the user are received, the motion type corresponding to the acceleration data and the PPG data which are acquired by the intelligent wearable device and in the same time is recognized based on the motion recognition model, and the motion type is returned to the intelligent wearable device for display.
The intelligent wearable device is a wearable intelligent device developed by applying wearable technology. In the embodiment of the application, the intelligent wearable device can be, but is not limited to, a wristband-type motion recognition device, an ear-hanging motion recognition device and the like; for example, the intelligent wearable device can be a smart watch, a smart bracelet, smart glasses, smart clothing and the like.
It is noted that references to "first," "second," etc. in this disclosure are for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order, and it should be understood that such terms may be interchanged where appropriate, such that the embodiments described herein may be practiced in other than those illustrated or described.
After technical terms related to the application are introduced, application scenes and design ideas of the embodiment of the application are briefly introduced.
At present, motion type recognition is generally performed from the acceleration data of a user alone. Such a method has difficulty recognizing motions with insignificant limb movement, and can recognize only a few motion types. Therefore, in the present application, motion type recognition is performed based on PPG data and acceleration data collected at the same time during the user's motion, so that motions with insignificant limb swing can be recognized and the accuracy of motion type recognition can be improved; and because recognition uses a motion recognition model obtained by classification training on the PPG data and acceleration data corresponding to each motion type, multiple motion types can be recognized and the efficiency of motion recognition can be improved.
After the application scenario and the design idea of the embodiment of the present application are introduced, the technical solution provided by the embodiment of the present application is described in detail below.
First, the motion recognition model training method provided by the embodiment of the present application is briefly introduced. The training method can be applied to a motion recognition device; referring to fig. 1, its general flow is as follows:
Step 101: the motion recognition device collects a sample data set; the sample data set comprises a plurality of sample data corresponding to each motion type, and each sample data comprises PPG data and acceleration data acquired at the same time.
In a specific implementation, the motion recognition device may collect, as the sample data set, a large amount of sample data including PPG data and acceleration data at the same time corresponding to each motion type such as standing, walking, going up and down stairs, running, flat riding, high-intensity riding, and the like.
In practical applications, the motion recognition device may collect n sample data in which the ratio of sample data for standing (S), walking (W), going up and down stairs (U&D), running (R), flat riding (B1) and high-intensity riding (B2) is S:W:U&D:R:B1:B2 = 1:1:1:2:2:2. For example, with n = 900 this ratio yields 100 samples each of standing, walking and stair climbing, and 200 samples each of running, flat riding and high-intensity riding. Weighting the sample amounts of running, flat riding and high-intensity riding in this way lets the motion recognition model learn better signal features for those types during the subsequent classification training. Of course, in the embodiment of the present application, the motion recognition device may also collect sample data of each motion type according to other ratios, and the collected motion types may likewise be set according to the actual situation; details are not repeated here.
Step 102: the motion recognition equipment selects part of sample data from the sample data set as training sample data, and performs slicing processing on each training sample data based on a preset data length to obtain at least one training sample data set.
In practical application, the motion recognition device may select a part of sample data from the sample data set as training sample data to perform model training, and in a specific implementation, in order to ensure that the sample feature matrix can be successfully calculated subsequently, the motion recognition device may use a sliding window with a preset data length to perform slicing processing on the training sample data when the total length of each training sample data exceeds the preset data length, so as to obtain each training sample data set.
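For illustration only, a minimal NumPy sketch of this slicing step is shown below; the non-overlapping step size and the example window length are assumptions, since the patent only fixes the window at the preset data length (N+L-1, see formulas (1) and (2) below).

```python
import numpy as np

def slice_into_windows(signal, window_len, step=None):
    """Slice a signal (1-D PPG or (n, 3) acceleration array) into fixed-length
    windows of the preset data length; non-overlapping windows by default."""
    step = step or window_len
    n = signal.shape[0]
    return [signal[s:s + window_len] for s in range(0, n - window_len + 1, step)]

# e.g. a PPG recording of 1000 samples cut into windows of length 127 (= 64 + 64 - 1)
windows = slice_into_windows(np.random.randn(1000), 127)
```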
Step 103: the motion recognition equipment respectively preprocesses the at least one training sample data set to obtain a sample feature matrix of the at least one training sample data set.
In practical applications, the motion recognition device may perform step 103 by, but not limited to, the following ways:
First, the motion recognition device generates, for each of the at least one set of training sample data, a first sample matrix and a second sample matrix based on PPG data of each of the training sample data and acceleration data of each of the training sample data, respectively.
Specifically, in order to convert discrete time-series signals into matrices, in the embodiment of the present application a Hankel matrix may be used as the sample matrix. When a Hankel matrix is used, the property that the elements along each anti-diagonal are equal can be exploited to simplify the subsequent calculation of the sample feature matrix and reduce the amount of matrix-transformation computation. To ensure that the Hankel matrix can be formed exactly, the preset data length may be set to N+L-1, where N is the number of rows and L the number of columns of the Hankel matrix. Accordingly, after slicing each training sample data based on the preset data length N+L-1 to obtain at least one training sample data set, the motion recognition device may take the PPG samples p_i of each training sample data in a training sample data set as the first Hankel matrix elements and generate, using formula (1), a first Hankel matrix H_1 characterizing the PPG data of that training sample data set as the first sample matrix; and take the acceleration samples a_i = (x_i, y_i, z_i)^T (the three-axis coordinates of the i-th acceleration sample) as the second Hankel matrix elements and generate, using formula (2), a second Hankel matrix H_2 characterizing the acceleration data of that training sample data set as the second sample matrix:

H_1 = \begin{bmatrix} p_1 & p_2 & \cdots & p_{L_1} \\ p_2 & p_3 & \cdots & p_{L_1+1} \\ \vdots & \vdots & & \vdots \\ p_{N_1} & p_{N_1+1} & \cdots & p_{N_1+L_1-1} \end{bmatrix}    ... formula (1)

H_2 = \begin{bmatrix} a_1^T & a_2^T & \cdots & a_{L_2}^T \\ a_2^T & a_3^T & \cdots & a_{L_2+1}^T \\ \vdots & \vdots & & \vdots \\ a_{N_2}^T & a_{N_2+1}^T & \cdots & a_{N_2+L_2-1}^T \end{bmatrix}    ... formula (2)

where t denotes the time coordinate, N_1 and L_1 denote the numbers of rows and columns of the first Hankel matrix, N_2 and L_2 denote the numbers of rows and columns of the second Hankel matrix, and T denotes the matrix transpose.
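A NumPy sketch of formulas (1) and (2) follows; the matrix sizes (N = L = 64) and the random placeholder windows are assumptions made for illustration, not values taken from the patent.

```python
import numpy as np

def hankel_matrix(samples, n_rows, n_cols):
    """Arrange a window of length n_rows + n_cols - 1 as a Hankel matrix whose
    (i, j) entry is samples[i + j], as in formulas (1) and (2)."""
    assert samples.shape[0] == n_rows + n_cols - 1
    return np.stack([samples[i:i + n_cols] for i in range(n_rows)])

H1 = hankel_matrix(np.random.randn(127), 64, 64)   # PPG window -> first sample matrix
acc = np.random.randn(127, 3)                      # (x, y, z) acceleration samples
H2 = hankel_matrix(acc, 64, 64)                    # shape (64, 64, 3): one layer per axis
```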
Then, the motion recognition device generates, for each of the at least one training sample data set, a sample feature matrix for the training sample data set based on the first sample matrix and the second sample matrix of the training sample data set.
In a specific implementation, the motion recognition device may perform singular value decomposition on the first sample matrix and the second sample matrix of each of the at least one training sample data set, respectively, to obtain a first singular value matrix of the first sample matrix and a second singular value matrix of the second sample matrix, and then generate the sample feature matrix of the training sample data set based on the first singular value matrix and the second singular value matrix. More specifically, when the first sample matrix and the second sample matrix are Hankel matrices, the motion recognition device may, for each training sample data set of the at least one training sample data set, decompose the first Hankel matrix H_1 and the second Hankel matrix H_2 as H_1 = U_1 Λ_1 V_1^T and H_2 = U_2 Λ_2 V_2^T, where Λ_1 denotes the first singular value matrix with singular value elements λ_i, U_1 and V_1 denote the left and right singular matrices of H_1, Λ_2 denotes the second singular value matrix with singular value elements σ_i, U_2 and V_2 denote the left and right singular matrices of H_2, and T denotes the matrix transpose; the sample feature matrix of the training sample data set is then generated based on the first singular value matrix Λ_1 and the second singular value matrix Λ_2.
It should be noted that, in the embodiment of the present application, for each training sample data set of the at least one training sample data set, when generating the sample feature matrix of the training sample data set based on the first singular value matrix and the second singular value matrix, the motion recognition device may first sort the first singular value matrix and the second singular value matrix in descending order of the singular values, so that after sorting the singular values satisfy condition (1), λ_1 ≥ λ_2 ≥ … ≥ λ_{N_1}, and condition (2), σ_1 ≥ σ_2 ≥ … ≥ σ_{N_2}, respectively. The sample feature matrix of the training sample data set is then generated based on each first singular value in the first singular value matrix and its modulus, a preset weight, and each second singular value in the second singular value matrix and its modulus; specifically, the motion recognition device may generate the sample feature matrix using formula (3):

ξ_t = [ w·λ_1/‖Λ_1‖, …, w·λ_{N_1}/‖Λ_1‖, (1−w)·σ_1/‖Λ_2‖, …, (1−w)·σ_{N_2}/‖Λ_2‖ ]^T    ... formula (3)

where ξ_t denotes the sample feature matrix; λ_i denotes each first singular value and ‖Λ_1‖ denotes the modulus of the first singular values; σ_i denotes each second singular value and ‖Λ_2‖ denotes the modulus of the second singular values; and w denotes the preset weight. In practical application, the value of the preset weight is determined by the quality of the PPG signal acquired by the motion recognition device: the higher the PPG signal quality, the larger the weight.
In the embodiment of the application, when the motion recognition device generates the sample feature matrix, the first singular value matrix and the second singular value matrix are each sorted in descending order of the singular values, so that the singular value features carrying high weight are highlighted, which facilitates the classification training of the motion recognition model.
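The feature construction described above can be sketched as follows; flattening the three acceleration layers before the SVD and the complementary (1 − w) weighting are assumptions consistent with the reconstructed formula (3), and the default w = 0.6 is an arbitrary example value.

```python
import numpy as np

def feature_vector(H1, H2, w=0.6):
    """Sorted, normalized singular values of both Hankel matrices, fused with
    the PPG-quality weight w (reconstructed formula (3))."""
    lam = np.linalg.svd(H1, compute_uv=False)                   # descending order
    sig = np.linalg.svd(H2.reshape(H2.shape[0], -1), compute_uv=False)
    lam = lam / np.linalg.norm(lam)                             # divide by modulus
    sig = sig / np.linalg.norm(sig)
    return np.concatenate([w * lam, (1.0 - w) * sig])
```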
Further, in order to reduce the amount of calculation, in the embodiment of the present application, after the motion recognition device obtains the sample feature matrix of the at least one training sample data set, a preset dimension-reduction method may further be adopted to perform dimension reduction on the sample feature matrix of the at least one training sample data set. In practical application, the preset dimension-reduction method may be, but is not limited to, the linear Karhunen–Loève transform; specifically, the motion recognition device may perform the dimension reduction using the following formula (4):

K_{tQ} = Ψ ξ_t    ... formula (4)

where K_{tQ} denotes the sample feature matrix after dimension reduction, Q denotes the length of the dimension-reduced sample feature matrix, Ψ denotes the Karhunen–Loève transform matrix, and ξ_t denotes the sample feature matrix before dimension reduction.

In the embodiment of the application, the motion recognition device reduces the dimension of the sample feature matrix by the linear Karhunen–Loève transform, which reduces the complexity of model training, avoids unnecessary calculation, and greatly reduces the amount of computation in model training.
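For zero-mean features the Karhunen–Loève transform coincides with principal component analysis, so Ψ can be estimated from the covariance of the training feature vectors; the sketch below, including the reduced length Q = 16 and the placeholder features, is an assumption rather than the patent's prescribed procedure.

```python
import numpy as np

def kl_transform_matrix(features, q=16):
    """Estimate the Karhunen-Loeve transform matrix Psi (q x D) from training
    feature vectors stacked as rows, keeping the q leading eigenvectors."""
    centered = features - features.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :q].T            # top-q eigenvectors as rows

# formula (4): K_tQ = Psi @ xi_t
train_features = np.random.randn(500, 128)      # placeholder xi_t vectors
psi = kl_transform_matrix(train_features)
k_tq = psi @ train_features[0]                  # dimension-reduced feature (length 16)
```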
Step 104: the motion recognition device respectively inputs the at least one sample feature matrix into a motion recognition model to obtain the predicted motion type of each training sample data in the at least one training sample data set.
In practical application, the motion recognition device may use a naive Bayes classifier as the motion recognition model. In a specific implementation, the motion recognition device inputs the sample feature matrix of the at least one training sample data set into the naive Bayes classifier to obtain the predicted motion type of each training sample data in the at least one training sample data set.
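As a hedged illustration of this choice (scikit-learn, the Gaussian variant, and all shapes are assumptions, not details given in the patent), the classifier could be set up as follows:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# One dimension-reduced feature vector K_tQ per row; labels 0..5 stand for the
# six example motion types (standing, walking, stairs, running, B1, B2).
X_train = np.random.randn(600, 16)
y_train = np.random.randint(0, 6, size=600)

model = GaussianNB().fit(X_train, y_train)
predicted_types = model.predict(X_train)        # predicted motion type per sample
class_probs = model.predict_proba(X_train)      # probabilities, used for the loss below
```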
Step 105: the motion recognition device determines a loss value using a loss function based on the predicted motion type and the labeled motion type for each of the training sample data in the at least one training sample data set.
In particular implementations, the motion recognition device may obtain the loss value using a cross entropy loss function based on the predicted motion type and the labeled motion type for each of the at least one set of training sample data.
Step 106: the motion recognition device updates model parameters of the motion recognition model based on the loss value.
In a specific implementation, the cross-entropy loss function measures the degree of difference between the predicted motion type and the labeled motion type of each training sample data in the at least one training sample data set: the smaller the loss value, the higher the motion recognition accuracy. Based on this, in the embodiment of the present application, the motion recognition device may calculate the loss value between the predicted motion type and the labeled motion type of each training sample data using the cross-entropy loss function, and determine whether the loss value meets a preset requirement, such as being less than a preset threshold. When the loss value does not meet the preset requirement, the motion recognition device updates the model parameters of the motion recognition model and repeats steps 104 to 105 until the loss value meets the preset requirement, thereby obtaining the final motion recognition model. In practical application, the preset threshold can be set flexibly according to actual requirements and is not specifically limited here.
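A minimal sketch of this loss check, reusing `class_probs` and `y_train` from the classifier sketch above; the preset threshold of 0.05 is an assumption.

```python
import numpy as np

def cross_entropy(class_probs, labels):
    """Mean cross-entropy between predicted class probabilities and the
    labeled motion types."""
    eps = 1e-12                                 # guard against log(0)
    picked = class_probs[np.arange(labels.shape[0]), labels]
    return float(-np.mean(np.log(picked + eps)))

loss = cross_entropy(class_probs, y_train)
if loss >= 0.05:                                # preset requirement not yet met
    pass                                        # update model parameters, repeat steps 104-105
```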
Further, in order to improve recognition accuracy of the motion recognition model, in the embodiment of the application, the motion recognition device selects part of sample data from the sample data set as training sample data, after model training of the motion recognition model is completed, another part of sample data from the sample data set may be selected as verification sample data, and based on each verification sample data, optimization training is performed on the motion recognition model. In the implementation, the motion recognition device can perform optimization training on the motion recognition model by adopting a K-fold cross validation method based on each validation sample data.
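The K-fold cross-validation step might be sketched as below; K = 5 and the placeholder verification data are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# "Another part of sample data" held out as verification samples (placeholders).
X_val = np.random.randn(300, 16)
y_val = np.random.randint(0, 6, size=300)

scores = cross_val_score(GaussianNB(), X_val, y_val, cv=5)   # K = 5 folds
print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```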
Further, after the motion recognition device completes training and optimizing the motion recognition model, the motion recognition model can be used to recognize the motion type in the motion process of the user, specifically, referring to fig. 2, the general flow of the motion recognition method provided by the embodiment of the application is as follows:
step 201: the motion recognition device collects PPG data and acceleration data at the same time during the user's motion.
In the implementation, the motion recognition device can collect the PPG data of the user through the photosensitive sensor and collect the acceleration data of the user through the acceleration sensor at the same time, so that the collected PPG data and acceleration data belong to the same time.
Step 202: the motion recognition device obtains a motion type of the user by adopting a motion recognition model based on the PPG data and the acceleration data.
In the embodiment of the present application, when the motion recognition device adopts the motion recognition model based on PPG data and acceleration data to obtain the motion type of the user, the following manner may be adopted, but is not limited to:
first, the motion recognition device generates a first Hankel matrix and a second Hankel matrix based on the PPG data and the acceleration data, respectively, using the above formula (1) and formula (2).
Secondly, the motion recognition device performs singular value decomposition on the first Hankel matrix and the second Hankel matrix, respectively, to obtain a first singular value matrix of the first Hankel matrix and a second singular value matrix of the second Hankel matrix.
Then, the motion recognition device sorts the first singular value matrix and the second singular value matrix in descending order of the singular values, so that the singular values in the sorted first and second singular value matrices satisfy condition (1) and condition (2), respectively.
And then, the motion recognition equipment obtains a feature matrix by adopting the formula (3) based on the first singular value matrix and the second singular value matrix after sequencing.
Finally, the motion recognition equipment adopts the formula (4) to perform dimension reduction processing on the feature matrix to obtain a dimension-reduced feature matrix, and inputs the dimension-reduced feature matrix into a motion recognition model to obtain the motion type of the user.
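Chaining the steps above, a single inference pass could look like the following sketch, which reuses the `hankel_matrix`, `feature_vector` and `kl_transform_matrix` sketches and a fitted `model` from the earlier blocks; all of those names and sizes are illustrative assumptions.

```python
def recognize_motion(ppg_window, acc_window, psi, model, w=0.6):
    """Fig. 2 flow: Hankel matrices -> singular-value features (formula (3))
    -> Karhunen-Loeve reduction (formula (4)) -> motion type."""
    H1 = hankel_matrix(ppg_window, 64, 64)        # first Hankel matrix, formula (1)
    H2 = hankel_matrix(acc_window, 64, 64)        # second Hankel matrix, formula (2)
    xi_t = feature_vector(H1, H2, w=w)            # 128-dim fused feature vector
    k_tq = psi @ xi_t                             # dimension-reduced feature
    return model.predict(k_tq.reshape(1, -1))[0]  # predicted motion type of the user
```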
Further, the motion recognition device may further display the motion type, the PPG data, and the acceleration data of the user after obtaining the motion type of the user by using the motion recognition model based on the PPG data and the acceleration data.
The motion recognition method provided by the embodiment of the application is described in further detail below, taking as a specific application scenario a wristband-type intelligent wearable device that recognizes the motion type during the user's motion through a background server running a naive Bayes classifier. Referring to fig. 3, the specific flow of the motion recognition method provided by the embodiment of the application is as follows:
Step 301: the wrist strap type intelligent wearable device collects PPG data of a user through a photosensitive sensor and collects acceleration data of the user through an acceleration sensor in the motion process of the user.
Step 302: the wrist strap type intelligent wearable device sends the collected PPG data and acceleration data at the same time to a background server.
Step 303: when the background server receives the PPG data and acceleration data sent by the wristband-type intelligent wearable device, it generates a first Hankel matrix and a second Hankel matrix using the above formula (1) and formula (2), respectively.
Step 304: the background server performs singular value decomposition on the first Hankel matrix and the second Hankel matrix, respectively, to obtain a first singular value matrix of the first Hankel matrix and a second singular value matrix of the second Hankel matrix.
Step 305: the background server obtains a feature matrix by adopting the formula (3) based on the first singular value matrix and the second singular value matrix, and performs dimension reduction processing on the feature matrix by adopting the formula (4).
Step 306: and the background server inputs the feature matrix after the dimension reduction into a motion recognition model to obtain the motion type of the user.
Step 307: and the background server sends the motion type of the user to the wrist strap type intelligent wearable device.
Step 308: the wrist strap type intelligent wearable device displays the motion type, PPG data and acceleration data of the user.
Based on the foregoing embodiments, an embodiment of the present application provides a motion recognition device, and referring to fig. 4, a motion recognition device 400 provided in an embodiment of the present application at least includes:
The data acquisition unit 401 is configured to acquire PPG data and acceleration data at the same time in a user movement process;
A motion recognition unit 402, configured to obtain a motion type of a user by using a motion recognition model based on PPG data and acceleration data; the motion recognition model is a classification model for recognizing the motion type, which is obtained by performing classification training based on PPG data and acceleration data corresponding to each motion type.
In a possible implementation manner, the motion recognition apparatus 400 provided in the embodiment of the present application further includes:
a data display unit 403 for displaying the motion type, PPG data and acceleration data of the user.
In a possible implementation manner, the motion recognition apparatus 400 provided in the embodiment of the present application further includes:
A model training unit 404 for collecting a sample data set; the sample data set comprises a plurality of sample data corresponding to each motion type, and each sample data comprises PPG data and acceleration data which are acquired at the same time; selecting part of sample data from the sample data set as training sample data, and slicing each training sample data based on a preset data length to obtain at least one training sample data set; respectively preprocessing at least one training sample data set to obtain a sample feature matrix of the at least one training sample data set; respectively inputting at least one sample feature matrix into a motion recognition model to obtain the predicted motion type of each training sample data in at least one training sample data set; determining a loss function value based on the predicted motion type and the labeling motion type of each training sample data in the at least one training sample data set; model parameters of the motion recognition model are updated based on the loss function values.
In one possible implementation, the model training unit 404 is specifically configured to, when acquiring the sample dataset:
Collecting sample data of each motion type according to a preset proportion to obtain a sample data set; wherein the preset ratio characterizes the ratio of the number of sample data for each motion type.
In one possible implementation manner, when preprocessing at least one training sample data set to obtain a sample feature matrix of at least one training sample data set, the model training unit 404 is specifically configured to:
For each training sample data set in the at least one training sample data set, generating a first sample matrix and a second sample matrix based on the PPG data of each training sample data in the training sample data set and the acceleration data of each training sample data, respectively, and generating a sample feature matrix of the training sample data set based on the first sample matrix and the second sample matrix.
In one possible implementation, when generating the first sample matrix and the second sample matrix based on the PPG data of each training sample data in the training sample data set and the acceleration data of each training sample data, the model training unit 404 is specifically configured to:
obtaining first Hankel matrix elements based on the PPG data of each training sample data in the training sample data set, and generating a first Hankel matrix as the first sample matrix based on the first Hankel matrix elements; and obtaining second Hankel matrix elements based on the acceleration data of each training sample data in the training sample data set, and generating a second Hankel matrix as the second sample matrix based on the second Hankel matrix elements.
In one possible implementation, when generating the sample feature matrix of the training sample data set based on the first sample matrix and the second sample matrix, the model training unit 404 is specifically configured to:
Performing singular value decomposition on the first sample matrix and the second sample matrix respectively to obtain a first singular value matrix of the first sample matrix and a second singular value matrix of the second sample matrix; and generating a sample feature matrix of the training sample data set based on the first singular value matrix and the second singular value matrix.
In a possible implementation, before generating the sample feature matrix of the training sample data set based on the first singular value matrix and the second singular value matrix, the model training unit 404 is further configured to:
and sequencing the first singular value matrix and the second singular value matrix according to the sequence of the singular values from big to small.
In one possible implementation, when generating the sample feature matrix of the training sample data set based on the first singular value matrix and the second singular value matrix, the model training unit 404 is specifically configured to:
And generating a sample feature matrix of the training sample data set based on each first singular value in the first singular value matrix, a modulus of each first singular value, a preset weight and each second singular value in the second singular value matrix and a modulus of each second singular value.
In a possible implementation, after generating the sample feature matrix of the training sample data set based on the first singular value matrix and the second singular value matrix, the model training unit 404 is further configured to:
And performing dimension reduction treatment on the sample feature matrix by adopting a preset dimension reduction mode.
In one possible implementation, model training unit 404 is further configured to:
And selecting another part of sample data from the sample data set as verification sample data, and optimally training the motion recognition model based on each verification sample data.
It should be noted that, the principle of the motion recognition device 400 provided in the embodiment of the present application for solving the technical problem is similar to that of the motion recognition method provided in the embodiment of the present application, so that the implementation of the motion recognition device 400 provided in the embodiment of the present application can refer to the implementation of the method provided in the embodiment of the present application, and the repetition is omitted.
After the motion recognition method and the motion recognition device provided by the embodiment of the application are introduced, the motion recognition device provided by the embodiment of the application is briefly introduced.
Referring to fig. 5, a motion recognition device 500 according to an embodiment of the present application includes at least: a processor 501, a memory 502, and a computer program stored in the memory 502 and runnable on the processor 501, wherein the processor 501 implements the motion recognition method provided by the embodiment of the present application when executing the computer program.
The motion recognition device 500 provided by embodiments of the present application may also include a bus 503 that connects the different components, including the processor 501 and the memory 502. Where bus 503 represents one or more of several types of bus structures, including a memory bus, a peripheral bus, a local bus, and so forth.
The Memory 502 may include readable media in the form of volatile Memory, such as random access Memory (Random Access Memory, RAM) 5021 and/or cache Memory 5022, and may further include Read Only Memory (ROM) 5023.
The memory 502 may also include a program tool 5025 having a set (at least one) of program modules 5024, the program modules 5024 including, but not limited to: an operating subsystem, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The motion recognition device 500 may also communicate with one or more external devices 504 (e.g., keyboard, remote control, etc.), with one or more devices that enable a user to interact with the motion recognition device 500 (e.g., cell phone, computer, etc.), and/or with any device that enables the motion recognition device 500 to communicate with one or more other motion recognition devices 500 (e.g., router, modem, etc.). Such communication may occur through an Input/Output (I/O) interface 505. Also, the motion recognition device 500 may communicate with one or more networks (e.g., a local area network (Local Area Network, LAN), a wide area network (Wide Area Network, WAN) and/or a public network, such as the Internet) via the network adapter 506. As shown in fig. 5, the network adapter 506 communicates with other modules of the motion recognition device 500 via the bus 503. It should be appreciated that although not shown in fig. 5, other hardware and/or software modules may be used in connection with the motion recognition device 500, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, redundant arrays of independent disks (Redundant Arrays of Independent Disks, RAID) subsystems, tape drives, data backup storage subsystems, and the like.
It should be noted that the motion recognition device 500 shown in fig. 5 is only an example, and should not be construed as limiting the function and scope of use of the embodiment of the present application.
In addition, the embodiment of the application also provides an intelligent wearable device, which comprises the motion recognition device 500 provided by the embodiment of the application, wherein the motion recognition device 500 is integrated in the intelligent wearable device.
Furthermore, the embodiment of the application provides a computer readable storage medium, and the computer readable storage medium stores computer instructions which, when executed by a processor, implement the motion recognition method provided by the embodiment of the application. Specifically, the executable program may be built into or installed in the motion recognition apparatus 500, and thus, the motion recognition apparatus 500 may implement the motion recognition method provided by the embodiment of the present application by executing the built-in or installed executable program.
It should be noted that the motion recognition method provided by the embodiment of the present application may also be implemented as a program product, which includes a program code for causing the motion recognition device 500 to perform the motion recognition method provided by the embodiment of the present application when the program product is executable on the motion recognition device 500.
The program product provided by the embodiments of the present application may employ any combination of one or more readable media, where the readable media may be a readable signal medium or a readable storage medium. The readable storage medium may be, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof; more specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product provided by embodiments of the present application may be implemented as a CD-ROM and include program code that may also be run on a computing device. However, the program product provided by the embodiments of the present application is not limited thereto, and in the embodiments of the present application, the readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the elements described above may be embodied in one element in accordance with embodiments of the present application. Conversely, the features and functions of one unit described above may be further divided into a plurality of units to be embodied.
Furthermore, although the operations of the methods of the present application are depicted in the drawings in a particular order, this is not required or suggested that these operations must be performed in this particular order or that all of the illustrated operations must be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments of the present application without departing from the spirit or scope of the embodiments of the application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims and the equivalents thereof, the present application is also intended to include such modifications and variations.

Claims (14)

1. A method of motion recognition, comprising:
Collecting photoplethysmography (PPG) data and acceleration data at the same time during the motion process of a user;
based on the PPG data and the acceleration data, a motion recognition model is adopted to obtain the motion type of the user; the motion recognition model is a classification model for recognizing the motion type, which is obtained by performing classification training based on PPG data and acceleration data corresponding to each motion type;
The motion recognition method further comprises the following steps:
Collecting a sample data set; the sample data set comprises a plurality of sample data corresponding to each motion type, and each sample data comprises PPG data and acceleration data which are acquired at the same time;
selecting part of sample data from the sample data set as training sample data, and slicing each training sample data based on a preset data length to obtain at least one training sample data set;
Respectively preprocessing the at least one training sample data set to obtain a sample feature matrix of the at least one training sample data set;
Respectively inputting the at least one sample feature matrix into the motion recognition model to obtain the predicted motion type of each training sample data in the at least one training sample data set;
determining a loss function value based on the predicted motion type and the labeling motion type of each training sample data in the at least one training sample data set;
and updating model parameters of the motion recognition model based on the loss function value.
2. The motion recognition method of claim 1, wherein collecting the sample data set comprises:
Collecting sample data of each motion type according to a preset proportion to obtain a sample data set; wherein the preset ratio characterizes a ratio of the number of sample data for each motion type.
3. The motion recognition method of claim 1, wherein preprocessing the at least one training sample data set to obtain a sample feature matrix of the at least one training sample data set, respectively, comprises:
For each training sample data set of the at least one training sample data set, generating a first sample matrix and a second sample matrix based on PPG data of each training sample data in the training sample data set and acceleration data of each training sample data, respectively, and generating a sample feature matrix of the training sample data set based on the first sample matrix and the second sample matrix.
4. A method of motion recognition as claimed in claim 3, wherein generating the first and second sample matrices based on PPG data of each training sample data and acceleration data of each training sample data in the set of training sample data, respectively, comprises:
Obtaining each first Hankel matrix element based on PPG data of each training sample data in the training sample data set, and generating a first Hankel matrix as the first sample matrix based on each first Hankel matrix element;
And obtaining each second Hankel matrix element based on acceleration data of each training sample data in the training sample data set, and generating a second Hankel matrix as the second sample matrix based on each second Hankel matrix element.
5. A method of motion recognition as claimed in claim 3, wherein generating a sample feature matrix of the training sample data set based on the first sample matrix and the second sample matrix comprises:
Performing singular value decomposition on the first sample matrix and the second sample matrix respectively to obtain a first singular value matrix of the first sample matrix and a second singular value matrix of the second sample matrix;
and generating a sample feature matrix of the training sample data set based on the first singular value matrix and the second singular value matrix.
6. The motion recognition method of claim 5, further comprising, prior to generating the sample feature matrix of the training sample data set based on the first and second matrices of singular values:
and sequencing the first singular value matrix and the second singular value matrix according to the sequence of the singular values from big to small.
7. The motion recognition method of claim 5, wherein generating a sample feature matrix for the training sample data set based on the first and second matrices of singular values comprises:
Generating a sample feature matrix of the training sample data set based on each first singular value in the first singular value matrix and a modulus of each first singular value, a preset weight, and each second singular value in the second singular value matrix and a modulus of each second singular value.
8. The motion recognition method of claim 5, further comprising, after generating a sample feature matrix for the training sample data set based on the first and second matrices of singular values:
And performing dimension reduction processing on the sample feature matrix by adopting a preset dimension reduction mode.
9. The motion recognition method of any one of claims 2-8, further comprising:
And selecting another part of sample data from the sample data set as verification sample data, and optimally training the motion recognition model based on each verification sample data.
10. The motion recognition method according to any one of claims 1-8, wherein, based on the PPG data and the acceleration data, a motion recognition model is used, and after obtaining the motion type of the user, further comprising:
the motion type, the PPG data, and the acceleration data of the user are displayed.
11. A motion recognition apparatus, comprising:
The data acquisition unit is used for acquiring photoplethysmography (PPG) data and acceleration data at the same time during the motion process of the user;
The motion recognition unit is used for obtaining the motion type of the user by adopting a motion recognition model based on the PPG data and the acceleration data; the motion recognition model is a classification model for recognizing the motion type, which is obtained by performing classification training based on PPG data and acceleration data corresponding to each motion type;
The model training unit is used for collecting a sample data set; the sample data set comprises a plurality of sample data corresponding to each motion type, and each sample data comprises PPG data and acceleration data which are acquired at the same time; selecting part of sample data from the sample data set as training sample data, and slicing each training sample data based on a preset data length to obtain at least one training sample data set; respectively preprocessing the at least one training sample data set to obtain a sample feature matrix of the at least one training sample data set; respectively inputting the at least one sample feature matrix into the motion recognition model to obtain the predicted motion type of each training sample data in the at least one training sample data set; determining a loss function value based on the predicted motion type and the labeling motion type of each training sample data in the at least one training sample data set; and updating model parameters of the motion recognition model based on the loss function value.
12. A motion recognition apparatus, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the motion recognition method according to any one of claims 1-10 when executing the computer program.
13. An intelligent wearable device, comprising the motion recognition apparatus of claim 12, wherein the motion recognition apparatus is integrated in the intelligent wearable device.
14. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the motion recognition method of any one of claims 1-10.
CN202111523186.5A 2021-12-13 2021-12-13 Motion recognition method, device, equipment and medium Active CN114237394B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111523186.5A CN114237394B (en) 2021-12-13 2021-12-13 Motion recognition method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN114237394A CN114237394A (en) 2022-03-25
CN114237394B CN114237394B (en) 2024-05-24

Family

ID=80755512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111523186.5A Active CN114237394B (en) 2021-12-13 2021-12-13 Motion recognition method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114237394B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016061668A1 (en) * 2014-10-23 2016-04-28 2352409 Ontario Inc. Device and method for identifying subject's activity profile
CN107092894A (en) * 2017-04-28 2017-08-25 孙恩泽 A kind of motor behavior recognition methods based on LSTM models
CN110010224A (en) * 2019-03-01 2019-07-12 出门问问信息科技有限公司 User movement data processing method, device, wearable device and storage medium
CN112914536A (en) * 2021-03-24 2021-06-08 平安科技(深圳)有限公司 Motion state detection method and device, computer equipment and storage medium
US11172818B1 (en) * 2018-08-06 2021-11-16 Amazon Technologies, Inc. Streaming analytics of human body movement data
CN113749650A (en) * 2020-06-05 2021-12-07 安徽华米健康科技有限公司 Motion information acquisition method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11928894B2 (en) * 2012-09-18 2024-03-12 Origin Wireless, Inc. Method, apparatus, and system for wireless gait recognition
EP3146896B1 (en) * 2014-02-28 2020-04-01 Valencell, Inc. Method and apparatus for generating assessments using physical activity and biometric parameters
US20170188895A1 (en) * 2014-03-12 2017-07-06 Smart Monitor Corp System and method of body motion analytics recognition and alerting
US10466783B2 (en) * 2018-03-15 2019-11-05 Sanmina Corporation System and method for motion detection using a PPG sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant