CN111184512B - Method for recognizing rehabilitation training actions of upper limbs and hands of stroke patient - Google Patents


Info

Publication number
CN111184512B
CN111184512B (application CN201911394850.3A)
Authority
CN
China
Prior art keywords
layer
attention
data
rehabilitation training
source separation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911394850.3A
Other languages
Chinese (zh)
Other versions
CN111184512A (en)
Inventor
刘勇国
任志扬
李巧勤
杨尚明
刘朗
陈智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201911394850.3A
Publication of CN111184512A
Application granted
Publication of CN111184512B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1124 Determining motor skills
    • A61B5/1125 Grasping motions of hands
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1126 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/389 Electromyography [EMG]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4836 Diagnosis combined with treatment in closed-loop systems or methods
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device

Abstract

The invention discloses a method for recognizing upper-limb and hand rehabilitation training actions of stroke patients. Blind source separation is performed on the electromyographic signal data with a non-negative matrix factorization model, which removes non-stationary muscle activation information and yields a stable time-varying blind source separation result; applying the decomposed time-varying result data to subsequent pattern recognition improves recognition stability and accuracy. Through the CNN-RNN model the learned features maintain both temporal and spatial characteristics. The CNN-RNN model processes the data directly, without manual feature extraction and screening, automatically extracting features and completing classification, and thus realizes end-to-end recognition and analysis of rehabilitation training actions; combined with the attention layer, it applies attention weighting to the hidden states of the second of the two bidirectional GRU layers, so that data with a high contribution receive larger weights and play a larger role, further improving classification accuracy.

Description

Method for recognizing rehabilitation training actions of upper limbs and hands of stroke patient
Technical Field
The invention belongs to the field of machine learning, and particularly relates to an upper limb and hand rehabilitation training action recognition method for a stroke patient.
Background
Rehabilitation training is an important treatment in rehabilitation medicine: different exercise regimens improve the functional motor impairment of the affected limb so that the patient's motor function recovers as far as possible. Among patients with limb dysfunction caused by stroke, 80% suffer from upper-limb dysfunction; of these, only 30% achieve functional recovery of the upper limb, and only 12% achieve good recovery of hand function. Functional rehabilitation of the upper limbs and hands therefore has a profound influence on stroke patients' quality of life and social participation. Recognition of stroke patients' rehabilitation training actions is widely applied: clinically, the recognition result can serve as a control signal for assistive training equipment such as mechanical exoskeletons and prostheses; it can serve as an input signal for serious-game rehabilitation training such as virtual-reality training; and in community or home rehabilitation it enables interactive training or helps doctors remotely monitor training. Surface electromyography (sEMG) is non-invasive, easy to record, and rich in motor-control information, and is therefore commonly used to identify rehabilitation actions.
At present, machine-learning methods that recognize rehabilitation training actions from surface EMG signals comprise three main steps: signal preprocessing, feature extraction, and classification. First, the acquired raw EMG signal is preprocessed: noise is removed by notch filtering, band-pass filtering, full-wave rectification, and similar operations, and the EMG time series is segmented, by thresholding or by hand, into the signal segments corresponding to the training actions. In the feature extraction step, a set of features must be chosen manually and then extracted from the preprocessed EMG signals. The features fall mainly into time-domain features, such as peak value, mean, root mean square, kurtosis, and autoregressive coefficients, and frequency-domain features, such as power spectrum, median frequency, centroid frequency, and frequency root mean square. Finally, the extracted feature set and the corresponding class labels are used as input to train a classification model with a machine-learning algorithm. Classifiers commonly used for rehabilitation action recognition include decision trees, support vector machines (SVM), linear discriminant classifiers (LDC), naive Bayes classifiers (NB), and Gaussian mixture models (GMM). Once the model is trained, newly acquired training-action EMG signals can be recognized after preprocessing and feature extraction.
At present, the mainstream approach to rehabilitation training action recognition is machine learning with manually designed features: feature extraction followed by classification.
Such models have limitations. Raw EMG signals are inherently non-stationary, and differences in patients' physical signs, in stroke injury, and in how completely actions are performed produce large differences in the EMG data, which degrades recognition; this non-stationarity is difficult to eliminate with basic preprocessing such as filtering and rectification. Moreover, this kind of feature engineering usually requires domain-specific expertise, further increasing the cost of preprocessing. Existing models' recognition performance depends heavily on feature selection, and different features affect classification performance differently; manually chosen features may be correlated, causing information redundancy; and for physiological time-series data, whose time-varying information is also important, manual feature extraction loses that information. Commonly used machine-learning classifiers (e.g., SVM, LDC, GMM) discriminate poorly between the impaired upper-limb and hand motions of stroke patients and between similar motions, such as different finger movements.
Disclosure of Invention
Aiming at the above defects in the prior art, the method for recognizing upper-limb and hand rehabilitation training actions of stroke patients solves the problems of information redundancy and loss of time-varying information caused by manually designed and extracted features, and of the poor discrimination of conventional machine-learning classifiers.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that: a stroke patient upper limb and hand rehabilitation training action recognition method comprises the following steps:
s1, collecting myoelectric signal data of rehabilitation training actions;
s2, preprocessing electromyographic signal data;
s3, decomposing the preprocessed electromyographic signal data by adopting a non-negative matrix decomposition model to obtain a plurality of blind source separation result matrixes;
s4, performing iterative training on the CNN-RNN model by adopting a plurality of blind source separation result matrixes to obtain a trained CNN-RNN model;
and S5, repeating the steps S1-S3 on the newly collected training action electromyographic data to obtain a plurality of blind source separation result matrixes, and inputting the plurality of blind source separation result matrixes into the trained CNN-RNN model to obtain the rehabilitation training action recognition category.
Further: step S3 includes the following steps:
s31, manually segmenting the preprocessed electromyographic signal data in a time dimension to obtain an electromyographic signal data matrix formed by each piece of data of the corresponding time sequence;
and S32, decomposing the electromyographic signal data matrix by adopting a non-negative matrix decomposition model to obtain a plurality of blind source separation result matrixes.
Further: the step S4 includes the following steps:
s41, establishing a CNN network model and an RNN network model, and initializing the iteration number m to be 0;
s42, inputting the blind source separation result matrixes into a CNN network model, and performing feature extraction and pooling dimension reduction operation to obtain feature vectors;
s43, inputting the feature vector into an RNN network model for processing to obtain a probability value of the predicted action category;
S44, calculating the distance $Loss_m$ between the predicted and true action category probability values through cross entropy;
S45, if the difference between the $m$-th value $Loss_m$ and the $(m-1)$-th value $Loss_{m-1}$ is smaller than a threshold, the trained CNN-RNN model is obtained; otherwise, the weight and bias parameters of the CNN network model and the weight parameters of the RNN network model are updated by batch stochastic gradient descent, the iteration number $m$ is increased by 1, and the process returns to step S42.
Further: the CNN network model in step S41 includes: three convolutional layers, three pooling layers and three active layers.
Further: the input and output of the convolutional layer are calculated by the formula:
Figure BDA0002346021670000041
wherein the content of the first and second substances,
Figure BDA0002346021670000042
is the data of the ith input channel of the l-1 th convolutional layer,
Figure BDA0002346021670000043
data of the jth output channel of the first convolutional layer, Ml-1The number of input channels of the l-1 th convolutional layer,
Figure BDA0002346021670000044
for the l-th layer of the convolution kernel weights,
Figure BDA0002346021670000045
the first layer of convolution layer is offset, l is more than or equal to 1 and less than or equal to 3; the data of the ith input channel of the 1 st convolutional layer is a blind source separation result matrix Hr×nThe ith row of data.
Further: the RNN network model in step S41 includes: two layers of bidirectional GRU layers, an attention layer and a full connection layer, wherein each layer of bidirectional GRU layer comprises a TA GRU unit;
the GRU unit comprises an updating gate and a resetting gate;
the input in the first of the two bidirectional GRU layers is a feature vector, and the output of the second layer is the input of the attention layer;
the output of the attention layer is the input of the fully connected layer.
Further: the state update equation for the GRU unit is as follows:
Figure BDA0002346021670000046
wherein, the [ alpha ], [ beta ]]Representing the connection of two vectors,. representing the inner product of the vectors,. sigma.rTo reset the weight matrix of the gate, WzIn order to update the weight matrix for the gate,
Figure BDA0002346021670000047
as a candidate set
Figure BDA0002346021670000048
Weight matrix of xtIs a feature vector, htIs an implicit state at time t, ht-1Is an implicit state at time t-1.
Further: inputting the hidden states $h_t$ of the second of the two bidirectional GRU layers into the attention layer comprises the following steps:
A1, input the hidden state $h_t$ of the second bidirectional GRU layer into the attention layer;
A2, initialize the weight $W_w$ and bias $b_w$ of the attention layer;
A3, from the weight $W_w$ and bias $b_w$, obtain the hidden-layer representation $u_t$ of the hidden state $h_t$ through the tanh hyperbolic tangent activation function;
A4, randomly initialize a weight vector $u_w$ and apply softmax normalization to the hidden-layer representation $u_t$ to obtain the attention weight $\alpha_t$;
A5, weight the hidden state $h_t$ by the attention weight $\alpha_t$ to obtain the attention-weighted representation $q_t$ of the hidden state $h_t$.
Further: inputting the attention-weighted representation $q_t$ into the fully connected layer comprises the following steps:
B1, input the attention-weighted representations $q_t$ into the fully connected layer to obtain the attention-weighted outputs $o_k$:

$$o_k = W_k q + b_k, \quad k = 1, \dots, C$$

where $q$ is the concatenation of the $q_t$ and $C$ is the number of neurons of the fully connected layer;
b2 weighting attention okAnd carrying out random inactivation operation and classification operation by using softmax to obtain the probability value of the predicted action category.
Further: the probability value of the predicted action category in step B2 is calculated as:

$$s_k = \frac{e^{o_k}}{\sum_{j=1}^{C} e^{o_j}}$$

where $s_k$ is the probability value of the predicted $k$-th action.
The invention has the beneficial effects that: the method for recognizing upper-limb and hand rehabilitation training actions of stroke patients performs blind source separation on the electromyographic signal data with a non-negative matrix factorization model, removing non-stationary muscle activation information and yielding a stable time-varying blind source separation result; applying the decomposed time-varying result data to subsequent pattern recognition improves recognition stability and accuracy. The CNN network model preserves the spatial characteristics of the blind source separation result data, while the RNN network model fuses the feature data and supplies time-dimension information that aids discrimination of the current data; through the CNN-RNN model the learned features maintain both temporal and spatial characteristics. The CNN-RNN model processes the data directly, without manual feature extraction and screening, automatically extracting features and completing classification, and thus realizes end-to-end recognition and analysis of rehabilitation training actions; combined with the attention layer, it applies attention weighting to the hidden states of the second of the two bidirectional GRU layers, so that data with a high contribution receive larger weights and play a larger role, further improving classification accuracy.
Drawings
FIG. 1 is a flow chart of a method for recognizing rehabilitation training actions of upper limbs and hands of a stroke patient;
FIG. 2 is a schematic diagram of a CNN network model structure;
FIG. 3 is a diagram of a GRU unit architecture;
fig. 4 is a network model structure diagram of a part of the RNN network model.
Detailed Description
The following description of the embodiments is provided to facilitate understanding of the invention by those skilled in the art, but it should be understood that the invention is not limited to the scope of the embodiments; to those skilled in the art, various changes apparent within the spirit and scope of the invention as defined in the appended claims are permitted, and everything produced using the inventive concept is protected.
As shown in fig. 1, a method for recognizing rehabilitation training actions of upper limbs and hands of a stroke patient includes the following steps:
s1, collecting myoelectric signal data of rehabilitation training actions;
Electrodes are attached to the subject's extensor muscle group, flexor muscle group, biceps brachii, triceps brachii, deltoid, thenar muscles, and hypothenar muscles; 8 channels of surface electromyography (sEMG) signals are collected in total at a sampling frequency of 2 kHz. The electrode placement positions are shown in Table 1.
TABLE 1 EMG ELECTRODE POSITION DESIGN
(Table 1 is provided as an image in the original document.)
In this embodiment, 25 functional movements covering rehabilitation training of the upper arm, forearm, and hand are designed, as shown in Table 2. The subject sits relaxed on an armchair. For hand, wrist, and elbow motions, both arms rest in a fixed position on a table top; following video or voice instructions, the subject goes from rest to muscle contraction and holds the posture, 5 seconds in total, and each motion is repeated 6 times with 5 seconds of rest between motions. For shoulder movements, the subject sits upright on a chair with no obstruction in front and, following visual or audio instructions, goes from rest to muscle contraction, completes the movement, and holds the position for 5 seconds; each movement is repeated 6 times with 5 seconds of rest between movements. During the experiment, a video of a healthy person performing each movement served as a demonstration to guide the subject to perform (or intend to perform) each movement.
TABLE 2 Experimental action design
(Table 2 is provided as an image in the original document.)
S2, preprocessing the electromyographic signal data: power-line interference is removed with a 50 Hz notch filter, 20 Hz-450 Hz band-pass filtering eliminates motion artifacts (<20 Hz) and high-frequency noise (>450 Hz), and full-wave rectification is applied;
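The preprocessing chain of step S2 (50 Hz notch, 20-450 Hz band-pass, full-wave rectification) can be sketched with SciPy; the filter order and the notch quality factor below are assumptions for illustration, not values stated in the patent:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def preprocess_emg(raw, fs=2000.0):
    """Notch out 50 Hz mains noise, band-pass 20-450 Hz, full-wave rectify.

    raw: (channels, samples) array of surface EMG sampled at fs Hz.
    """
    # 50 Hz notch (Q=30 is an assumed quality factor)
    b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=fs)
    # 4th-order Butterworth band-pass, 20-450 Hz (order is an assumption)
    b_bp, a_bp = butter(4, [20.0, 450.0], btype="bandpass", fs=fs)
    x = filtfilt(b_notch, a_notch, raw, axis=-1)
    x = filtfilt(b_bp, a_bp, x, axis=-1)
    return np.abs(x)  # full-wave rectification

rng = np.random.default_rng(0)
emg = rng.standard_normal((8, 4000))  # 8 channels, 2 s at 2 kHz
clean = preprocess_emg(emg)
print(clean.shape, bool((clean >= 0).all()))
```

Zero-phase `filtfilt` is used so the filtering does not shift the EMG bursts in time relative to the action labels.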
s3, decomposing the preprocessed electromyographic signal data by adopting a non-negative matrix decomposition model to obtain a plurality of blind source separation result matrixes;
step S3 includes the following steps:
s31, manually segmenting the preprocessed electromyographic signal data in a time dimension to obtain an electromyographic signal data matrix formed by each piece of data of the corresponding time sequence;
and S32, decomposing the electromyographic signal data matrix by adopting a non-negative matrix decomposition model to obtain a plurality of blind source separation result matrixes.
The non-negative matrix factorization model in step S32 is:

$$X_{m \times n} = W_{m \times r} \times H_{r \times n}$$

where $X_{m \times n}$ is the $m \times n$ electromyographic signal data matrix, $m$ the number of electrodes, $n$ the number of measured values, $W_{m \times r}$ the $m \times r$ muscle activity matrix, and $H_{r \times n}$ the $r \times n$ blind source separation result matrix.
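A minimal sketch of the factorization $X_{m \times n} = W_{m \times r} \times H_{r \times n}$ using Lee-Seung multiplicative updates; the patent does not specify its update rule, and the rank, iteration count, and random data here are assumptions:

```python
import numpy as np

def nmf(X, r, n_iter=200, seed=0):
    """Factor a non-negative matrix X (m x n) into W (m x r) and H (r x n),
    X ~= W @ H, with Lee-Seung multiplicative updates (an assumed algorithm)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 1e-6
    H = rng.random((r, n)) + 1e-6
    for _ in range(n_iter):
        # multiplicative updates keep W and H non-negative throughout
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

rng = np.random.default_rng(1)
X = np.abs(rng.standard_normal((8, 100)))  # rectified 8-channel EMG segment
W, H = nmf(X, r=4)                         # r=4 sources is illustrative
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(W.shape, H.shape)
```

The rows of `H` play the role of the blind source separation result fed to the CNN-RNN model.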
S4, performing iterative training on the CNN-RNN model by adopting a plurality of blind source separation result matrixes to obtain a trained CNN-RNN model;
the step S4 includes the following steps:
s41, establishing a CNN network model and an RNN network model, and initializing the iteration number m to be 0;
the CNN network model in step S41 includes: three convolutional layers, three pooling layers, and three active layers, as shown in figure 2.
The input and output of a convolutional layer are related by:

$$x_j^{l} = \sum_{i=1}^{M_{l-1}} x_i^{l-1} * k_{ij}^{l} + b_j^{l}$$

where $x_i^{l-1}$ is the data of the $i$-th input channel of the $(l-1)$-th convolutional layer, $x_j^{l}$ is the data of the $j$-th output channel of the $l$-th convolutional layer, $M_{l-1}$ is the number of input channels of the $(l-1)$-th convolutional layer, $k_{ij}^{l}$ are the convolution kernel weights of layer $l$, $b_j^{l}$ is the bias of layer $l$, and $1 \le l \le 3$; the data of the $i$-th input channel of the first convolutional layer is the $i$-th row of the blind source separation result matrix $H_{r \times n}$.
S42, inputting the blind source separation result matrixes into a CNN network model, and performing feature extraction and pooling dimension reduction operation to obtain feature vectors;
Nonlinearity is introduced into the CNN network model through the activation layers, improving the fitting capability of the neural network.
The pooling layers reduce the dimensionality of the input data and the number of parameters and weights to be trained, which lowers computational cost and controls overfitting.
S43, inputting the feature vector into an RNN network model for processing to obtain a probability value of the predicted action category;
the RNN network model includes: two bidirectional GRU layers, an attention layer and a full connection layer, wherein each bidirectional GRU layer comprises T' GRU units; the GRU unit comprises an updating gate and a resetting gate;
the input in the first of the two bidirectional GRU layers is a feature vector, and the output of the second layer is the input of the attention layer;
the output of the attention layer is the input of the fully connected layer, as shown in FIG. 3.
The state update equations of the GRU unit are as follows:

$$z_t = \sigma(W_z \cdot [h_{t-1}, x_t])$$
$$r_t = \sigma(W_r \cdot [h_{t-1}, x_t])$$
$$\tilde{h}_t = \tanh(W_{\tilde{h}} \cdot [r_t * h_{t-1}, x_t])$$
$$h_t = (1 - z_t) * h_{t-1} + z_t * \tilde{h}_t$$

where $[\,,\,]$ represents the concatenation of two vectors, $\cdot$ the inner product, $*$ the elementwise product, $\sigma$ the activation function, $W_r$ the weight matrix of the reset gate $r_t$, $W_z$ the weight matrix of the update gate $z_t$, $W_{\tilde{h}}$ the weight matrix of the candidate set $\tilde{h}_t$, $x_t$ the feature vector, $h_t$ the hidden state at time $t$, and $h_{t-1}$ the hidden state at time $t-1$.
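The four GRU update equations can be exercised directly in NumPy; the state and input dimensions and the 0.1 weight scale are illustrative assumptions:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, Wz, Wr, Wh):
    """One GRU state update matching the equations above;
    [h, x] is concatenation, * is the elementwise product."""
    hx = np.concatenate([h_prev, x_t])
    z = sigmoid(Wz @ hx)                                       # update gate
    r = sigmoid(Wr @ hx)                                       # reset gate
    h_cand = np.tanh(Wh @ np.concatenate([r * h_prev, x_t]))   # candidate state
    return (1.0 - z) * h_prev + z * h_cand                     # new hidden state

rng = np.random.default_rng(0)
d_h, d_x = 5, 3
Wz, Wr, Wh = (rng.standard_normal((d_h, d_h + d_x)) * 0.1 for _ in range(3))
h = np.zeros(d_h)
for x_t in rng.standard_normal((7, d_x)):  # unroll over 7 time steps
    h = gru_step(x_t, h, Wz, Wr, Wh)
print(h.shape)
```

Because $h_t$ is a convex combination of $h_{t-1}$ and a tanh output, its entries stay bounded in $[-1, 1]$ when started from zero.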
As shown in fig. 4, inputting the hidden states $h_t$ of the second of the two bidirectional GRU layers into the attention layer comprises the following steps:
A1, input the hidden state $h_t$ of the second bidirectional GRU layer into the attention layer;
A2, initialize the weight $W_w$ and bias $b_w$ of the attention layer;
A3, from the weight $W_w$ and bias $b_w$, obtain the hidden-layer representation $u_t$ of the hidden state $h_t$ through the tanh hyperbolic tangent activation function;
A4, randomly initialize a weight vector $u_w$ and apply softmax normalization to the hidden-layer representation $u_t$ to obtain the attention weight $\alpha_t$;
A5, weight the hidden state $h_t$ by the attention weight $\alpha_t$ to obtain the attention-weighted representation $q_t$ of the hidden state $h_t$.
The attention-weighted representation $q_t$ in step A5 is calculated as:

$$u_t = \tanh(W_w h_t + b_w)$$
$$\alpha_t = \frac{\exp(u_t^{\top} u_w)}{\sum_{t} \exp(u_t^{\top} u_w)}$$
$$q_t = \alpha_t h_t$$
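The three attention equations above can be sketched in NumPy; the number of time steps and the hidden dimension are illustrative, and the softmax normalizes over the time steps:

```python
import numpy as np

def attention(H, Ww, bw, uw):
    """Attention weighting over hidden states, per u_t, alpha_t, q_t above.

    H: (T, d) hidden states h_t from the second bidirectional GRU layer.
    """
    U = np.tanh(H @ Ww.T + bw)        # u_t = tanh(Ww h_t + bw), rowwise
    scores = U @ uw                   # u_t^T u_w, one score per time step
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()              # softmax over the T time steps
    Q = alpha[:, None] * H            # q_t = alpha_t h_t
    return alpha, Q

rng = np.random.default_rng(0)
H = rng.standard_normal((6, 4))       # 6 time steps, 4-dim hidden states
Ww = rng.standard_normal((4, 4))
bw = np.zeros(4)
uw = rng.standard_normal(4)           # randomly initialized, as in step A4
alpha, Q = attention(H, Ww, bw, uw)
print(Q.shape)
```

High-scoring time steps receive larger $\alpha_t$, which is exactly how high-contribution data are given greater weight.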
Inputting the attention-weighted representation $q_t$ into the fully connected layer comprises the following steps:
B1, input the attention-weighted representations $q_t$ into the fully connected layer to obtain the attention-weighted outputs $o_k$:

$$o_k = W_k q + b_k, \quad k = 1, \dots, C$$

where $q$ is the concatenation of the $q_t$ and $C$ is the number of neurons of the fully connected layer;
b2 weighting attention okAnd carrying out random inactivation operation and classification operation by using softmax to obtain the probability value of the predicted action category.
The probability value of the predicted action category in step B2 is calculated as:

$$s_k = \frac{e^{o_k}}{\sum_{j=1}^{C} e^{o_j}}$$

where $s_k$ is the probability value of the predicted $k$-th action.
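The fully connected scores $o_k$ and the softmax probabilities $s_k$ can be sketched together; $C = 25$ matches the 25 experimental actions, while the input dimension and random weights are assumptions:

```python
import numpy as np

def predict_probs(q, Wk, bk):
    """Fully connected outputs o_k followed by softmax probabilities s_k.

    q: flattened attention-weighted representation.
    """
    o = Wk @ q + bk              # o_k: one score per action class
    e = np.exp(o - o.max())      # subtract max for numerical stability
    return e / e.sum()           # s_k = exp(o_k) / sum_j exp(o_j)

rng = np.random.default_rng(0)
C, d = 25, 8                     # 25 rehabilitation actions; d is illustrative
q = rng.standard_normal(d)
s = predict_probs(q, rng.standard_normal((C, d)), np.zeros(C))
print(s.shape)
```

The output is a proper probability distribution over the $C$ action classes, so `argmax(s)` gives the recognized action.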
S44, calculating the distance $Loss_m$ between the predicted and true action category probability values through cross entropy;
The distance $Loss_m$ in step S44 is calculated as:

$$Loss_m = -\frac{1}{Batch} \sum_{n=1}^{Batch} y_n \cdot \log O_n$$

where $Batch$ is the number of samples in the batch, $n$ indexes the $n$-th data item, $O_n = \{s_1, s_2, \dots, s_C\}$ is the predicted action category probability distribution of the $n$-th data item, and $y_n$ is the true action category probability of the $n$-th data item.
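$Loss_m$ for a batch can be computed as below, assuming one-hot true labels so the inner product $y_n \cdot \log O_n$ picks out the log-probability of the true class; the example values are illustrative:

```python
import numpy as np

def batch_cross_entropy(probs, labels):
    """Mean cross-entropy over a batch, with one-hot true labels.

    probs: (Batch, C) predicted distributions O_n; labels: (Batch,) class ids.
    """
    batch = probs.shape[0]
    # pick the predicted probability of each sample's true class
    picked = probs[np.arange(batch), labels]
    return -np.log(picked + 1e-12).mean()  # epsilon guards against log(0)

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
loss = batch_cross_entropy(probs, labels)
print(round(float(loss), 4))
```

Training stops once consecutive values $Loss_m$ and $Loss_{m-1}$ differ by less than the threshold in step S45.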
S45, if the difference between the $m$-th value $Loss_m$ and the $(m-1)$-th value $Loss_{m-1}$ is smaller than a threshold, the trained CNN-RNN model is obtained; otherwise, the weight and bias parameters of the CNN network model and the weight parameters of the RNN network model are updated by batch stochastic gradient descent, the iteration number $m$ is increased by 1, and the process returns to step S42.
And S5, repeating the steps S1-S3 on the newly collected training action electromyographic data to obtain a plurality of blind source separation result matrixes, and inputting the plurality of blind source separation result matrixes into the trained CNN-RNN model to obtain the rehabilitation training action recognition category.

Claims (5)

1. A stroke patient upper limb and hand rehabilitation training action recognition method is characterized by comprising the following steps:
s1, collecting myoelectric signal data of rehabilitation training actions;
s2, preprocessing electromyographic signal data;
s3, decomposing the preprocessed electromyographic signal data by adopting a non-negative matrix decomposition model to obtain a plurality of blind source separation result matrixes;
s4, performing iterative training on the CNN-RNN model by adopting a plurality of blind source separation result matrixes to obtain a trained CNN-RNN model;
s5, repeating the steps S1-S3 on newly collected training action electromyographic data to obtain a plurality of blind source separation result matrixes, and inputting the plurality of blind source separation result matrixes into a trained CNN-RNN model to obtain rehabilitation training action recognition categories;
step S3 includes the following steps:
s31, manually segmenting the preprocessed electromyographic signal data in a time dimension to obtain an electromyographic signal data matrix formed by each piece of data of the corresponding time sequence;
s32, decomposing the electromyographic signal data matrix by adopting a non-negative matrix decomposition model to obtain a plurality of blind source separation result matrixes;
the step S4 includes the following steps:
s41, establishing a CNN network model and an RNN network model, and initializing the iteration number m to be 0;
the RNN network model in step S41 includes: two bidirectional GRU layers, an attention layer and a full connection layer, wherein each bidirectional GRU layer comprises T' GRU units;
the GRU unit comprises an updating gate and a resetting gate;
the input in the first of the two bidirectional GRU layers is a feature vector, and the output of the second layer is the input of the attention layer;
the output of the attention layer is the input of the full connection layer;
the state update equations of the GRU unit are as follows:

r_t = σ(W_r · [h_{t-1}, x_t])
z_t = σ(W_z · [h_{t-1}, x_t])
h̃_t = tanh(W_h̃ · [r_t ⊙ h_{t-1}, x_t])
h_t = (1 − z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t

wherein [ , ] represents the concatenation of two vectors, · represents the matrix–vector product, ⊙ represents the element-wise product, σ is the sigmoid activation function, tanh is the hyperbolic tangent activation function, W_r is the weight matrix of the reset gate, W_z is the weight matrix of the update gate, W_h̃ is the weight matrix of the candidate set h̃_t, x_t is the feature vector, h_t is the hidden state at time t, and h_{t-1} is the hidden state at time t−1;
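The state update equations above can be sketched as a single GRU step in NumPy (the weight shapes and random inputs are illustrative, not taken from the patent):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wr, Wz, Wh):
    """One GRU state update: reset gate r, update gate z and candidate
    state h_cand, all acting on the concatenation [h_{t-1}, x_t]."""
    hx = np.concatenate([h_prev, x_t])
    r = sigmoid(Wr @ hx)                                     # reset gate
    z = sigmoid(Wz @ hx)                                     # update gate
    h_cand = np.tanh(Wh @ np.concatenate([r * h_prev, x_t])) # candidate set
    return (1 - z) * h_prev + z * h_cand                     # new hidden state

rng = np.random.default_rng(0)
d_x, d_h = 5, 8                    # feature and hidden sizes (illustrative)
Wr, Wz, Wh = (rng.standard_normal((d_h, d_h + d_x)) for _ in range(3))
h = gru_step(rng.standard_normal(d_x), np.zeros(d_h), Wr, Wz, Wh)
```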
the CNN network model in step S41 includes: three convolutional layers, three pooling layers and three activation layers; a convolutional layer is connected to an activation layer and then to a pooling layer to form one group of network units, and three such groups of network units are connected in sequence to form the CNN network model;
s42, inputting the blind source separation result matrixes into a CNN network model, and performing feature extraction and pooling dimension reduction operation to obtain feature vectors;
s43, inputting the feature vector into an RNN network model for processing to obtain a probability value of the predicted action category;
s44, calculating the distance Loss of the probability values of the predicted action category and the real action category through cross entropym
S45, judging the Loss of the mth timemValue sum Loss of m-1 timesm-1And if the difference value of the values is smaller than the threshold value, obtaining the trained CNN-RNN model, otherwise, updating the weight parameter and the offset parameter in the CNN network model and the weight parameter of the RNN network model by adopting a batch random gradient descent method, adding 1 to the iteration number m, and jumping to the step S42.
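The stopping rule in step S45 (terminate when successive loss values differ by less than a threshold, otherwise update parameters and iterate) can be sketched in isolation; the loss function, threshold and iteration cap below are placeholders:

```python
def train_until_converged(loss_step, threshold=1e-4, max_iter=1000):
    """Run loss_step(m) once per iteration and stop as soon as the
    absolute difference between consecutive losses drops below the
    threshold, mirroring the |Loss_m - Loss_{m-1}| < threshold test."""
    prev = float("inf")
    for m in range(max_iter):
        loss = loss_step(m)          # one epoch: forward pass, loss, SGD update
        if abs(prev - loss) < threshold:
            return m, loss           # converged at iteration m
        prev = loss
    return max_iter, prev

# toy loss decaying geometrically toward 1.0
m, loss = train_until_converged(lambda m: 1.0 + 0.5 ** m)
```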
2. The method for recognizing rehabilitation training actions of upper limbs and hands of stroke patients as claimed in claim 1, wherein the calculation formula of the input and output of the convolutional layer is as follows:
x_j^l = Σ_{i=1}^{M_{l-1}} x_i^{l-1} ∗ w_{ij}^l + b_j^l

wherein ∗ denotes the convolution operation, x_i^{l-1} is the data of the i-th input channel of the (l−1)-th convolutional layer, x_j^l is the data of the j-th output channel of the l-th convolutional layer, M_{l-1} is the number of input channels of the (l−1)-th convolutional layer, w_{ij}^l is the convolution kernel weight of the l-th convolutional layer, b_j^l is the bias of the l-th convolutional layer, and 1 ≤ l ≤ 3; the data of the i-th input channel of the 1st convolutional layer is the i-th row of the blind source separation result matrix H_{r×n}.
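Assuming the formula denotes a standard multi-channel 1-D convolution (summing per-channel convolutions and adding a per-output-channel bias), it can be sketched as follows; the network's separate activation layer is folded in here as tanh purely for illustration, and all shapes are made up:

```python
import numpy as np

def conv_layer(x, w, b, f=np.tanh):
    """x: (M_in, n) input channels; w: (M_in, M_out, k) kernels;
    b: (M_out,) biases.  Returns (M_out, n - k + 1).  Reversing the
    kernel turns np.convolve into the cross-correlation form
    typically used in CNNs."""
    m_in, m_out, k = w.shape
    y = np.zeros((m_out, x.shape[1] - k + 1))
    for j in range(m_out):               # each output channel j
        for i in range(m_in):            # sum over input channels i
            y[j] += np.convolve(x[i], w[i, j][::-1], mode="valid")
        y[j] += b[j]
    return f(y)

rng = np.random.default_rng(0)
out = conv_layer(rng.standard_normal((4, 30)),     # 4 input rows of H
                 rng.standard_normal((4, 2, 5)),   # 2 output channels, width 5
                 np.zeros(2))
```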
3. The method for recognizing rehabilitation training actions of upper limbs and hands of stroke patients as claimed in claim 1, wherein inputting the hidden state h_t of the second layer of the two bidirectional GRU layers into the attention layer for processing comprises the following steps:
A1, inputting the hidden state h_t of the second layer of the two bidirectional GRU layers into the attention layer;
A2, initializing the weight W_w and the bias b_w of the attention layer;
A3, according to the weight W_w and the bias b_w of the attention layer, obtaining the hidden-layer representation u_t of the hidden state h_t through the tanh hyperbolic tangent activation function;
A4, randomly initializing a weight vector u_w, and performing softmax normalization on the hidden-layer representation u_t to obtain the attention weight α_t;
A5, weighting the hidden state h_t by the attention weight α_t to obtain the attention-weighted representation q_t of the hidden state h_t.
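Steps A1–A5 match a standard soft-attention mechanism over the T hidden states; a sketch under assumed shapes (T time steps, hidden size d, attention size d_a, all illustrative). It is shown here as a pooled summary vector; the claim's q_t may equivalently denote the per-step weighted state α_t · h_t:

```python
import numpy as np

def attention_pool(H, Ww, bw, uw):
    """H: (T, d) hidden states h_t.  Computes u_t = tanh(Ww h_t + bw),
    alpha_t = softmax(u_t . uw) and the weighted sum q = sum_t alpha_t h_t."""
    U = np.tanh(H @ Ww.T + bw)           # hidden-layer representations u_t
    scores = U @ uw                      # similarity to context vector u_w
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                 # softmax-normalized attention weights
    return alpha @ H, alpha              # weighted representation q, weights

rng = np.random.default_rng(0)
T, d, d_a = 6, 8, 4
q, alpha = attention_pool(rng.standard_normal((T, d)),
                          rng.standard_normal((d_a, d)),
                          rng.standard_normal(d_a),
                          rng.standard_normal(d_a))
```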
4. The method for recognizing rehabilitation training actions of upper limbs and hands of stroke patients as claimed in claim 1, wherein inputting the attention-weighted representation q_t into the full connection layer for processing comprises the following steps:
B1, inputting the attention-weighted representation q_t into the full connection layer, and performing discretization processing to obtain the attention weights o_k:

o_k = w_k · q_t + b_k,  k = 1, …, C

wherein w_k and b_k are the weight vector and bias of the k-th neuron of the full connection layer, and C is the number of neurons of the full connection layer;
B2, performing a random inactivation (dropout) operation on the attention weights o_k and a classification operation using softmax to obtain the probability value of the predicted action category.
5. The method for recognizing rehabilitation training actions of upper limbs and hands of stroke patients as claimed in claim 4, wherein the probability value of the predicted action category in step B2 is calculated by the formula:

s_k = exp(o_k) / Σ_{c=1}^{C} exp(o_c)

wherein s_k is the probability value of the predicted k-th action.
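The formula in claim 5 is the standard softmax; it can be verified numerically (the input vector o below is illustrative):

```python
import numpy as np

def softmax(o):
    """s_k = exp(o_k) / sum_c exp(o_c); shifting by max(o) does not
    change the result but avoids numerical overflow."""
    e = np.exp(o - np.max(o))
    return e / e.sum()

o = np.array([2.0, 1.0, 0.1])   # attention-weighted FC outputs o_k, C = 3
s = softmax(o)
```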
CN201911394850.3A 2019-12-30 2019-12-30 Method for recognizing rehabilitation training actions of upper limbs and hands of stroke patient Active CN111184512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911394850.3A CN111184512B (en) 2019-12-30 2019-12-30 Method for recognizing rehabilitation training actions of upper limbs and hands of stroke patient

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911394850.3A CN111184512B (en) 2019-12-30 2019-12-30 Method for recognizing rehabilitation training actions of upper limbs and hands of stroke patient

Publications (2)

Publication Number Publication Date
CN111184512A CN111184512A (en) 2020-05-22
CN111184512B (en) 2021-06-01

Family

ID=70684400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911394850.3A Active CN111184512B (en) 2019-12-30 2019-12-30 Method for recognizing rehabilitation training actions of upper limbs and hands of stroke patient

Country Status (1)

Country Link
CN (1) CN111184512B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111938660B (en) * 2020-08-13 2022-04-12 电子科技大学 Stroke patient hand rehabilitation training action recognition method based on array myoelectricity
CN111950460B (en) * 2020-08-13 2022-09-20 电子科技大学 Muscle strength self-adaptive stroke patient hand rehabilitation training action recognition method
CN112043269B (en) * 2020-09-27 2021-10-19 中国科学技术大学 Muscle space activation mode extraction and recognition method in gesture motion process
CN114081513B (en) * 2021-12-13 2023-04-07 苏州大学 Electromyographic signal-based abnormal driving behavior detection method and system
CN116013548B (en) * 2022-12-08 2024-04-09 广州视声健康科技有限公司 Intelligent ward monitoring method and device based on computer vision
CN115831368B (en) * 2022-12-28 2023-06-16 东南大学附属中大医院 Rehabilitation analysis management system based on cerebral imaging stroke patient data

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018119316A1 (en) * 2016-12-21 2018-06-28 Emory University Methods and systems for determining abnormal cardiac activity
CN108388348A (en) * 2018-03-19 2018-08-10 浙江大学 A kind of electromyography signal gesture identification method based on deep learning and attention mechanism
CN109480838A (en) * 2018-10-18 2019-03-19 北京理工大学 A kind of continuous compound movement Intention Anticipation method of human body based on surface layer electromyography signal
CN109924977A (en) * 2019-03-21 2019-06-25 西安交通大学 A kind of surface electromyogram signal classification method based on CNN and LSTM
CN110337269A (en) * 2016-07-25 2019-10-15 开创拉布斯公司 Method and apparatus for inferring user intent based on neuromuscular signals
CN110399846A (en) * 2019-07-03 2019-11-01 北京航空航天大学 A kind of gesture identification method based on multichannel electromyography signal correlation
CN110610172A (en) * 2019-09-25 2019-12-24 南京邮电大学 Myoelectric gesture recognition method based on RNN-CNN architecture

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10878807B2 (en) * 2015-12-01 2020-12-29 Fluent.Ai Inc. System and method for implementing a vocal user interface by combining a speech to text system and a speech to intent system
KR101785500B1 (en) * 2016-02-15 2017-10-16 인하대학교산학협력단 A monophthong recognition method based on facial surface EMG signals by optimizing muscle mixing
US11635736B2 (en) * 2017-10-19 2023-04-25 Meta Platforms Technologies, Llc Systems and methods for identifying biological structures associated with neuromuscular source signals
US10709390B2 (en) * 2017-03-02 2020-07-14 Logos Care, Inc. Deep learning algorithms for heartbeats detection
CN109308459B (en) * 2018-09-05 2022-06-24 南京大学 Gesture estimation method based on finger attention model and key point topology model
CN109359619A (en) * 2018-10-31 2019-02-19 浙江工业大学之江学院 A high-density surface EMG signal decomposition method based on convolutive blind source separation
CN109820525A (en) * 2019-01-23 2019-05-31 五邑大学 A kind of driving fatigue recognition methods based on CNN-LSTM deep learning model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on sEMG gesture recognition based on deep neural networks; Zhang Longjiao et al.; Computer Engineering and Applications; 2019-06-05 (No. 23); pp. 113-119 *

Also Published As

Publication number Publication date
CN111184512A (en) 2020-05-22

Similar Documents

Publication Publication Date Title
CN111184512B (en) Method for recognizing rehabilitation training actions of upper limbs and hands of stroke patient
CN110765920B (en) Motor imagery classification method based on convolutional neural network
CN107736894A (en) A kind of electrocardiosignal Emotion identification method based on deep learning
CN103440498A (en) Surface electromyogram signal identification method based on LDA algorithm
CN108681396A (en) Man-machine interactive system and its method based on brain-myoelectricity bimodal nerve signal
CN112120697A (en) Muscle fatigue advanced prediction and classification method based on surface electromyographic signals
US20230244909A1 (en) Adaptive brain-computer interface decoding method based on multi-model dynamic integration
CN110013248A (en) Brain electricity tensor mode identification technology and brain-machine interaction rehabilitation system
CN109498370A (en) Joint of lower extremity angle prediction technique based on myoelectricity small echo correlation dimension
CN112043473A (en) Parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception of intelligent artificial limb
Zheng et al. Concurrent prediction of finger forces based on source separation and classification of neuron discharge information
CN113111831A (en) Gesture recognition technology based on multi-mode information fusion
CN115568866A (en) System and method for evaluating nerve injury
Wang et al. Deep convolutional neural network for decoding EMG for human computer interaction
CN110738093B (en) Classification method based on improved small world echo state network electromyography
CN110321856B (en) Time-frequency multi-scale divergence CSP brain-computer interface method and device
Mirzabagherian et al. Temporal-spatial convolutional residual network for decoding attempted movement related EEG signals of subjects with spinal cord injury
CN116831874A (en) Lower limb rehabilitation device control method based on electromyographic signals
CN110604578A (en) Human hand and hand motion recognition method based on SEMG
CN114569143A (en) Myoelectric gesture recognition method based on attention mechanism and multi-feature fusion
Elbeshbeshy et al. Electromyography signal analysis and classification using time-frequency representations and deep learning
Polikar et al. Multiresolution wavelet analysis of ERPs for the detection of Alzheimer's disease
CN110464517B (en) Electromyographic signal identification method based on wavelet weighted arrangement entropy
CN114548165A (en) Electromyographic mode classification method capable of crossing users
Zohirov Classification Of Some Sensitive Motion Of Fingers To Create Modern Biointerface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant