CN114569143A - Myoelectric gesture recognition method based on attention mechanism and multi-feature fusion - Google Patents
- Publication number
- CN114569143A (application CN202210208157.8A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/389—Electromyography [EMG]
- A61B5/397—Analysis of electromyograms
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7203—Signal processing for noise prevention, reduction or removal
- A61B5/7235—Details of waveform analysis
- A61B5/725—Waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data involving training the classification device
Abstract
The invention discloses a myoelectric gesture recognition method based on an attention mechanism and multi-feature fusion, in the field of human biometric recognition, comprising the following steps: step 1, acquiring electromyographic data; step 2, preprocessing the acquired electromyographic data; step 3, segmenting the preprocessed electromyographic data; step 4, applying data augmentation to the segmented data; step 5, extracting artificial features from the augmented data; step 6, building a neural network to extract network features; and step 7, fusing the features by concatenating the extracted network features with the artificial features and inputting the result into a classifier to obtain the classification result. The proposed scheme combines the advantages of network-extracted features and handcrafted, experience-based features, and uses a dual attention mechanism to raise the attention paid to key information, making network recognition more accurate, improving robustness and raising classification accuracy.
Description
Technical Field
The invention relates to the field of human biometric recognition, and in particular to an electromyographic gesture recognition method based on an attention mechanism and multi-feature fusion.
Background
Electromyographic signals are the electrical signals that accompany muscle contraction. Action potentials originate in the motor cortex of the brain, travel through the spinal cord and peripheral nervous system to the muscle fibres, and finally, after the low-pass filtering effect of the skin, form a potential field on the skin surface as a weak electrical signal. Because electromyographic signals are produced whenever human muscles contract, they accurately represent the state of muscle contraction. The surface electromyographic signal collected from the skin is the temporal and spatial superposition of the action potentials of the motor units in many muscle fibres, so it can represent the movement intention and movement state of the human body. Moreover, the electromyographic signal precedes muscle force, typically appearing 30–150 milliseconds before the limb movement itself; this lead time can be used to predict human motion intention, an advantage that signals with greater lag, such as force signals, cannot provide. Electromyographic signals therefore have important research and practical value in clinical medicine, ergonomics, rehabilitation medicine, sports and related fields.
Mature sensors now exist for collecting surface electromyographic signals from the human body. In addition, to make the signals more accurate and reliable, integrated acquisition systems usually record electromyographic signals from multiple channels; multi-channel surface electromyography has become the main approach in electromyographic research because it covers more electromyographic features. However, electromyographic signals are weak, low in frequency, non-stationary and strongly subject-dependent, so processing them remains challenging.
Current electromyographic signal processing falls into two main categories. The first combines the time-frequency feature groups summarized manually in early medical research — features with good discriminative power — with a classifier to classify actions. The second extracts features and completes classification by means of machine learning and neural networks. Mainstream networks such as the convolutional neural network (CNN) and the recurrent neural network (RNN) classify electromyographic signals well, but usually only on the training data set; when the environment or the subject's muscle condition changes, the classification results are no longer satisfactory. Each category has its drawbacks. The first requires manual extraction and computation of feature values: training is fast, but robustness is low. The second, which extracts features with a neural network, not only adapts poorly to varied environments but also tends to over-simplify the features through successive convolutions, losing part of the feature information in the original signal.
In recent years there have been many studies on the analysis of electromyographic signals. In the Chinese patent application "Electromyographic signal gesture recognition method based on deep learning and attention mechanism" (publication No. CN108388348A), Gunn Weidong et al. show that a model combining a long short-term memory network (LSTM) with attention achieves good results; however, for multi-channel electromyographic signals a convolutional neural network (CNN) can additionally extract the feature relationships among channels, so the method still leaves room for improvement. In the Chinese patent application "Electromyographic gesture recognition method based on multi-feature fusion CNN" (publication No. CN111860410A), Guo Sword et al. abandon manually extracted features and use the network to extract features directly; because handcrafted features are not combined, the improvement in network robustness is limited. Moreover, most existing electromyography-based gesture classification methods ignore factors such as muscle fatigue that arise in practical environments; electromyographic signals recorded after muscle fatigue reduce classification accuracy, keeping these methods far from practical application. It is therefore necessary to design a recognition method robust enough that the classification result is little affected by human and environmental factors such as muscle fatigue.
Given the above problems, existing electromyographic recognition methods still leave much room for improvement. Meanwhile, there is as yet no good way to handle the strong influence of the human body and the environment on electromyographic signals, so electromyographic gesture classification remains mainly a laboratory research topic and is rarely applied in practice.
Therefore, those skilled in the art are working to develop a new myoelectric gesture recognition method with a high recognition rate that is less affected by the environment.
Disclosure of Invention
Traditional myoelectric gesture classification methods generally consider only network-extracted features, or rely solely on manually extracted features for classification and recognition; so far, no attempt has been made to combine network-extracted features with manual features for gesture classification. In addition, existing electromyographic gesture classification methods target only data acquired under ideal conditions — that is, factors such as muscle fatigue and electrode displacement during acquisition are not considered — so model robustness is poor, and recognition accuracy drops sharply once the data are subject to environmental interference. In view of these defects of the prior art, the technical problem to be solved by the invention is the low accuracy of electromyographic classification recognition and the strong sensitivity of existing recognition methods to environmental factors such as muscle fatigue.
To this end, the invention provides an electromyographic gesture recognition method based on an attention mechanism and multi-feature fusion, which fuses network-extracted features with manually extracted features and combines them with the attention mechanism of a convolutional neural network, thereby improving recognition accuracy.
Specifically, the myoelectric gesture recognition method based on attention mechanism and multi-feature fusion provided by the invention comprises the following steps:
step 1, acquiring electromyographic data;
step 2, preprocessing the acquired electromyographic data;
step 3, segmenting the preprocessed electromyographic data;
step 4, applying data augmentation to the segmented electromyographic data;
step 5, extracting artificial features from the augmented electromyographic data;
step 6, building a neural network to extract network features;
and step 7, fusing the features: the extracted network features and the artificial features are concatenated as vectors and input into a classifier to obtain the classification result.
Further, the step 1 comprises:
step 1.1, use an eight-channel electromyographic bracelet to acquire electromyographic data for several types of gestures from multiple subjects; the subjects are in a non-fatigued state, so the acquired data are the normal electromyographic data, and acquisition is repeated several times to reduce error;
step 1.2, additionally acquire electromyographic data from the subjects after muscle fatigue; these are the fatigue electromyographic data, used later for the network robustness test; muscle fatigue can be induced through exercise or load bearing.
Further, the step 2 comprises:
step 2.1, filter the electromyographic data to narrow the signal frequency range and suppress noise;
step 2.2, square the amplitude of each channel of the electromyographic data to obtain an energy value, sum the energy values of the eight channels, use one eighth of the peak energy as a threshold to divide the data into action-segment and rest-segment electromyographic data, and discard all rest-segment data.
Further, the step 3 comprises:
3.1, dividing the action section electromyographic data into a plurality of signal sections with fixed lengths by using a sliding window;
and 3.2, dividing the normal myoelectric data into a training set, a verification set and a test set, and taking all the fatigue myoelectric data as the test set.
Further, the step 4 includes performing data enhancement on the training set according to an expansion method of the time sequence signal, and selecting three methods to expand the original data, including dithering, scaling and amplitude warping.
Further, the step 5 includes computing a group of related artificial features for each input sample and concatenating them into a one-dimensional vector. Four different artificial features are used: peak average power (PAP), zero-crossing average power (ZCAP), waveform length (MWL) and the time-domain difference value (DBW). Because these four features change little under muscle fatigue, they help the model adapt to fatigue electromyographic data. The DBW feature is computed as
DBW = m0 − m2
where m0 and m2 are the zeroth- and second-order moments of the signal, H is the window length, Δⁿ denotes the n-th difference of the signal, s_i is the electromyographic signal at time i, and s[j] is the Fourier transform of the discrete signal.
Further, the step 6 comprises:
step 6.1, designing a multi-stream convolutional neural network model structure, wherein the convolutional neural network model structure consists of a plurality of parallel convolutional neural networks, and convolution kernels with different sizes are arranged, so that the convolutional neural network model structure can extract the network characteristics under different receptive fields;
and 6.2, introducing a dual attention mechanism into the convolutional neural network model structure, wherein the position attention module (PAM) layer and the channel attention module (CAM) layer enable the network features to cover more informative regions, improving classification accuracy.
Further, in step 2.1, a Butterworth filter is used, and the passband of the filtered signal is kept between 10 and 450 Hz.
Further, the length of the sliding window in step 3.1 is 200ms, and each moving step is 50 ms.
Further, in step 6.1, two parallel convolutional branches form the convolutional neural network, and rectangular convolution kernels are used.
The myoelectric gesture recognition method based on attention mechanism and multi-feature fusion provided by the invention at least has the following technical effects:
the technical scheme provided by the invention is an electromyographic gesture recognition method based on attention mechanism and multi-feature fusion, combines the advantages of network extraction features and artificial experience features, improves the attention degree of the dual attention mechanism to key information, enables network recognition to be more accurate, and adds four feature values which are less affected by muscle fatigue: the peak value average power (PAP), the Zero Crossing Average Power (ZCAP), the waveform length (MWL) and the time difference value (DBM) participate in network classification, so that the robustness of the model is improved, and the classification accuracy of muscle fatigue electromyographic signals is better.
The conception, the specific structure and the technical effects of the present invention will be further described with reference to the accompanying drawings to fully understand the objects, the features and the effects of the present invention.
Drawings
FIG. 1 is a flow chart of network model building according to a preferred embodiment of the present invention;
FIG. 2 is a detailed network architecture diagram of a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of a location attention module in accordance with a preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of a channel attention module in accordance with a preferred embodiment of the present invention.
Detailed Description
The technical contents of the preferred embodiments of the present invention will be more clearly and easily understood by referring to the drawings attached to the specification. The present invention may be embodied in many different forms of embodiments and the scope of the invention is not limited to the embodiments set forth herein.
Traditional myoelectric gesture classification methods generally consider only network-extracted features, or rely solely on manually extracted features for classification and recognition; so far, no attempt has been made to combine network-extracted features with manual features for gesture classification. In addition, existing electromyographic gesture classification methods target only data acquired under ideal conditions — factors such as muscle fatigue and electrode displacement during acquisition are not considered — so model robustness is poor, and recognition accuracy drops sharply once the data are subject to environmental interference.
To solve these problems, the invention adopts an electromyographic gesture recognition method based on an attention mechanism and multi-feature fusion. The method combines a convolutional neural network with a dual attention mechanism to extract features from the electromyographic signal, and additionally incorporates manually computed feature values; this improves recognition accuracy while providing a degree of robustness to fatigue electromyographic signals. The flow of building the whole model is shown in Fig. 1, and the specific implementation comprises the following steps:
step 1, acquiring electromyographic data;
step 2, preprocessing the acquired electromyographic data;
step 3, segmenting the preprocessed electromyographic data;
step 4, applying data augmentation to the segmented electromyographic data;
step 5, extracting artificial features from the augmented electromyographic data;
step 6, building a neural network to extract network features;
and step 7, fusing the features: the extracted network features and the artificial features are concatenated as vectors and input into a classifier to obtain the classification result.
Wherein, step 1 includes:
step 1.1, use an eight-channel electromyographic bracelet to acquire electromyographic data for several types of gestures from multiple subjects; the subjects are in a non-fatigued state, so the acquired data are the normal electromyographic data, and acquisition is repeated several times to reduce error;
step 1.2, additionally acquire electromyographic data from the subjects after muscle fatigue; these are the fatigue electromyographic data, used later for the network robustness test; muscle fatigue can be induced through exercise or load bearing.
Specifically, in step 1 self-collected data serve as the data set. An eight-channel electromyographic bracelet collects the electromyographic signals of six different gestures from multiple subjects: fist, open hand, OK, pistol, and inward-bending and outward-bending gestures. The electromyographic signals of the same six gestures in the muscle-fatigue state are collected additionally and added to the data set. The normal electromyographic data serve as the training input, and the fatigue electromyographic data serve as a test set. Before each fatigue acquisition session, muscle fatigue was induced by lifting a dumbbell for ten minutes.
Wherein, step 2 includes:
step 2.1, filter the electromyographic data to narrow the signal frequency range and suppress noise;
step 2.2, square the amplitude of each channel of the electromyographic data to obtain an energy value, sum the energy values of the eight channels, use one eighth of the peak energy as a threshold to divide the data into action-segment and rest-segment electromyographic data, and discard all rest-segment data.
Specifically, in step 2 Butterworth filtering is first applied to all acquired electromyographic data, retaining the 10–450 Hz band; this filters out noise while preserving the portion of the electromyographic signal with the better signal-to-noise ratio.
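As an illustration, this band-pass step can be sketched with SciPy's Butterworth design. The patent does not state the sampling rate; 1000 Hz is assumed here, and `bandpass_emg` is a hypothetical helper, not the patent's implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_emg(data, fs=1000.0, low=10.0, high=450.0, order=4):
    """Zero-phase Butterworth band-pass applied per channel.
    data: (n_samples, n_channels); fs is an assumed sampling rate."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="bandpass")
    return filtfilt(b, a, data, axis=0)  # forward-backward pass avoids phase distortion

# demo: a 2 Hz drift (out of band) plus a 100 Hz component (in band)
t = np.arange(0, 1, 1 / 1000.0)
raw = (np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 100 * t))[:, None]
clean = bandpass_emg(raw)
```

After filtering, the 2 Hz drift is strongly attenuated while the 100 Hz component passes nearly unchanged.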
In addition, because there are rest intervals between movements during acquisition, the rest segments must be separated out so that only the electromyographic signals recorded during movement are retained. The segmentation works as follows: square the amplitude of each channel to obtain its energy, sum the energies of all eight channels at each time point, and take 1/8 of the peak energy of the whole signal as the segmentation threshold. Samples whose summed energy exceeds the threshold are kept as action-segment signal; the remainder are removed.
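A minimal NumPy sketch of this energy-threshold segmentation (the function name and the synthetic rest/action data are illustrative, not from the patent):

```python
import numpy as np

def split_active_segments(emg, thresh_ratio=1 / 8):
    """Keep only 'action' samples whose summed per-channel energy exceeds
    thresh_ratio * peak energy (the patent uses 1/8). emg: (n, channels)."""
    energy = (emg ** 2).sum(axis=1)          # squared amplitude, summed over channels
    mask = energy > thresh_ratio * energy.max()
    return emg[mask], mask

rng = np.random.default_rng(0)
rest = 0.05 * rng.standard_normal((100, 8))   # low-amplitude rest segment
action = rng.standard_normal((100, 8))        # high-amplitude action segment
sig = np.vstack([rest, action, rest])
active, mask = split_active_segments(sig)
```

On this toy signal the retained indices all fall inside the middle (action) block.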
Wherein, step 3 includes:
3.1, dividing the action section electromyographic data into a plurality of signal sections with fixed lengths by using a sliding window;
and 3.2, dividing the normal myoelectric data into a training set, a verification set and a test set, and taking all the fatigue myoelectric data as the test set.
Specifically, step 3 windows the action-segment electromyographic data: the signal is cut into 200 ms windows with a 50 ms sliding step. The normal electromyographic data are then split into training, validation and test sets in an 8:1:1 ratio, while all fatigue electromyographic data are assigned to the test set to verify the robustness of the model.
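The windowing and split can be sketched as follows (again assuming a 1000 Hz sampling rate; `sliding_windows` is a hypothetical helper):

```python
import numpy as np

def sliding_windows(seg, fs=1000, win_ms=200, step_ms=50):
    """Cut an action segment into fixed 200 ms windows with a 50 ms stride."""
    win, step = int(fs * win_ms / 1000), int(fs * step_ms / 1000)
    return np.stack([seg[i:i + win] for i in range(0, len(seg) - win + 1, step)])

seg = np.random.default_rng(1).standard_normal((1000, 8))  # 1 s of 8-channel EMG
wins = sliding_windows(seg)

# 8:1:1 train / validation / test split over the windows
n = len(wins)
train = wins[:int(0.8 * n)]
val = wins[int(0.8 * n):int(0.9 * n)]
test = wins[int(0.9 * n):]
```

One second of signal yields 17 overlapping windows of shape (200, 8).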
Step 4 applies data augmentation to the training set using expansion methods for time-series signals; three methods — jittering, scaling and magnitude warping — are selected to expand the original data, so the training set grows to four times its original size (the original windows plus three augmented copies).
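The three augmentations are standard time-series transforms; a compact sketch (parameter values are editorial choices, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(42)

def jitter(x, sigma=0.05):
    """Additive Gaussian noise."""
    return x + rng.normal(0.0, sigma, x.shape)

def scale(x, sigma=0.1):
    """Multiply each channel by a random factor drawn around 1."""
    return x * rng.normal(1.0, sigma, (1, x.shape[1]))

def magnitude_warp(x, sigma=0.2, knots=4):
    """Multiply by a smooth random curve interpolated from a few knots."""
    steps = np.linspace(0, x.shape[0] - 1, knots)
    curve = np.interp(np.arange(x.shape[0]), steps, rng.normal(1.0, sigma, knots))
    return x * curve[:, None]

win = rng.standard_normal((200, 8))                  # one EMG window
augmented = [win, jitter(win), scale(win), magnitude_warp(win)]  # 4x the data
```

Each transform preserves the window shape while perturbing the amplitude, which is why the label of the original window carries over.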
Wherein, step 5 includes computing a group of related artificial features for each input sample and concatenating them into a one-dimensional vector. Four different artificial features are used: peak average power (PAP), zero-crossing average power (ZCAP), waveform length (MWL) and the time-domain difference value (DBW). Because these four features change little under muscle fatigue, they help the model adapt to fatigue electromyographic data. The DBW feature is computed as
DBW = m0 − m2
where m0 and m2 are the zeroth- and second-order moments of the signal, H is the window length, Δⁿ denotes the n-th difference of the signal, s_i is the electromyographic signal at time i, and s[j] is the Fourier transform of the discrete signal.
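For illustration only — the formulas below are standard EMG surrogates chosen by the editor, not necessarily the patent's exact definitions of PAP, ZCAP, MWL and DBW:

```python
import numpy as np

def handcrafted_features(win, fs=1000.0):
    """One feature vector per window (H = window length). Surrogates used:
    mean power (~PAP), zero-crossing rate (~ZCAP), normalized waveform
    length (MWL), and a spectral-moment difference m0 - m2 (~DBW)."""
    H = win.shape[0]
    feats = []
    for ch in range(win.shape[1]):
        s = win[:, ch]
        power = np.mean(s ** 2)                    # stand-in for peak average power
        zc = np.mean(np.diff(np.sign(s)) != 0)     # zero-crossing rate
        mwl = np.sum(np.abs(np.diff(s))) / H       # waveform length per sample
        spec = np.abs(np.fft.rfft(s)) ** 2
        f = np.fft.rfftfreq(H, 1 / fs)
        m0, m2 = spec.sum(), (f ** 2 * spec).sum()
        feats += [power, zc, mwl, m0 - m2]         # DBW = m0 - m2
    return np.asarray(feats)

vec = handcrafted_features(np.random.default_rng(3).standard_normal((200, 8)))
```

With 8 channels and 4 features each, the result is a 32-dimensional vector ready for concatenation with the network features.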
Wherein, step 6 includes:
step 6.1, designing a multi-stream convolutional neural network model structure, wherein the convolutional neural network model structure consists of a plurality of parallel convolutional neural networks, and convolutional kernels with different sizes are arranged, so that the convolutional neural network model structure can extract network characteristics under different receptive fields;
and 6.2, introducing a dual attention mechanism into the convolutional neural network model structure, wherein the position attention module (PAM) layer and the channel attention module (CAM) layer enable the network features to cover more informative regions, improving the classification accuracy.
Specifically, step 6 builds a two-stream convolutional neural network consisting of two parallel convolutional branches. Each branch contains three convolutional layers, a pooling layer and a fully connected layer; the convolution kernels differ in size between branches so that features at different scales are extracted, and rectangular kernels are used. The detailed network structure is shown in Fig. 2.
In addition, a dual attention mechanism is inserted after each convolutional layer: the position attention module (PAM) shown in Fig. 3 and the channel attention module (CAM) shown in Fig. 4. The attention layers of step 6.2 are added after every convolutional layer except the last.
In Fig. 3, A denotes the input feature map, a three-dimensional array of height, width and channel count. B, C and D are obtained from A by 1×1 convolutions, which reduce the channel count and the subsequent computational cost. S is the spatial attention map obtained by multiplying C with the transpose of B; the attention-weighted sum of D over S, added to A, gives the new feature map E. In Fig. 4, F denotes the input feature map, again height × width × channels. Multiplying F by its own transpose yields the channel attention map X, which is applied to F and added back to F to give the new feature map G.
The positional attention calculation formula is as follows:
the channel attention calculation formula is as follows:
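The two modules above can be sketched in NumPy. This is a simplified editorial sketch of DANet-style attention: the 1×1 convolutions are modelled as channel-mixing matrices, and the learnable scale factors appear as `alpha` and `beta`; it is not the patent's exact implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(A, Wb, Wc, Wd, alpha=1.0):
    """PAM sketch. A: (C, H, W); Wb, Wc: (C', C) and Wd: (C, C) stand in
    for the 1x1 convolutions producing B, C and D."""
    Cch, H, W = A.shape
    flat = A.reshape(Cch, H * W)            # (C, N), N = H*W spatial positions
    B, Cf, D = Wb @ flat, Wc @ flat, Wd @ flat
    S = softmax(B.T @ Cf, axis=1)           # (N, N) spatial attention map
    return (alpha * (D @ S.T) + flat).reshape(Cch, H, W)  # weighted sum + residual

def channel_attention(F, beta=1.0):
    """CAM sketch: attention over channels, computed directly from F."""
    Cch, H, W = F.shape
    flat = F.reshape(Cch, H * W)
    X = softmax(flat @ flat.T, axis=1)      # (C, C) channel attention map
    return (beta * (X @ flat) + flat).reshape(Cch, H, W)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3, 3))
E = position_attention(A, rng.standard_normal((2, 4)),
                       rng.standard_normal((2, 4)), rng.standard_normal((4, 4)))
G = channel_attention(A)
```

With the scale factor set to zero, each module reduces to the identity, matching the residual form of the weighted summation described above.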
and (7) connecting the network characteristic vectors extracted by the network full-connection layer with the artificial characteristic vectors in series, combining the input vectors into a one-dimensional array, and inputting the one-dimensional array into a Support Vector Machine (SVM) to obtain a final classification result, wherein only normal electromyographic signal data is used as a training set in the experimental process, and partial normal electromyographic signal data and all fatigue electromyographic signal data are used as a test set.
The model provided by the invention achieves high classification accuracy on normal electromyographic signals while retaining adaptability to signals recorded after muscle fatigue. It addresses the problems of low electromyographic gesture classification accuracy and strong model sensitivity to post-fatigue data, yielding a more robust electromyographic gesture classification model.
The preferred embodiments of the invention have been described in detail above. It should be understood that those skilled in the art can devise numerous modifications and variations according to the concept of the invention without creative effort. Therefore, all technical solutions that can be obtained by those skilled in the art through logical analysis, reasoning or limited experiments based on the prior art and the concept of the invention shall fall within the scope of protection defined by the claims.
Claims (10)
1. An attention mechanism and multi-feature fusion based myoelectric gesture recognition method is characterized by comprising the following steps:
step 1, acquiring electromyographic data;
step 2, preprocessing the collected electromyographic data;
step 3, carrying out data segmentation on the preprocessed electromyographic data;
step 4, performing data enhancement on the electromyographic data after data segmentation;
step 5, extracting artificial features of the electromyographic data after data enhancement;
step 6, building a neural network to extract network characteristics;
and 7, performing feature fusion, namely performing vector series connection on the extracted network features and the artificial features, and inputting the network features and the artificial features into a classifier to obtain a classification result.
2. The myoelectric gesture recognition method based on attention mechanism and multi-feature fusion according to claim 1, wherein the step 1 comprises:
step 1.1, acquiring electromyographic data of a plurality of gesture types from a plurality of subjects using an eight-channel electromyographic armband, wherein the subjects are in a non-fatigued muscle state, the acquired data are normal electromyographic data, and the acquisition is repeated a plurality of times to reduce error;
step 1.2, additionally acquiring electromyographic data from the subjects after muscle fatigue, the acquired data being fatigue electromyographic data used for a subsequent network robustness test, wherein muscle fatigue can be induced in the subjects through exercise or loading.
3. The myoelectric gesture recognition method based on attention mechanism and multi-feature fusion according to claim 2, wherein the step 2 comprises:
step 2.1, filtering the electromyographic data to narrow the frequency range of the signal and suppress noise;
and step 2.2, taking the square of the amplitude of each channel of the electromyographic data as an energy value, summing the energy values of the eight channels, dividing the electromyographic data into action-segment data and rest-segment data using one eighth of the energy value as a threshold, and discarding all rest-segment data.
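A minimal sketch of the energy-thresholding idea in step 2.2, using numpy. The per-sample energy is the squared amplitude summed over the eight channels; the exact reference energy from which "one eighth" is taken is not fully specified in the text, so the peak energy is assumed here for illustration.

```python
import numpy as np

def split_active_segments(emg, threshold_ratio=1.0 / 8.0):
    """Keep action-segment samples of an eight-channel sEMG recording.

    emg: array of shape (n_samples, 8).
    A sample is kept when its energy (squared amplitude summed over the
    eight channels) exceeds threshold_ratio times the peak energy.
    The peak-energy reference is an assumption, not stated in the patent.
    """
    energy = np.sum(emg ** 2, axis=1)           # squared amplitude, summed over channels
    threshold = threshold_ratio * energy.max()  # one eighth of the (peak) energy value
    return emg[energy > threshold]

rng = np.random.default_rng(0)
rest = 0.01 * rng.standard_normal((100, 8))   # low-amplitude rest segment
action = 1.0 * rng.standard_normal((100, 8))  # high-amplitude action segment
kept = split_active_segments(np.vstack([rest, action]))
```

With these synthetic amplitudes, the low-energy rest samples fall well below the threshold and are discarded, while most action samples are retained.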
4. The myoelectric gesture recognition method based on attention mechanism and multi-feature fusion according to claim 3, wherein the step 3 comprises:
step 3.1, dividing the action-segment electromyographic data into a plurality of fixed-length signal segments using a sliding window;
and step 3.2, dividing the normal electromyographic data into a training set, a validation set and a test set, and using all the fatigue electromyographic data as a test set.
5. The myoelectric gesture recognition method based on attention mechanism and multi-feature fusion of claim 4, wherein step 4 comprises performing data enhancement on the training set using expansion methods for time-series signals, three of which are selected to augment the original data: jittering, scaling and magnitude warping.
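The three augmentation methods named in claim 5 can be sketched as follows. The parameter values (noise sigma, scaling sigma, number of warping knots) are illustrative assumptions; the patent does not specify them.

```python
import numpy as np

rng = np.random.default_rng(42)

def jitter(x, sigma=0.05):
    """Jittering: add small Gaussian noise to every sample."""
    return x + rng.normal(0.0, sigma, x.shape)

def scaling(x, sigma=0.1):
    """Scaling: multiply each channel by a random factor close to 1."""
    factors = rng.normal(1.0, sigma, (1, x.shape[1]))
    return x * factors

def magnitude_warp(x, sigma=0.2, knots=4):
    """Magnitude warping: modulate the amplitude with a smooth random curve."""
    steps = np.linspace(0, x.shape[0] - 1, knots)
    values = rng.normal(1.0, sigma, knots)
    curve = np.interp(np.arange(x.shape[0]), steps, values)
    return x * curve[:, None]

window = rng.standard_normal((40, 8))  # one 8-channel sEMG window
augmented = [jitter(window), scaling(window), magnitude_warp(window)]
```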
6. The myoelectric gesture recognition method based on attention mechanism and multi-feature fusion of claim 5, wherein step 5 comprises calculating a group of related artificial features for each input sample and splicing them into a one-dimensional vector, the method comprising four different artificial features, specifically: peak average power (PAP), zero-crossing average power (ZCAP), waveform length (MWL) and time-domain difference value (DBM). Because these four artificial features vary little under muscle fatigue, they are used to adapt to the fatigue electromyographic data. The four artificial features are calculated according to the following formula:
DBM = m0 - m2
where H is the window length, Δ^n denotes the n-th derivative of the signal, s_i denotes the electromyographic signal at time i, and s[j] denotes the Fourier transform of the discrete signal.
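The patent's formula images for PAP, ZCAP and MWL are not reproduced in the text, so the sketch below uses common sEMG conventions as stated assumptions: waveform length as the sum of absolute successive differences, and DBM = m0 - m2 with m0 and m2 taken to be the zeroth- and second-order spectral moments of the window.

```python
import numpy as np

def waveform_length(s):
    """Common waveform-length definition: sum of |s[i+1] - s[i]| (assumed form)."""
    return np.sum(np.abs(np.diff(s)))

def dbm(s):
    """DBM = m0 - m2, assuming m0 and m2 are spectral moments of the window."""
    spectrum = np.abs(np.fft.rfft(s)) ** 2        # power spectrum of s[j]
    freqs = np.fft.rfftfreq(len(s))               # normalized frequencies
    m0 = np.sum(spectrum)                         # zeroth-order spectral moment
    m2 = np.sum((freqs ** 2) * spectrum)          # second-order spectral moment
    return m0 - m2

t = np.arange(200)
signal = np.sin(0.1 * t)                          # synthetic single-channel window
features = np.array([waveform_length(signal), dbm(signal)])
```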
7. The myoelectric gesture recognition method based on attention mechanism and multi-feature fusion according to claim 6, wherein the step 6 comprises:
step 6.1, designing a multi-stream convolutional neural network model structure consisting of a plurality of parallel convolutional neural networks with convolution kernels of different sizes, so that network features can be extracted under different receptive fields;
and step 6.2, introducing a dual attention mechanism into the convolutional neural network model structure, wherein the position attention module (PAM) layer and the channel attention module (CAM) layer of the dual attention mechanism allow the network features to cover more regions, thereby improving classification accuracy.
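A numpy sketch of the channel-attention idea in step 6.2, in the spirit of the CAM of dual-attention networks: a channel-by-channel affinity matrix reweights each channel by its similarity to the others, with a residual connection back to the input. This is an illustration of the mechanism, not the patent's trained module.

```python
import numpy as np

def channel_attention(features, beta=1.0):
    """CAM-style channel attention over a flattened feature map.

    features: array of shape (C, N), C channels by N spatial positions.
    Computes a row-wise softmax over the (C, C) channel affinity matrix,
    reweights the channels with it, and adds the input back residually.
    """
    affinity = features @ features.T                        # (C, C) channel affinities
    affinity = affinity - affinity.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(affinity)
    weights = weights / weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return beta * (weights @ features) + features

fmap = np.random.default_rng(1).standard_normal((16, 64))   # 16 channels, 64 positions
out = channel_attention(fmap)
```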
8. The myoelectric gesture recognition method based on attention mechanism and multi-feature fusion as claimed in claim 3, wherein in step 2.1 a Butterworth filter is adopted, and the passband of the filtered signal is kept at 10-450 Hz.
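A sketch of the 10-450 Hz Butterworth band-pass filter of claim 8 using scipy. The sampling rate and filter order are assumptions (1000 Hz keeps 450 Hz below the Nyquist frequency; the patent specifies neither).

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0  # assumed sampling rate, not stated in the patent
b, a = butter(N=4, Wn=[10.0, 450.0], btype="bandpass", fs=fs)

t = np.arange(0, 1.0, 1.0 / fs)
# Synthetic signal: 2 Hz baseline drift plus a 100 Hz sEMG-like tone.
raw = np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 100 * t)
filtered = filtfilt(b, a, raw)  # zero-phase filtering removes the drift
```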
9. The myoelectric gesture recognition method based on attention mechanism and multi-feature fusion of claim 4, wherein the length of the sliding window in step 3.1 is 200 ms, and each moving step is 50 ms.
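The 200 ms window with 50 ms step of claim 9 can be sketched as below. The sampling rate is an assumption (1000 Hz), giving a 200-sample window moved in 50-sample steps.

```python
import numpy as np

fs = 1000                    # assumed sampling rate
win_len = int(0.200 * fs)    # 200 ms window -> 200 samples
step = int(0.050 * fs)       # 50 ms step    -> 50 samples

emg = np.random.default_rng(7).standard_normal((1000, 8))  # 1 s of 8-channel data
segments = np.stack([emg[start:start + win_len]
                     for start in range(0, emg.shape[0] - win_len + 1, step)])
```

For 1 s of data this yields (1000 - 200) / 50 + 1 = 17 overlapping segments.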
10. The myoelectric gesture recognition method based on attention mechanism and multi-feature fusion as claimed in claim 7, wherein in step 6.1 the model is composed of two parallel convolutional neural networks, and the convolution kernels are rectangular.
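A fixed-kernel illustration of the two parallel streams with rectangular kernels from claim 10, using scipy's 2-D convolution. The kernel sizes (5x1 along time, 1x3 along channels) are illustrative assumptions; a real implementation would learn the kernels in a deep learning framework.

```python
import numpy as np
from scipy.signal import convolve2d

x = np.random.default_rng(3).standard_normal((200, 8))  # time x channels window

k_time = np.ones((5, 1)) / 5.0   # rectangular kernel along the time axis
k_chan = np.ones((1, 3)) / 3.0   # rectangular kernel along the channel axis

stream_a = convolve2d(x, k_time, mode="valid")  # stream 1: temporal receptive field
stream_b = convolve2d(x, k_chan, mode="valid")  # stream 2: cross-channel receptive field

# The two streams' feature maps are flattened and concatenated, mirroring
# the multi-stream fusion before the fully connected layer.
merged = np.concatenate([stream_a.ravel(), stream_b.ravel()])
```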
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210208157.8A CN114569143A (en) | 2022-03-03 | 2022-03-03 | Myoelectric gesture recognition method based on attention mechanism and multi-feature fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114569143A true CN114569143A (en) | 2022-06-03 |
Family
ID=81776127
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210208157.8A Pending CN114569143A (en) | 2022-03-03 | 2022-03-03 | Myoelectric gesture recognition method based on attention mechanism and multi-feature fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114569143A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117311513A (en) * | 2023-10-26 | 2023-12-29 | 昆明理工大学 | Low sampling rate myoelectric gesture recognition method combining convolutional neural network with subdomain adaptation |
CN117311513B (en) * | 2023-10-26 | 2024-03-08 | 昆明理工大学 | Low sampling rate myoelectric gesture recognition method combining convolutional neural network with subdomain adaptation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108491077B (en) | Surface electromyographic signal gesture recognition method based on multi-stream divide-and-conquer convolutional neural network | |
CN112861604B (en) | Myoelectric action recognition and control method irrelevant to user | |
CN106980367A (en) | A kind of gesture identification method based on myoelectricity topographic map | |
Hermosilla et al. | Shallow convolutional network excel for classifying motor imagery EEG in BCI applications | |
CN112043473B (en) | Parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception of intelligent artificial limb | |
CN106127191B (en) | Brain electricity classification method based on WAVELET PACKET DECOMPOSITION and logistic regression | |
CN108681685A (en) | A kind of body work intension recognizing method based on human body surface myoelectric signal | |
CN109598222A (en) | Wavelet neural network Mental imagery brain electricity classification method based on the enhancing of EEMD data | |
CN113111831A (en) | Gesture recognition technology based on multi-mode information fusion | |
Li et al. | EEG signal classification method based on feature priority analysis and CNN | |
CN111931656B (en) | User independent motor imagery classification model training method based on transfer learning | |
CN107822629A (en) | The detection method of extremity surface myoelectricity axle | |
CN114159079B (en) | Multi-type muscle fatigue detection method based on feature extraction and GRU deep learning model | |
CN114569143A (en) | Myoelectric gesture recognition method based on attention mechanism and multi-feature fusion | |
CN112036357B (en) | Upper limb action recognition method and system based on surface electromyographic signals | |
CN110738093B (en) | Classification method based on improved small world echo state network electromyography | |
CN113128384A (en) | Brain-computer interface software key technical method of stroke rehabilitation system based on deep learning | |
CN110321856B (en) | Time-frequency multi-scale divergence CSP brain-computer interface method and device | |
CN112998725A (en) | Rehabilitation method and system of brain-computer interface technology based on motion observation | |
CN111783719A (en) | Myoelectric control method and device | |
CN116910464A (en) | Myoelectric signal prosthetic hand control system and method | |
CN116831874A (en) | Lower limb rehabilitation device control method based on electromyographic signals | |
CN109144277B (en) | Method for constructing intelligent vehicle controlled by brain based on machine learning | |
CN114098768B (en) | Cross-individual surface electromyographic signal gesture recognition method based on dynamic threshold and EasyTL | |
CN116522106A (en) | Motor imagery electroencephalogram signal classification method based on transfer learning parallel multi-scale filter bank time domain convolution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||