CN110658915A - Electromyographic signal gesture recognition method based on dual-stream network - Google Patents
Electromyographic signal gesture recognition method based on dual-stream network
- Publication number
- CN110658915A (application CN201910672070.4A)
- Authority
- CN
- China
- Prior art keywords
- double
- data
- network model
- gesture recognition
- cnn
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
Abstract
A surface electromyographic (sEMG) signal gesture recognition method based on a dual-stream network comprises the following steps: 1) Collect electromyographic signals of various gestures from multiple subjects. Wearing a 16-channel acquisition device, each subject holds each gesture action for 12 seconds; 10 seconds of steady-state data are extracted and preprocessed with a 300 ms time window, so that each electromyogram frame has size 300 × 16. A training set is constructed from these frames. 2) Build a dual-stream network model consisting of three parts: a multilayer CNN responsible for extracting spatial features, a multilayer LSTM responsible for learning temporal features, and a feature-merging layer responsible for feature fusion. 3) Train the dual-stream network model by gradient-descent optimization with the Adam optimizer until it converges. 4) Use the trained dual-stream network model to perform gesture recognition on arm sEMG.
Description
Technical Field
The invention relates to the fields of human-computer interaction and artificial intelligence, in particular to an electromyographic signal gesture recognition method based on a dual-stream network, which can be applied to industrial control, medical prosthetics, and similar areas.
Background
Surface electromyographic signals (sEMG) can be classified by a deep learning model, converting the electromyographic signals into commands that convey the user's movement intention to a machine and thereby forming a complete myoelectric control system. Gesture recognition based on surface electromyographic signals is the core of such a system. In application scenarios, sEMG is susceptible to external disturbances such as electrode shift and variations in muscle contraction force, all of which affect the accuracy of the recognition model. Application fields of sEMG, such as intelligent prosthetics in the clinical domain and human-machine control in industry, place high demands on recognition accuracy. Therefore, sEMG-based gesture recognition still has room for improvement.
sEMG-based gesture recognition is naturally formulated as a pattern recognition problem, usually trained as a classifier with supervised learning. Pattern recognition on sEMG signals involves three main stages: data preprocessing, feature extraction, and classification. EMG features fall into four main types: time-domain (TD) features, spectral or frequency-domain (FD) features, time-scale or time-frequency-domain features, and parametric model analysis. In traditional approaches, after the electromyographic features are extracted, classification is performed with a classical machine learning algorithm such as linear discriminant analysis (LDA), support vector machine (SVM), K-nearest neighbors (KNN), or a Gaussian mixture model (GMM).
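As a concrete illustration of this classical pipeline, the sketch below computes four common time-domain features (mean absolute value, waveform length, zero crossings, slope sign changes) per channel with NumPy; the resulting vector could then be fed to any of the classifiers listed above. The exact feature set and the zero-crossing noise threshold are illustrative choices, not taken from the patent.

```python
import numpy as np

def td_features(window, zc_thresh=0.01):
    """Classic time-domain (TD) features per channel for one sEMG window.

    window: (n_samples, n_channels) array of one 300 ms frame.
    Returns a (4 * n_channels,) vector: MAV, WL, ZC, SSC per channel.
    """
    mav = np.mean(np.abs(window), axis=0)                 # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)  # waveform length
    # zero crossings, counted only above a small noise threshold
    zc = np.sum((window[:-1] * window[1:] < 0) &
                (np.abs(window[:-1] - window[1:]) > zc_thresh), axis=0)
    # slope sign changes: sign flips of the first difference
    d = np.diff(window, axis=0)
    ssc = np.sum(d[:-1] * d[1:] < 0, axis=0)
    return np.concatenate([mav, wl, zc, ssc])
```

For a 16-channel frame this yields a 64-dimensional feature vector, which is the kind of hand-crafted representation the LDA/SVM/KNN/GMM classifiers above operate on.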
Building recognition models with conventional machine learning algorithms has three major disadvantages. First, designers need to hand-craft a large number of features, and finding the best feature combination is time-consuming and laborious. Second, the best feature combination for one scenario does not necessarily transfer to another. Finally, biological signals are complex, so feature design requires expert domain knowledge.
In recent years, deep learning has achieved remarkable results in image classification, face recognition, and speech recognition. Deep learning, also known as feature learning, can automatically learn effective features from input data. Classical network architectures include the convolutional neural network (CNN) and the recurrent neural network (RNN). Many studies combine sEMG with deep learning. The general idea is to convert multi-frame sEMG signals into gray-level images, recasting sEMG gesture recognition as an image classification problem. Manfreda et al. found that a CNN with a simple structure outperforms classical classification methods. Geng et al. constructed a deep convolutional network applied to high-density sEMG signals. Acharya et al. used a convolutional network to analyze electroencephalographic signals for diagnosing epilepsy. Xia et al. processed the sEMG signal with a CNN, converted it into time-frequency frames fed to an RNN, and thereby implemented a gesture classification model.
In previous research, CNNs were mostly used to extract the spatial features of sEMG: multi-frame ordered electromyographic signals are combined into an electromyogram, from which the spatially effective information of the data can be extracted. Although this achieves good results, sEMG is a time series, and its internal temporal correlation is neglected.
Disclosure of Invention
The invention provides an electromyographic signal gesture recognition method based on a dual-stream network, which aims to overcome the defects of the prior art.
The invention designs a dual-stream network structure combining the structural characteristics of a convolutional neural network (CNN) and a long short-term memory network (LSTM). The upper stream is a multilayer CNN comprising convolutional layers, pooling layers, and a fully connected layer, which extracts the spatial features of the electromyogram; the lower stream consists of multiple LSTM layers connected in series, which extract the temporal features of the sEMG sequence. The network can thus extract the spatio-temporal characteristics of the electromyographic data. Performing gesture recognition and classification with the spatio-temporal features of the sEMG sequence improves classification accuracy and achieves good results in a real-time classification system.
The technical scheme for realizing the purpose of the invention is as follows:
An electromyographic signal gesture recognition method based on a dual-stream network is characterized by comprising the following steps:
Step 1: collect electromyographic signals of various gestures from multiple subjects. Wearing a 16-channel acquisition device, each subject performs 6 repetitions, each gesture motion lasting 12 seconds. Extract 10 seconds of steady-state data, preprocess the data, and select a 300 ms time window, so that each electromyogram frame has size 300 × 16; construct the training set from these frames;
Step 2: construct a dual-stream network model consisting of three parts. The first part is responsible for extracting spatial features, the second for learning temporal features, and the last for feature fusion;
The first part of the model is a CNN with a five-layer structure; after the input sEMG data (300 × 16) are processed by the CNN, the extracted spatial feature dimension is 128 × 1. The LSTM part of the model has three layers; stacking the layers extracts the effective information of the time series. Each LSTM layer has 128 units, and the temporal feature dimension after the input sEMG data (300 × 16) are processed by the LSTM is 128 × 1. The first and second parts form a parallel structure and process the electromyographic data simultaneously; the spatial features extracted by the CNN part and the temporal features extracted by the LSTM part are then fused in a feature-merging layer to obtain more comprehensive features. Two fully connected layers then fuse the effective information of the two feature sets into spatio-temporal features, and a Softmax layer produces the classification probability estimate;
Step 3: train the dual-stream network model, performing gradient-descent optimization with the Adam optimizer. The training loss function is:

L(θ) = −Σ_i y_i·log(a_i) + (λ/2)·||θ||²

where a is the output of the model, y is the true label of the sample, θ comprises all parameters of the network model, and the term (λ/2)·||θ||² is the L2 regularization, which effectively prevents overfitting;
Step 4: perform gesture recognition on arm sEMG with the trained dual-stream network model.
Further, ReLU and batch normalization are applied after each input layer and hidden layer of the CNN in step 2.
The invention has the advantages that:
1. The method is reasonably designed: a deep neural network is applied to electromyographic gesture recognition, and the model has strong plasticity and recognition capability. Compared with traditional machine learning methods, it requires no elaborate feature engineering and greatly improves recognition accuracy, which gives it significant practical value.
2. The invention combines the characteristics of CNN and LSTM into a novel dual-stream network model for electromyographic gesture recognition. The model extracts the temporal and spatial features of the sEMG data simultaneously, improving the recognition accuracy for electromyographic signals generated by different gestures. With larger training datasets and more gestures, the dual-stream model improves recognition accuracy markedly compared with other neural network models, and it adapts better.
Drawings
Fig. 1 shows the non-invasive wearable myoelectric acquisition device used to collect data for the present invention.
Fig. 2 shows the five gestures for which electromyographic signals are collected in the present invention.
Fig. 3 is a diagram of the dual-stream network architecture of the present invention.
Fig. 4 compares the gesture recognition classification of the present invention on the self-collected dataset with other machine learning algorithms.
Fig. 5(a) shows the recognition accuracy of the present invention when classifying gestures on the NinaproDB1 dataset with training sets of different sizes.
Fig. 5(b) shows the recognition accuracy of the present invention when classifying different numbers of gestures on the NinaproDB1 dataset.
Fig. 6(a) is a three-dimensional visualization of the raw data of the electromyographic signal dataset.
Fig. 6(b) is a three-dimensional visualization of the features extracted by the CNN.
Fig. 6(c) is a three-dimensional visualization of the features extracted by the present invention.
Detailed Description
The technical solution of the invention is further explained with reference to the drawings.
An electromyographic signal gesture recognition method based on a dual-stream network comprises the following steps:
Step 1: using the non-invasive wearable electromyography acquisition device shown in Fig. 1, sEMG data for the five gestures shown in Fig. 2 were collected from 8 healthy volunteers, yielding 240 sEMG samples in total. Each sample contains 195 frames of electromyogram data. Each frame consists of 300 milliseconds of sEMG, and the acquisition device has 16 electrode channels, so the dimension of each electromyogram frame is 300 × 16.
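The framing above can be sketched as a sliding-window routine. A 300 ms window producing 300 × 16 frames implies a 1 kHz sampling rate, and a 50 ms window step then reproduces the 195 frames per 10 s sample stated in the text; the sampling rate, the step, and the centered steady-state trim are inferences for illustration, not specified by the patent.

```python
import numpy as np

def make_frames(recording, fs=1000, steady_s=10, win_ms=300, step_ms=50):
    """Trim a 12 s gesture recording to its 10 s steady-state middle and cut
    it into overlapping 300 ms frames of shape (win, n_channels).

    fs (1 kHz) and step_ms (50 ms) are inferred: a 300 ms window must span
    300 samples to give 300 x 16 frames, and a 50 ms step yields the 195
    frames per sample reported in the description.
    """
    trim = (recording.shape[0] - steady_s * fs) // 2     # drop transient edges
    steady = recording[trim:trim + steady_s * fs]
    win = win_ms * fs // 1000
    step = step_ms * fs // 1000
    starts = range(0, steady.shape[0] - win + 1, step)
    return np.stack([steady[s:s + win] for s in starts])  # (n_frames, win, ch)
```

For a 12 s, 16-channel recording sampled at 1 kHz this returns an array of shape (195, 300, 16), matching the per-sample frame count given above.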
Step 2: establish the dual-stream network model shown in Fig. 3, which mainly comprises a multilayer CNN and a multilayer LSTM. The CNN part of the model has five layers. The first two are convolutional layers, each with 64 convolution kernels (5 × 5 and 3 × 3, respectively) with stride 1 and padding 1; their outputs pass through max pooling to reduce dimensionality while retaining the data information. The next two layers are locally connected layers with 64 non-overlapping 1 × 1 kernels; to extract the most effective spatial features, a fully connected layer with 256 units follows. To prevent overfitting, dropout with probability 0.5 is applied after the fourth layer. After the input sEMG data (300 × 16) are processed by the CNN, the extracted spatial feature dimension is 256 × 1. The LSTM part of the model has three layers, whose stacking extracts the effective information of the time series. Each LSTM layer has 256 units, and the temporal feature dimension after LSTM processing of the input sEMG data (300 × 16) is 256 × 1. The CNN and the LSTM process the input data simultaneously; the spatial features of the multi-frame sEMG extracted by the CNN part and the temporal features extracted by the LSTM part are then fused in a feature-merging layer to obtain more comprehensive features. Two fully connected layers then fuse the effective information of the two feature sets into spatio-temporal features, and a Softmax layer produces the classification probability estimate.
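A minimal PyTorch sketch of the architecture just described is given below. It follows the stated layer counts and sizes (two conv layers of 64 kernels, a 256-unit fully connected layer, dropout 0.5, three 256-unit LSTM layers, two fully connected layers after the merge). Several details are assumptions where the patent is not explicit: plain 1 × 1 convolutions stand in for the locally connected layers, the padding values are chosen to preserve frame size, the batch-norm placement follows the "ReLU and BatchNorm after each layer" note, and the 256-unit merge head and softmax-in-the-loss convention are illustrative.

```python
import torch
import torch.nn as nn

class DualStreamNet(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        # Spatial stream: the 300 x 16 frame is treated as a 1-channel image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, kernel_size=1), nn.ReLU(),  # stand-ins for the
            nn.Conv2d(64, 64, kernel_size=1), nn.ReLU(),  # locally connected layers
            nn.Flatten(),
            nn.LazyLinear(256), nn.ReLU(),   # 256-unit fully connected layer
            nn.Dropout(0.5),
        )
        # Temporal stream: 300 time steps of 16-channel samples.
        self.lstm = nn.LSTM(input_size=16, hidden_size=256,
                            num_layers=3, batch_first=True)
        # Feature-merging head: two fully connected layers over the
        # concatenated 256-d spatial and 256-d temporal features.
        self.head = nn.Sequential(
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, n_classes),  # logits; softmax applied in the loss
        )

    def forward(self, x):                    # x: (batch, 300, 16)
        spatial = self.cnn(x.unsqueeze(1))   # (batch, 256)
        _, (h, _) = self.lstm(x)             # h: (num_layers, batch, 256)
        temporal = h[-1]                     # final state of the last layer
        return self.head(torch.cat([spatial, temporal], dim=1))
```

A forward pass on a batch of frames of shape (batch, 300, 16) yields (batch, n_classes) logits; applying softmax to them gives the classification probability estimate described above.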
Step 3: train the model with the Adam optimizer, stopping after 30 epochs. The learning rate is initialized to 0.01 and reduced to 0.001 at the tenth epoch.
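The schedule above might be sketched as follows, with the L2 term of the loss supplied through Adam's weight_decay and the learning-rate drop handled by a milestone scheduler; the batch handling, the weight-decay coefficient, and the device logic are assumptions not given in the text.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=30, device="cpu"):
    """Train with Adam: lr 0.01, reduced to 0.001 at the 10th epoch,
    30 epochs total, matching the schedule in the description.
    weight_decay provides the L2 regularization term of the loss;
    its value (1e-4) is an assumed hyperparameter."""
    opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-4)
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[10], gamma=0.1)
    loss_fn = nn.CrossEntropyLoss()   # applies log-softmax internally
    model.to(device).train()
    for _ in range(epochs):
        for frames, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(frames.to(device)), labels.to(device))
            loss.backward()
            opt.step()
        sched.step()                  # per-epoch learning-rate schedule
    return model
```

loader is any iterable of (frames, labels) batches, e.g. a torch.utils.data.DataLoader over the 300 × 16 frames built in step 1.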
Step 4: verify the superiority of the dual-stream network model through cross-validation on two datasets: a self-collected dataset of 5 gestures from 8 subjects, and the public Ninapro dataset of 52 actions from 27 subjects. On dataset one, the data of 7 subjects serve as the training set and the remaining subject's data as the test set. On dataset two, 2/3 of all data serve as the training set and the remaining 1/3 as the test set.
On the self-collected dataset, the recognition rate of the proposed model is higher than that of the machine learning models and the CNN model shown in Fig. 4.
To verify the performance of the model, tests were performed on the public Ninapro dataset, training on 2/3 of the data and testing on 1/3. The results show that the model adapts well across different datasets. Regarding data scale, 2/3, 1/3, 1/4, 1/8, and 1/16 of NinaproDB1 were used in turn as training sets, with the remainder as test sets; the resulting recognition accuracies are shown in Fig. 5(a). Regarding the number of gestures, subsets of 5, 10, 15, 20, 30, 40, and 52 gestures were selected, with 2/3 of the data used for training and 1/3 for testing; the resulting recognition accuracies are shown in Fig. 5(b). Figures 5(a) and 5(b) show that the dual-stream model combining CNN and LSTM holds a greater accuracy advantage over conventional algorithms when the training dataset and the number of gestures are larger, and the advantage becomes more pronounced as both increase.
The invention also performs data visualization: PCA dimensionality reduction is applied to the raw data, to the 128-dimensional CNN features, and to the 128-dimensional dual-stream network features. Six movements were selected: thumb up, finger abduction, fist, wrist flexion, wrist extension, and ulnar deviation of the wrist. Three-dimensional projections of the data are shown in Fig. 6: 6(a) shows the reduced raw data, 6(b) the reduced CNN features, and 6(c) the reduced dual-stream network features. In Figs. 6(b) and 6(c), the fist action is dark blue and the thumb-up action is light blue; the two actions differ only in the extension of the thumb. In Fig. 6(b) the two colors overlap substantially, while in Fig. 6(c) they overlap little, showing that the CNN alone struggles to distinguish similar gestures. The recognition accuracy and the visualizations together show that the proposed network distinguishes similar gestures better.
The embodiments described in this specification merely illustrate implementations of the inventive concept; the scope of the present invention is not limited to the specific forms set forth in the embodiments, but also covers equivalents that may occur to those skilled in the art in light of the inventive concept.
Claims (2)
1. An electromyographic signal gesture recognition method based on a dual-stream network, comprising the following steps:
step 1, collecting electromyographic signals of various gestures from multiple subjects; wearing a 16-channel acquisition device, each subject performs 6 repetitions, each gesture motion lasting 12 seconds; extracting 10 seconds of steady-state data, preprocessing the data, and selecting a 300 ms time window, so that each electromyogram frame has size 300 × 16, thereby constructing a training set;
step 2, constructing a dual-stream network model consisting of three parts, wherein the first part is responsible for extracting spatial features, the second for learning temporal features, and the last for feature fusion;
the first part of the model is a CNN with a five-layer structure; after the input sEMG data (300 × 16) are processed by the CNN, the extracted spatial feature dimension is 128 × 1; the LSTM part of the model has three layers, and stacking the layers extracts the effective information of the time series; each LSTM layer has 128 units, and the temporal feature dimension after LSTM processing of the input sEMG data (300 × 16) is 128 × 1; the first and second parts form a parallel structure and process the electromyographic data simultaneously, after which the spatial features extracted by the CNN part and the temporal features extracted by the LSTM part are fused in a feature-merging layer to obtain more comprehensive features; two fully connected layers then fuse the effective information of the two feature sets into spatio-temporal features, and a Softmax layer produces the classification probability estimate;
step 3, training the dual-stream network model, performing gradient-descent optimization with the Adam optimizer, the training loss function being:

L(θ) = −Σ_i y_i·log(a_i) + (λ/2)·||θ||²

where a is the output of the model, y is the true label of the sample, θ comprises all parameters of the network model, and the term (λ/2)·||θ||² is the L2 regularization, which effectively prevents overfitting;
and step 4, performing gesture recognition on arm sEMG with the trained dual-stream network model.
2. The electromyographic signal gesture recognition method based on the dual-stream network according to claim 1, characterized in that: in step 2, ReLU and batch normalization are applied after each input layer and hidden layer of the CNN.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910672070.4A CN110658915A (en) | 2019-07-24 | 2019-07-24 | Electromyographic signal gesture recognition method based on double-current network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110658915A (en) | 2020-01-07
Family
ID=69030903
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110658915A (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111368739A (en) * | 2020-03-05 | 2020-07-03 | 东北大学 | Violent behavior identification method based on double-current convolutional neural network |
CN111476161A (en) * | 2020-04-07 | 2020-07-31 | 金陵科技学院 | Somatosensory dynamic gesture recognition method fusing image and physiological signal dual channels |
CN111553307A (en) * | 2020-05-08 | 2020-08-18 | 中国科学院合肥物质科学研究院 | Gesture recognition system fusing bioelectrical impedance information and myoelectric information |
CN111897428A (en) * | 2020-07-30 | 2020-11-06 | 太原科技大学 | Gesture recognition method based on moving brain-computer interface |
CN111938660A (en) * | 2020-08-13 | 2020-11-17 | 电子科技大学 | Stroke patient hand rehabilitation training action recognition method based on array myoelectricity |
CN112507881A (en) * | 2020-12-09 | 2021-03-16 | 山西三友和智慧信息技术股份有限公司 | sEMG signal classification method and system based on time convolution neural network |
CN112932502A (en) * | 2021-02-02 | 2021-06-11 | 杭州电子科技大学 | Electroencephalogram emotion recognition method combining mutual information channel selection and hybrid neural network |
WO2021143353A1 (en) * | 2020-01-13 | 2021-07-22 | 腾讯科技(深圳)有限公司 | Gesture information processing method and apparatus, electronic device, and storage medium |
CN113505822A (en) * | 2021-06-30 | 2021-10-15 | 中国矿业大学 | Multi-scale information fusion upper limb action classification method based on surface electromyographic signals |
CN113743247A (en) * | 2021-08-16 | 2021-12-03 | 电子科技大学 | Gesture recognition method based on Reders model |
CN114424937A (en) * | 2022-01-26 | 2022-05-03 | 宁波工业互联网研究院有限公司 | Human motion intention identification method and system for lower limb exoskeleton |
WO2022116056A1 (en) * | 2020-12-01 | 2022-06-09 | 深圳先进技术研究院 | Training method and training apparatus for continuous motion information prediction model, and computer-readable storage medium |
CN114822542A (en) * | 2022-04-25 | 2022-07-29 | 中国人民解放军军事科学院国防科技创新研究院 | Different-person classification-assisted silent speech recognition method and system |
CN115291730A (en) * | 2022-08-11 | 2022-11-04 | 北京理工大学 | Wearable bioelectric equipment and bioelectric action identification and self-calibration method |
CN115834310A (en) * | 2023-02-15 | 2023-03-21 | 四川轻化工大学 | Communication signal modulation identification method based on LGTransformer |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106980367A (en) * | 2017-02-27 | 2017-07-25 | 浙江工业大学 | A kind of gesture identification method based on myoelectricity topographic map |
CN108388348A (en) * | 2018-03-19 | 2018-08-10 | 浙江大学 | A kind of electromyography signal gesture identification method based on deep learning and attention mechanism |
CN108460089A (en) * | 2018-01-23 | 2018-08-28 | 哈尔滨理工大学 | Diverse characteristics based on Attention neural networks merge Chinese Text Categorization |
CN108491077A (en) * | 2018-03-19 | 2018-09-04 | 浙江大学 | A kind of surface electromyogram signal gesture identification method for convolutional neural networks of being divided and ruled based on multithread |
CN108763204A (en) * | 2018-05-21 | 2018-11-06 | 浙江大学 | A kind of multi-level text emotion feature extracting method and model |
CN109924977A (en) * | 2019-03-21 | 2019-06-25 | 西安交通大学 | A kind of surface electromyogram signal classification method based on CNN and LSTM |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021143353A1 (en) * | 2020-01-13 | 2021-07-22 | 腾讯科技(深圳)有限公司 | Gesture information processing method and apparatus, electronic device, and storage medium |
US11755121B2 (en) | 2020-01-13 | 2023-09-12 | Tencent Technology (Shenzhen) Company Limited | Gesture information processing method and apparatus, electronic device, and storage medium |
CN111368739A (en) * | 2020-03-05 | 2020-07-03 | 东北大学 | Violent behavior identification method based on double-current convolutional neural network |
CN111476161A (en) * | 2020-04-07 | 2020-07-31 | 金陵科技学院 | Somatosensory dynamic gesture recognition method fusing image and physiological signal dual channels |
CN111553307A (en) * | 2020-05-08 | 2020-08-18 | 中国科学院合肥物质科学研究院 | Gesture recognition system fusing bioelectrical impedance information and myoelectric information |
CN111553307B (en) * | 2020-05-08 | 2023-03-24 | 中国科学院合肥物质科学研究院 | Gesture recognition system fusing bioelectrical impedance information and myoelectric information |
CN111897428A (en) * | 2020-07-30 | 2020-11-06 | 太原科技大学 | Gesture recognition method based on moving brain-computer interface |
CN111897428B (en) * | 2020-07-30 | 2022-03-01 | 太原科技大学 | Gesture recognition method based on moving brain-computer interface |
CN111938660B (en) * | 2020-08-13 | 2022-04-12 | 电子科技大学 | Stroke patient hand rehabilitation training action recognition method based on array myoelectricity |
CN111938660A (en) * | 2020-08-13 | 2020-11-17 | 电子科技大学 | Stroke patient hand rehabilitation training action recognition method based on array myoelectricity |
WO2022116056A1 (en) * | 2020-12-01 | 2022-06-09 | 深圳先进技术研究院 | Training method and training apparatus for continuous motion information prediction model, and computer-readable storage medium |
CN112507881A (en) * | 2020-12-09 | 2021-03-16 | 山西三友和智慧信息技术股份有限公司 | sEMG signal classification method and system based on time convolution neural network |
CN112932502A (en) * | 2021-02-02 | 2021-06-11 | 杭州电子科技大学 | Electroencephalogram emotion recognition method combining mutual information channel selection and hybrid neural network |
CN112932502B (en) * | 2021-02-02 | 2022-05-03 | 杭州电子科技大学 | Electroencephalogram emotion recognition method combining mutual information channel selection and hybrid neural network |
CN113505822B (en) * | 2021-06-30 | 2022-02-15 | 中国矿业大学 | Multi-scale information fusion upper limb action classification method based on surface electromyographic signals |
CN113505822A (en) * | 2021-06-30 | 2021-10-15 | 中国矿业大学 | Multi-scale information fusion upper limb action classification method based on surface electromyographic signals |
CN113743247A (en) * | 2021-08-16 | 2021-12-03 | 电子科技大学 | Gesture recognition method based on Reders model |
CN114424937A (en) * | 2022-01-26 | 2022-05-03 | 宁波工业互联网研究院有限公司 | Human motion intention identification method and system for lower limb exoskeleton |
CN114822542A (en) * | 2022-04-25 | 2022-07-29 | 中国人民解放军军事科学院国防科技创新研究院 | Different-person classification-assisted silent speech recognition method and system |
CN114822542B (en) * | 2022-04-25 | 2024-05-14 | 中国人民解放军军事科学院国防科技创新研究院 | Different person classification assisted silent voice recognition method and system |
CN115291730A (en) * | 2022-08-11 | 2022-11-04 | 北京理工大学 | Wearable bioelectric equipment and bioelectric action identification and self-calibration method |
CN115291730B (en) * | 2022-08-11 | 2023-08-15 | 北京理工大学 | Wearable bioelectric equipment and bioelectric action recognition and self-calibration method |
CN115834310A (en) * | 2023-02-15 | 2023-03-21 | 四川轻化工大学 | Communication signal modulation identification method based on LGTransformer |
CN115834310B (en) * | 2023-02-15 | 2023-05-09 | 四川轻化工大学 | LGTransformer-based communication signal modulation identification method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110658915A (en) | Electromyographic signal gesture recognition method based on double-current network | |
Zhang et al. | Cascade and parallel convolutional recurrent neural networks on EEG-based intention recognition for brain computer interface | |
Zhang et al. | EEG-based intention recognition from spatio-temporal representations via cascade and parallel convolutional recurrent neural networks | |
Roy | Adaptive transfer learning-based multiscale feature fused deep convolutional neural network for EEG MI multiclassification in brain–computer interface | |
CN108491077B (en) | Surface electromyographic signal gesture recognition method based on multi-stream divide-and-conquer convolutional neural network | |
Mane et al. | A multi-view CNN with novel variance layer for motor imagery brain computer interface | |
CN112861604B (en) | User-independent myoelectric action recognition and control method | |
Fadel et al. | Multi-class classification of motor imagery EEG signals using image-based deep recurrent convolutional neural network | |
Praveen et al. | A joint cross-attention model for audio-visual fusion in dimensional emotion recognition | |
CN110333783B (en) | Irrelevant gesture processing method and system for robust electromyography control | |
CN113505822B (en) | Multi-scale information fusion upper limb action classification method based on surface electromyographic signals | |
CN112043473B (en) | Parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception of intelligent artificial limb | |
CN110610172B (en) | Myoelectric gesture recognition method based on RNN-CNN architecture | |
CN106108893A (en) | Design of a human-machine dialogue method for motor imagery training based on electrooculography (EOG) and electroencephalography (EEG) | |
Wang et al. | Maximum weight multi-modal information fusion algorithm of electroencephalographs and face images for emotion recognition | |
Gao et al. | Convolutional neural network and Riemannian geometry hybrid approach for motor imagery classification | |
Tong et al. | Learn the temporal-spatial feature of sEMG via dual-flow network | |
Abibullaev et al. | A brute-force CNN model selection for accurate classification of sensorimotor rhythms in BCIs | |
Liu et al. | A CNN-Transformer hybrid recognition approach for sEMG-based dynamic gesture prediction | |
CN111783719A (en) | Myoelectric control method and device | |
Mehtiyev et al. | Deepensemble: a novel brain wave classification in MI-BCI using ensemble of deep learners | |
Sena et al. | Multiscale DCNN ensemble applied to human activity recognition based on wearable sensors | |
CN115456016A (en) | Motor imagery electroencephalogram signal identification method based on capsule network | |
Sheng et al. | A hand gesture recognition using single-channel electrodes based on artificial neural network | |
Xu et al. | Eeg signal classification and feature extraction methods based on deep learning: A review |
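The documents above cluster around the same idea as this application: a two-stream ("double-current") network that learns spatial and temporal features of sEMG separately and fuses them before gesture classification. As a rough illustration of that feature-level fusion only (the stream internals, feature sizes, and random weights below are stand-in assumptions, not the patented model):

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_stream(x):
    """Stand-in for the spatial (CNN) stream: one 3x3 valid convolution with
    a random kernel, ReLU, then average pooling over time."""
    k = rng.standard_normal((3, 3))
    c, t = x.shape
    feat = np.empty((c - 2, t - 2))
    for i in range(c - 2):
        for j in range(t - 2):
            feat[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return np.maximum(feat, 0.0).mean(axis=1)        # (c-2,) spatial features

def temporal_stream(x):
    """Stand-in for the temporal stream: per-channel RMS and mean absolute
    value, two classic sEMG time-domain features."""
    rms = np.sqrt((x ** 2).mean(axis=1))
    mav = np.abs(x).mean(axis=1)
    return np.concatenate([rms, mav])                # (2*channels,) temporal features

def two_stream_classify(x, n_classes=8):
    """Feature-level fusion: concatenate both streams, then a linear layer
    with softmax over gesture classes (weights are random placeholders)."""
    fused = np.concatenate([spatial_stream(x), temporal_stream(x)])
    w = 0.01 * rng.standard_normal((n_classes, fused.size))
    logits = w @ fused
    e = np.exp(logits - logits.max())
    return e / e.sum()

window = rng.standard_normal((8, 200))   # 8 electrodes x 200-sample sEMG window
probs = two_stream_classify(window)
print(probs.shape)
```

In the patent family above, the placeholder streams would be trained CNN and recurrent/temporal branches, and the fused vector would feed a trained classifier; the sketch only shows where the two feature paths meet.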
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2020-01-07