CN113397572A - Surface electromyographic signal classification method and system based on Transformer model - Google Patents
- Publication number: CN113397572A
- Application number: CN202110839308.5A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A61B5/389 — Electromyography [EMG]
- A61B5/7203 — Signal processing specially adapted for physiological signals, for noise prevention, reduction or removal
- A61B5/725 — Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
- A61B5/7264 — Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
Abstract
The invention provides a surface electromyographic signal classification method and system based on a Transformer model, comprising the following steps: step S1: collecting corresponding multi-channel surface electromyographic signals according to preset actions, and filtering the collected signals to remove noise; step S2: converting the filtered signals into a data-tag sequence using a sliding-window technique, and preprocessing the data of each window; step S3: applying position encoding to the preprocessed window data and inputting it into the encoder layer of a Transformer model to extract features; step S4: integrating the features extracted by the encoder through global pooling, and then obtaining the final classification result through a single fully-connected layer. The method is based on a deep learning algorithm; it solves the low computational efficiency caused by sequence models such as RNN, LSTM and GRU being unable to compute in parallel, while also improving model accuracy.
Description
Technical Field
The invention relates to the field of machine learning, in particular to a surface electromyographic signal classification method and system based on a Transformer model.
Background
Surface electromyographic signals are electrical signals collected on the human skin by surface electrodes; these signals are the potential differences generated near muscle fibers by muscle movement. When a person forms a movement intention, the intention is encoded in nerve signals by the brain and transmitted to the spinal cord; after secondary encoding, the nerve signals travel through nerve pathways to the corresponding limbs (such as the lower limbs), where they cause the muscle fibers to contract and generate potential differences, and the muscles pull on the skeleton to complete the movement. In this process, the movement intention is ultimately encoded in the electrical signals generated by muscle fiber contraction. By decoding these signals, the original movement intention can be recovered and used to control an external machine. Compared with decoding brain or nerve signals directly, electromyographic signals are closer to the action-execution stage: the information they contain is more precise, the signal-to-noise ratio is higher, and acquisition is more convenient.
Machine learning is one of the primary methods for decoding surface electromyographic signals. It comprises two stages: feature extraction, and classification in the feature space. Existing hand-crafted feature extraction methods fall well short of RNN neural networks in accuracy, but RNNs cannot compute in parallel and are therefore slow.
Patent document CN112466326A (application number: 202011470115.9) discloses a method for extracting speech emotion features based on a Transformer model encoder, applicable to the fields of artificial intelligence and speech emotion recognition. First, low-level speech emotion features are extracted from the raw speech waveform by a SincNet filter, and these features are then further learned by a multi-layer Transformer model encoder. The improved Transformer encoder adds a SincNet filter layer (a set of parameterized sinc functions acting as band-pass filters) in front of a conventional Transformer encoder; the SincNet filter performs the low-level feature extraction on the raw speech waveform, allowing the network to better capture important narrow-band emotional features and thereby obtain deeper frame-level emotional features containing global context information.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a surface electromyogram signal classification method and system based on a Transformer model.
The surface electromyogram signal classification method based on the Transformer model provided by the invention comprises the following steps:
step S1: collecting corresponding multi-channel surface electromyographic signals according to preset actions, and filtering and removing noise of the collected signals;
step S2: converting the filtered signals into a data-tag sequence by using a sliding window technology, and preprocessing the data of each window;
step S3: carrying out position coding on the preprocessed window data, and inputting the window data into a coder layer of a Transformer model to extract characteristics;
step S4: and integrating the features extracted by the encoder through global pooling, and then obtaining a final classification result through a layer of fully-connected network.
Preferably, the step S1 includes: collecting corresponding multichannel surface electromyographic signals according to preset types of movement actions and preset types of rest and relaxation actions, and filtering noise of the collected signals through a band-pass filter.
Preferably, the sliding-window technique in step S2 comprises: adjacent windows overlap by a preset number of samples;
the preprocessing comprises: individually normalizing each channel of each window.
Preferably, the position encoding in step S3 is the sinusoidal encoding:
PE(pos, 2i) = sin(pos / 10000^(2i/d_input))
PE(pos, 2i+1) = cos(pos / 10000^(2i/d_input))
wherein pos represents the location of the data within the time window; i represents the location of the data feature within the current group of data; d_input represents the dimension of the input data features.
Preferably, the feature extraction performed by the encoder layer of the Transformer model in step S3 comprises:
the encoder module of the used Transformer comprises a multi-head attention network and a feedforward network;
the multi-head attention network includes: extracting internal features of the input sequence through a multi-head attention mechanism, wherein the formula is as follows:
MultiHead(X) = Concat(head_1, …, head_h)W^O
head_i = Attention(XW_i^Q, XW_i^K, XW_i^V)
Attention(Q, K, V) = softmax(QK^T / √d_k)V
wherein X represents the input time window; Concat represents the concatenation function; the parameter matrices W_i^Q ∈ R^(d_input×d_k), W_i^K ∈ R^(d_input×d_k), W_i^V ∈ R^(d_input×d_v) and W^O ∈ R^(h·d_v×d_model) are all learnable; d_model represents the dimension of the Transformer model output features; if the number of heads is h, the parameter matrix shapes satisfy d_k = d_v = d_model/h, and d_k is required to be a perfect square; Q represents the query matrix; K represents the matrix of the relevance of the queried information to other information; V represents the matrix of queried information; the superscript T represents matrix transposition; R represents a vector space; d_input represents the feature dimension of the input information, and d_k represents one dimension of K;
the feed-forward network comprises a two-layer fully-connected network with ReLU as the activation function, computed as:
FFN(x) = max(0, xW_1 + b_1)W_2 + b_2
The parameter matrices W_1 ∈ R^(d_model×d_hidden) and W_2 ∈ R^(d_hidden×d_model) are learnable; d_hidden represents the hidden-layer dimension; b_1 and b_2 represent bias terms.
Preferably, the step S4 includes: features extracted by the encoder are integrated using global average pooling, and then finally classified via the softmax function of the fully connected network.
The invention provides a surface electromyogram signal classification system based on a Transformer model, which comprises the following components:
module M1: collecting corresponding multi-channel surface electromyographic signals according to preset actions, and filtering and removing noise of the collected signals;
module M2: converting the filtered signals into a data-tag sequence by using a sliding window technology, and preprocessing the data of each window;
module M3: carrying out position coding on the preprocessed window data, and inputting the window data into a coder layer of a Transformer model to extract characteristics;
module M4: and integrating the features extracted by the encoder through global pooling, and then obtaining a final classification result through a layer of fully-connected network.
Preferably, said module M1 comprises: collecting corresponding multichannel surface electromyographic signals according to preset types of movement actions and preset types of rest and relaxation actions, and filtering noise of the collected signals through a band-pass filter.
Preferably, the position encoding in the module M3 is the sinusoidal encoding:
PE(pos, 2i) = sin(pos / 10000^(2i/d_input))
PE(pos, 2i+1) = cos(pos / 10000^(2i/d_input))
wherein pos represents the location of the data within the time window; i represents the location of the data feature within the current group of data; d_input represents the dimension of the input data features;
the feature extraction performed by the encoder layer of the Transformer model in the module M3 comprises:
the encoder module of the used Transformer comprises a multi-head attention network and a feedforward network;
the multi-head attention network includes: extracting internal features of the input sequence through a multi-head attention mechanism, wherein the formula is as follows:
MultiHead(X) = Concat(head_1, …, head_h)W^O
head_i = Attention(XW_i^Q, XW_i^K, XW_i^V)
Attention(Q, K, V) = softmax(QK^T / √d_k)V
wherein X represents the input time window; Concat represents the concatenation function; the parameter matrices W_i^Q ∈ R^(d_input×d_k), W_i^K ∈ R^(d_input×d_k), W_i^V ∈ R^(d_input×d_v) and W^O ∈ R^(h·d_v×d_model) are all learnable; d_model represents the dimension of the Transformer model output features; if the number of heads is h, the parameter matrix shapes satisfy d_k = d_v = d_model/h, and d_k is required to be a perfect square; Q represents the query matrix; K represents the matrix of the relevance of the queried information to other information; V represents the matrix of queried information; the superscript T represents matrix transposition; R represents a vector space; d_input represents the feature dimension of the input information, and d_k represents one dimension of K;
the feed-forward network comprises a two-layer fully-connected network with ReLU as the activation function, computed as:
FFN(x) = max(0, xW_1 + b_1)W_2 + b_2
The parameter matrices W_1 ∈ R^(d_model×d_hidden) and W_2 ∈ R^(d_hidden×d_model) are learnable; d_hidden represents the hidden-layer dimension; b_1 and b_2 represent bias terms.
Preferably, said module M4 comprises: features extracted by the encoder are integrated using global average pooling, and then finally classified via the softmax function of the fully connected network.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention classifies forearm electromyographic signals using a Transformer algorithm; features do not need to be designed manually, the feature-selection step is eliminated, the extracted features are of higher quality, and recognition accuracy is improved;
2. the invention uses the Multi-Head Attention mechanism in the Transformer to extract sequential information from the time series, achieving higher accuracy than traditional RNN-type networks;
3. the Multi-Head Attention mechanism used in the invention can process multiple groups of data in parallel, giving higher computational efficiency and speed than traditional RNN-type networks, which can only compute serially;
4. the method is based on a deep learning algorithm; it solves the low computational efficiency caused by sequence models such as RNN, LSTM and GRU being unable to compute in parallel, while also improving model accuracy.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a process framework diagram of the present invention;
FIG. 2 is a network framework of the present invention;
FIG. 3 is a flow chart of a self-attention mechanism operation;
FIG. 4 is a flow chart of the multi-head self-attention mechanism operation.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that it would be obvious to those skilled in the art that various changes and modifications can be made without departing from the spirit of the invention. All falling within the scope of the present invention.
Example 1
The surface electromyogram signal classification method based on the Transformer model provided by the invention comprises the following steps:
step S1: collecting corresponding multi-channel surface electromyographic signals according to preset actions, and filtering and removing noise of the collected signals;
step S2: converting the filtered signals into a data-tag sequence by using a sliding window technology, and preprocessing the data of each window;
step S3: carrying out position coding on the preprocessed window data, and inputting the window data into a coder layer of a Transformer model to extract characteristics;
step S4: and integrating the features extracted by the encoder through global pooling, and then obtaining a final classification result through a layer of fully-connected network.
Specifically, the step S1 comprises: according to six types of movement actions and one type of rest/relaxation action, the corresponding multi-channel surface electromyographic signals are collected as six-channel signals acquired at six different positions on the forearm, and the collected signals are passed through a band-pass filter to remove artifact noise (low frequency) and unneeded high-frequency noise.
Specifically, the sliding-window technique in step S2 uses a window length of 125 samples with a step of 35, so that adjacent windows overlap by 90 samples;
the preprocessing comprises: individually normalizing each channel of each window.
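As an illustration of the windowing described above, the following is a minimal numpy sketch (the function name and the choice of labelling each window by its last sample are assumptions, not taken from the invention):

```python
import numpy as np

def sliding_windows(data, labels, length=125, step=35):
    """Cut a (time, channels) recording into overlapping windows.

    length=125 and step=35 give the 90-sample overlap between adjacent
    windows; each window is labelled by its last sample (an assumed
    convention for building the data-tag sequence).
    """
    windows, window_labels = [], []
    for start in range(0, data.shape[0] - length + 1, step):
        windows.append(data[start:start + length])
        window_labels.append(labels[start + length - 1])
    return np.stack(windows), np.array(window_labels)
```

For a 1000-sample, six-channel recording this yields 26 windows of shape (125, 6), with consecutive windows sharing 90 samples.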
Specifically, the position encoding in step S3 is the sinusoidal encoding:
PE(pos, 2i) = sin(pos / 10000^(2i/d_input))
PE(pos, 2i+1) = cos(pos / 10000^(2i/d_input))
wherein pos represents the location of the data within the time window; i represents the location of the data feature within the current group of data; d_input represents the dimension of the input data features.
Specifically, the feature extraction performed by the encoder layer of the Transformer model in step S3 comprises:
the Encoder module of the Transformer used includes a Multi-Head Attention network and a Feed-Forward Network;
the multi-head attention network includes: extracting internal features of the input sequence through a multi-head attention mechanism, wherein the formula is as follows:
MultiHead(X) = Concat(head_1, …, head_h)W^O
head_i = Attention(XW_i^Q, XW_i^K, XW_i^V)
Attention(Q, K, V) = softmax(QK^T / √d_k)V
wherein X represents the input time window; Concat represents the concatenation function; the parameter matrices W_i^Q ∈ R^(d_input×d_k), W_i^K ∈ R^(d_input×d_k), W_i^V ∈ R^(d_input×d_v) and W^O ∈ R^(h·d_v×d_model) are all learnable; d_model represents the dimension of the Transformer model output features; if the number of heads is h, the parameter matrix shapes satisfy d_k = d_v = d_model/h, and d_k is required to be a perfect square; Q represents the query matrix; K represents the matrix of the relevance of the queried information to other information; V represents the matrix of queried information; the superscript T represents matrix transposition; R represents a vector space; d_input represents the feature dimension of the input information, and d_k represents one dimension of K;
the feed-forward network comprises a two-layer fully-connected network with ReLU as the activation function, computed as:
FFN(x) = max(0, xW_1 + b_1)W_2 + b_2
The parameter matrices W_1 ∈ R^(d_model×d_hidden) and W_2 ∈ R^(d_hidden×d_model) are learnable; d_hidden represents the hidden-layer dimension; b_1 and b_2 represent bias terms.
Specifically, the step S4 includes: features extracted by the encoder are integrated using global average pooling, and then finally classified via the softmax function of the fully connected network.
The invention provides a surface electromyogram signal classification system based on a Transformer model, which comprises the following components:
module M1: collecting corresponding multi-channel surface electromyographic signals according to preset actions, and filtering and removing noise of the collected signals;
module M2: converting the filtered signals into a data-tag sequence by using a sliding window technology, and preprocessing the data of each window;
module M3: carrying out position coding on the preprocessed window data, and inputting the window data into a coder layer of a Transformer model to extract characteristics;
module M4: and integrating the features extracted by the encoder through global pooling, and then obtaining a final classification result through a layer of fully-connected network.
Specifically, the module M1 comprises: according to six types of movement actions and one type of rest/relaxation action, the corresponding multi-channel surface electromyographic signals are collected as six-channel signals acquired at six different positions on the forearm, and the collected signals are passed through a band-pass filter to remove artifact noise (low frequency) and unneeded high-frequency noise.
Specifically, the sliding-window technique in the module M2 uses a window length of 125 samples with a step of 35, so that adjacent windows overlap by 90 samples;
the preprocessing comprises: individually normalizing each channel of each window.
Specifically, the position encoding in the module M3 is the sinusoidal encoding:
PE(pos, 2i) = sin(pos / 10000^(2i/d_input))
PE(pos, 2i+1) = cos(pos / 10000^(2i/d_input))
wherein pos represents the location of the data within the time window; i represents the location of the data feature within the current group of data; d_input represents the dimension of the input data features.
Specifically, the feature extraction performed by the encoder layer of the Transformer model in the module M3 comprises:
the Encoder module of the Transformer used includes a Multi-Head Attention network and a Feed-Forward Network;
the multi-head attention network includes: extracting internal features of the input sequence through a multi-head attention mechanism, wherein the formula is as follows:
MultiHead(X) = Concat(head_1, …, head_h)W^O
head_i = Attention(XW_i^Q, XW_i^K, XW_i^V)
Attention(Q, K, V) = softmax(QK^T / √d_k)V
wherein X represents the input time window; Concat represents the concatenation function; the parameter matrices W_i^Q ∈ R^(d_input×d_k), W_i^K ∈ R^(d_input×d_k), W_i^V ∈ R^(d_input×d_v) and W^O ∈ R^(h·d_v×d_model) are all learnable; d_model represents the dimension of the Transformer model output features; if the number of heads is h, the parameter matrix shapes satisfy d_k = d_v = d_model/h, and d_k is required to be a perfect square; Q represents the query matrix; K represents the matrix of the relevance of the queried information to other information; V represents the matrix of queried information; the superscript T represents matrix transposition; R represents a vector space; d_input represents the feature dimension of the input information, and d_k represents one dimension of K;
the feed-forward network comprises a two-layer fully-connected network with ReLU as the activation function, computed as:
FFN(x) = max(0, xW_1 + b_1)W_2 + b_2
The parameter matrices W_1 ∈ R^(d_model×d_hidden) and W_2 ∈ R^(d_hidden×d_model) are learnable; d_hidden represents the hidden-layer dimension; b_1 and b_2 represent bias terms.
Specifically, the module M4 includes: features extracted by the encoder are integrated using global average pooling, and then finally classified via the softmax function of the fully connected network.
Example 2
Example 2 is a preferred example of example 1
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
As shown in fig. 1 to 4, in a surface electromyographic signal classification method based on a Transformer model, forearm electromyographic signals are first collected according to seven designed actions: making a fist, stretching the hand, folding the forearm inward, folding the wrist outward toward the forearm, folding the elbow inward, folding the elbow outward, and natural relaxation. Through these seven actions, the corresponding forearm surface electromyographic signals are collected at a sampling frequency of 1024 Hz. A band-pass filter is applied to the collected signals to remove artifact noise and low-information high-frequency noise; since artifact noise generally does not exceed 20 Hz and high-frequency noise generally exceeds 500 Hz, the pass band of the filter is set to 20-500 Hz.
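The filter design is not specified beyond its 20-500 Hz pass band; as a rough, self-contained illustration only, the sketch below zeroes FFT bins outside the pass band (a practical implementation would more likely use an IIR design such as a Butterworth band-pass):

```python
import numpy as np

def bandpass_fft(signal, fs=1024.0, low=20.0, high=500.0):
    """Naive FFT band-pass: zero every frequency bin outside [low, high] Hz.

    fs, low and high follow the 1024 Hz sampling rate and 20-500 Hz pass
    band given above; `signal` is a 1-D array from one electrode channel.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)
```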
For the filtered data, we use the sliding-window technique to convert the collected time series into "data-label" pairs, and normalize each channel within each time window by
x' = (x - min) / (max - min)
where min is the minimum value of the channel within the window and max is its maximum value.
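In code, the per-channel min-max normalization of one window can be sketched as follows (the eps guard against a constant channel is an added assumption, not stated in the text):

```python
import numpy as np

def minmax_per_channel(window, eps=1e-8):
    """Apply x' = (x - min) / (max - min) to each channel of one window.

    `window` has shape (time, channels); min and max are taken per
    channel within the window, as described above.
    """
    mins = window.min(axis=0, keepdims=True)
    maxs = window.max(axis=0, keepdims=True)
    return (window - mins) / (maxs - mins + eps)
```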
For the windowed and normalized data, a position code is added so that the subsequent model can learn positional information in the sequence during training, using the sinusoidal formula:
PE(pos, 2i) = sin(pos / 10000^(2i/d_input))
PE(pos, 2i+1) = cos(pos / 10000^(2i/d_input))
where pos is the location of the data within the time window, i is the location of the data feature within the group, and d_input is the input feature dimension; for the collected data, d_input = 6.
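Assuming the standard sinusoidal position code (consistent with the symbols pos, i and d_input above), the encoding matrix for one window can be built as:

```python
import numpy as np

def positional_encoding(length=125, d_input=6):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_input)),
    PE[pos, 2i+1] = cos(pos / 10000^(2i/d_input)).

    length matches the 125-sample window and d_input = 6 the six channels.
    """
    pe = np.zeros((length, d_input))
    pos = np.arange(length)[:, None]
    i = np.arange(0, d_input, 2)[None, :]
    angle = pos / np.power(10000.0, i / d_input)
    pe[:, 0::2] = np.sin(angle)  # even feature indices
    pe[:, 1::2] = np.cos(angle)  # odd feature indices
    return pe
```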
The data with position information added is input into the Encoder module of the Transformer. The data first passes through the Multi-Head Attention layer, computed as:
MultiHead(X) = Concat(head_1, …, head_h)W^O
head_i = Attention(XW_i^Q, XW_i^K, XW_i^V)
Attention(Q, K, V) = softmax(QK^T / √d_k)V
where X is the input time window, Concat is the concatenation function, and the parameter matrices W_i^Q, W_i^K, W_i^V and W^O are all learnable. d_model is the dimension of the model output features, here set to d_model = 256. With h = 4 heads, the parameter matrix shapes give d_k = d_v = d_model/h = 64, which satisfies the requirement that d_k be a perfect square.
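With d_model = 256, h = 4 and d_k = 64 as above, the multi-head attention computation can be sketched as follows (weights are passed in explicitly; in a trained model they would be learned parameters):

```python
import numpy as np

def softmax(z):
    # numerically stable softmax along the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo):
    """MultiHead(X) = Concat(head_1, ..., head_h) W_O, where
    head_i = Attention(X Wq[i], X Wk[i], X Wv[i]) and
    Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    X: (window, d_input); Wq, Wk, Wv: (h, d_input, d_k); Wo: (h*d_k, d_model).
    """
    d_k = Wq.shape[-1]
    heads = []
    for i in range(Wq.shape[0]):
        Q, K, V = X @ Wq[i], X @ Wk[i], X @ Wv[i]
        heads.append(softmax(Q @ K.T / np.sqrt(d_k)) @ V)
    return np.concatenate(heads, axis=-1) @ Wo
```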
The features extracted by the Multi-Head Attention layer are input into the Feed Forward Network layer, which consists of two fully-connected layers with ReLU as the activation function, computed as:
FFN(x) = max(0, xW_1 + b_1)W_2 + b_2
We use global average pooling to integrate the features extracted by the Encoder layer, followed finally by a fully-connected network layer with the softmax function as the activation function to complete the final classification.
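The classification head described above (feed-forward network, global average pooling, then a softmax fully-connected layer) can be sketched as follows; all weight names and shapes here are illustrative assumptions:

```python
import numpy as np

def classify_head(features, W1, b1, W2, b2, Wc, bc):
    """FFN(x) = max(0, x W1 + b1) W2 + b2, then global average pooling
    over the time axis, then a softmax fully-connected layer.

    features: (window, d_model); Wc, bc map the pooled vector to the
    seven action classes.
    """
    ffn = np.maximum(0.0, features @ W1 + b1) @ W2 + b2  # feed-forward network
    pooled = ffn.mean(axis=0)                            # global average pooling
    logits = pooled @ Wc + bc                            # fully-connected layer
    e = np.exp(logits - logits.max())                    # stable softmax
    return e / e.sum()
```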
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and various modules thereof provided by the present invention in purely computer readable program code, the same procedures can be implemented entirely by logically programming method steps such that the systems, apparatus, and various modules thereof are provided in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.
Claims (10)
1. A surface electromyogram signal classification method based on a Transformer model is characterized by comprising the following steps:
step S1: collecting corresponding multi-channel surface electromyographic signals according to preset actions, and filtering and removing noise of the collected signals;
step S2: converting the filtered signals into a data-tag sequence by using a sliding window technology, and preprocessing the data of each window;
step S3: carrying out position coding on the preprocessed window data, and inputting the window data into a coder layer of a Transformer model to extract characteristics;
step S4: and integrating the features extracted by the encoder through global pooling, and then obtaining a final classification result through a layer of fully-connected network.
2. The method for classifying surface electromyographic signals based on a Transformer model according to claim 1, wherein the step S1 comprises: collecting corresponding multi-channel surface electromyographic signals according to preset types of movement actions and preset types of rest and relaxation actions, and filtering noise from the collected signals through a band-pass filter.
3. The method for classifying surface electromyographic signals based on a Transformer model according to claim 1, wherein the sliding-window technique in the step S2 comprises: adjacent windows overlap by a preset number of samples;
the preprocessing comprises: individually normalizing each channel of each window.
4. The method for classifying surface electromyographic signals based on a Transformer model according to claim 1, wherein the position encoding in the step S3 comprises the sinusoidal encoding:
PE(pos, 2i) = sin(pos / 10000^(2i/d_input))
PE(pos, 2i+1) = cos(pos / 10000^(2i/d_input))
wherein pos represents the location of the data within the time window; i represents the location of the data feature within the current group of data; d_input represents the dimension of the input data features.
5. The method for classifying surface electromyographic signals based on a Transformer model according to claim 1, wherein the feature extraction performed by the encoder layer of the Transformer model in the step S3 comprises:
the encoder module of the used Transformer comprises a multi-head attention network and a feedforward network;
the multi-head attention network includes: extracting internal features of the input sequence through a multi-head attention mechanism, wherein the formula is as follows:
MultiHead(X) = Concat(head_1, …, head_h)W^O
head_i = Attention(XW_i^Q, XW_i^K, XW_i^V)
wherein X represents a time window of input; concat represents a splicing function; parameter matrix Andall are learnable matrices; dmodelRepresenting dimensions of the transform model output features; if the number of the heads of the multi-head is h, the shape of the parameter matrix is dk=dv=dmodelH, requirement dkMust be a square number; q represents a query matrix; k represents a matrix of the relevance of the inquired information and other information; v represents a matrix of queried information; superscript T represents matrix transposition; r represents a vector space; dinputCharacteristic dimension representing input information, dkRepresents one dimension of K;
the feedforward network comprises a two-layer fully-connected network, ReLU is used as an activation function, and the calculation formula is as follows:
FFN(x) = max(0, xW_1 + b_1)W_2 + b_2
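The multi-head attention and feedforward formulas of claim 5 can be sketched in numpy as follows (an illustrative implementation of the stated equations; the head count and dimensions are example values, not from the patent):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, h):
    # X: (seq, d_input); Wq/Wk/Wv: lists of h learnable (d_input, d_k) matrices;
    # Wo: (h*d_k, d_model). Implements MultiHead(X) = Concat(head_1..head_h) W^O.
    heads = []
    for i in range(h):
        Q, K, V = X @ Wq[i], X @ Wk[i], X @ Wv[i]
        d_k = Q.shape[-1]
        A = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)  # Attention(Q,K,V) = softmax(QK^T/√d_k)V
        heads.append(A @ V)
    return np.concatenate(heads, axis=-1) @ Wo

def ffn(x, W1, b1, W2, b2):
    # FFN(x) = max(0, xW_1 + b_1)W_2 + b_2  (two-layer fully connected, ReLU activation)
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2
```

A full encoder layer would additionally wrap both sub-networks with residual connections and layer normalization, which the claims do not spell out.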
6. The method for classifying surface electromyogram signals based on a Transformer model according to claim 1, wherein the step S4 comprises: the features extracted by the encoder are integrated using global average pooling, and then finally classified via the softmax function of the fully-connected network.
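Step S4 reduces the encoder output to a single feature vector and maps it to class probabilities. A minimal sketch (the class count and dimensions are assumed for illustration):

```python
import numpy as np

def classify(features, W, b):
    # features: (seq_len, d_model) output of the encoder layer.
    pooled = features.mean(axis=0)       # global average pooling over the time axis
    logits = pooled @ W + b              # one fully-connected layer
    e = np.exp(logits - logits.max())
    probs = e / e.sum()                  # softmax over the action classes
    return probs, int(probs.argmax())    # probabilities and predicted action index
```

Averaging over time makes the classifier head independent of the window length, so the same weights serve windows of any duration.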
7. A surface electromyogram signal classification system based on a Transformer model is characterized by comprising:
module M1: collecting corresponding multi-channel surface electromyographic signals according to preset actions, and filtering and removing noise of the collected signals;
module M2: converting the filtered signals into a data-tag sequence by using a sliding window technology, and preprocessing the data of each window;
module M3: carrying out position encoding on the preprocessed window data, and inputting the window data into the encoder layer of a Transformer model to extract features;
module M4: and integrating the features extracted by the encoder through global pooling, and then obtaining a final classification result through a layer of fully-connected network.
8. The Transformer model-based surface electromyographic signal classification system according to claim 7, wherein the module M1 comprises: collecting the corresponding multichannel surface electromyographic signals according to preset types of movement actions and preset types of rest and relaxation actions, and filtering noise from the collected signals through a band-pass filter.
9. The Transformer model-based surface electromyogram signal classification system according to claim 7, wherein the position encoding in the module M3 comprises:
PE(pos, 2i) = sin(pos / 10000^(2i/d_input))
PE(pos, 2i+1) = cos(pos / 10000^(2i/d_input))
wherein pos represents the location of the data within the time window; i represents the location of the data feature within the current group of data; d_input represents the dimension of the input data features;
inputting into the encoder layer of the Transformer model to extract features in the module M3 comprises:
the encoder module of the Transformer comprises a multi-head attention network and a feedforward network;
the multi-head attention network includes: extracting internal features of the input sequence through a multi-head attention mechanism, wherein the formula is as follows:
MultiHead(X) = Concat(head_1, …, head_h)W^O
head_i = Attention(XW_i^Q, XW_i^K, XW_i^V)
Attention(Q, K, V) = softmax(QK^T/√d_k)V
wherein X represents the input time window; Concat represents the splicing (concatenation) function; the parameter matrices W_i^Q ∈ R^(d_input×d_k), W_i^K ∈ R^(d_input×d_k), W_i^V ∈ R^(d_input×d_v) and W^O ∈ R^(hd_v×d_model) are all learnable; d_model represents the dimension of the features output by the Transformer model; if the number of heads of the multi-head attention is h, the parameter matrices are shaped so that d_k = d_v = d_model/h, and d_k is required to be a perfect square; Q represents the query matrix; K represents the matrix of the relevance of the queried information to other information; V represents the matrix of the queried information; the superscript T represents matrix transposition; R represents a vector space; d_input represents the feature dimension of the input information, and d_k represents one dimension of K;
the feedforward network comprises a two-layer fully-connected network, ReLU is used as an activation function, and the calculation formula is as follows:
FFN(x) = max(0, xW_1 + b_1)W_2 + b_2
10. The Transformer model-based surface electromyographic signal classification system according to claim 7, wherein the module M4 comprises: the features extracted by the encoder are integrated using global average pooling, and then finally classified via the softmax function of the fully-connected network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110839308.5A CN113397572A (en) | 2021-07-23 | 2021-07-23 | Surface electromyographic signal classification method and system based on Transformer model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113397572A true CN113397572A (en) | 2021-09-17 |
Family
ID=77687607
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110839308.5A Pending CN113397572A (en) | 2021-07-23 | 2021-07-23 | Surface electromyographic signal classification method and system based on Transformer model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113397572A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109924977A (en) * | 2019-03-21 | 2019-06-25 | 西安交通大学 | A kind of surface electromyogram signal classification method based on CNN and LSTM |
US10743809B1 (en) * | 2019-09-20 | 2020-08-18 | CeriBell, Inc. | Systems and methods for seizure prediction and detection |
CN111616706A (en) * | 2020-05-20 | 2020-09-04 | 山东中科先进技术研究院有限公司 | Surface electromyogram signal classification method and system based on convolutional neural network |
CN112466326A (en) * | 2020-12-14 | 2021-03-09 | 江苏师范大学 | Speech emotion feature extraction method based on transform model encoder |
CN113033657A (en) * | 2021-03-24 | 2021-06-25 | 武汉理工大学 | Multi-user behavior identification method based on Transformer network |
Non-Patent Citations (1)
Title |
---|
Qizheng Gu et al., "Automatic Generation of Electromyogram Diagnosis Report," 2020 IEEE International Conference on Bioinformatics and Biomedicine |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113901893A (en) * | 2021-09-22 | 2022-01-07 | 西安交通大学 | Electrocardiosignal identification and classification method based on multiple cascade deep neural network |
CN113901893B (en) * | 2021-09-22 | 2023-09-15 | 西安交通大学 | Electrocardiosignal identification and classification method based on multi-cascade deep neural network |
CN114863912A (en) * | 2022-05-05 | 2022-08-05 | 中国科学技术大学 | Silent voice decoding method based on surface electromyogram signals |
CN114863912B (en) * | 2022-05-05 | 2024-05-10 | 中国科学技术大学 | Silent voice decoding method based on surface electromyographic signals |
CN114626424A (en) * | 2022-05-16 | 2022-06-14 | 天津大学 | Data enhancement-based silent speech recognition method and device |
CN114626424B (en) * | 2022-05-16 | 2022-09-13 | 天津大学 | Data enhancement-based silent speech recognition method and device |
CN116485729A (en) * | 2023-04-03 | 2023-07-25 | 兰州大学 | Multistage bridge defect detection method based on transformer |
CN116485729B (en) * | 2023-04-03 | 2024-01-12 | 兰州大学 | Multistage bridge defect detection method based on transformer |
CN116070985A (en) * | 2023-04-06 | 2023-05-05 | 江苏华溯大数据有限公司 | Dangerous chemical vehicle loading and unloading process identification method |
CN116127364A (en) * | 2023-04-12 | 2023-05-16 | 上海术理智能科技有限公司 | Integrated transducer-based motor imagery decoding method and system |
CN116434343A (en) * | 2023-04-25 | 2023-07-14 | 天津大学 | Video motion recognition method based on high-low frequency double branches |
CN116434343B (en) * | 2023-04-25 | 2023-09-19 | 天津大学 | Video motion recognition method based on high-low frequency double branches |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113397572A (en) | Surface electromyographic signal classification method and system based on Transformer model | |
CN108491077B (en) | Surface electromyographic signal gesture recognition method based on multi-stream divide-and-conquer convolutional neural network | |
CN110610172B (en) | Myoelectric gesture recognition method based on RNN-CNN architecture | |
CN109620651B (en) | Intelligent auxiliary rehabilitation equipment based on synchronous brain and muscle electricity | |
CN107736894A (en) | A kind of electrocardiosignal Emotion identification method based on deep learning | |
CN109598222B (en) | EEMD data enhancement-based wavelet neural network motor imagery electroencephalogram classification method | |
CN112466326A (en) | Speech emotion feature extraction method based on transform model encoder | |
Montazerin et al. | ViT-HGR: Vision transformer-based hand gesture recognition from high density surface EMG signals | |
Godoy et al. | Electromyography-based, robust hand motion classification employing temporal multi-channel vision transformers | |
CN113158964A (en) | Sleep staging method based on residual learning and multi-granularity feature fusion | |
CN113180659A (en) | Electroencephalogram emotion recognition system based on three-dimensional features and cavity full convolution network | |
CN113128353B (en) | Emotion perception method and system oriented to natural man-machine interaction | |
Huang et al. | Classify motor imagery by a novel CNN with data augmentation | |
CN110598628A (en) | Electromyographic signal hand motion recognition method based on integrated deep learning | |
Roy et al. | Hand movement recognition using cross spectrum image analysis of EMG signals-A deep learning approach | |
Ye et al. | Attention bidirectional LSTM networks based mime speech recognition using sEMG data | |
CN111950460A (en) | Muscle strength self-adaptive stroke patient hand rehabilitation training action recognition method | |
CN116628420A (en) | Brain wave signal processing method based on LSTM neural network element learning | |
CN116225222A (en) | Brain-computer interaction intention recognition method and system based on lightweight gradient lifting decision tree | |
CN114743569A (en) | Speech emotion recognition method based on double-layer fusion deep network | |
CN113642528B (en) | Hand movement intention classification method based on convolutional neural network | |
Ye et al. | Upper Limb Motion Recognition Using Gated Convolution Neural Network via Multi-Channel sEMG | |
CN111883178B (en) | Double-channel voice-to-image-based emotion recognition method | |
CN114343679A (en) | Surface electromyogram signal upper limb action recognition method and system based on transfer learning | |
Bo et al. | Hand gesture recognition using semg signals based on cnn |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20210917 |