CN114266276B - Motor imagery electroencephalogram signal classification method based on channel attention and multi-scale time domain convolution - Google Patents
- Publication number: CN114266276B (application CN202111606161.1A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
Abstract
A motor imagery electroencephalogram signal classification method based on channel attention and multi-scale time domain convolution belongs to the field of computer software. Aiming at the difficulty of feature extraction caused by the low signal-to-noise ratio of electroencephalogram signals, an improved network model based on EEGNet is proposed, called MCA-EEGNet for short. First, the ordinary convolution layer in the EEGNet model is replaced by a parallel multi-scale temporal convolution layer to extract features better and thereby improve classification accuracy. Meanwhile, a channel attention module, ECA, is added so that channel information highly correlated with the input data receives more attention during network training, further improving the robustness of the model. Compared with the EEGNet model, the proposed classification method more effectively improves the feature extraction and classification performance for motor imagery electroencephalogram signals.
Description
Technical Field
The invention discloses a motor imagery electroencephalogram signal classification method based on channel attention and multi-scale time domain convolution, which can be used for identifying motor imagery limb parts and belongs to the field of computer software.
Background
In recent years, EEG-based brain-computer interface (BCI) technology has developed rapidly. A brain-computer interface is a real-time communication system connecting the brain and external equipment; it realizes a new mode of human-computer interaction in which the brain communicates directly with the outside world, and has important research significance and huge application potential in fields such as biomedicine, neural rehabilitation, and artificial intelligence. Brain-computer interface technology mainly collects data from the brain in three ways: invasive, semi-invasive, and non-invasive. EEG-based brain-computer interfaces are non-invasive and are the most commonly used technique in the field. Among BCI paradigms, motor imagery is one of the most representative. Imagining limb activity changes the electrical activity of the brain: when a person moves or imagines moving a unilateral limb, the amplitude of the mu and beta rhythms over the contralateral sensorimotor cortex decreases significantly, while that over the ipsilateral sensorimotor cortex increases significantly; these phenomena are known as event-related desynchronization (ERD) and event-related synchronization (ERS). Based on these differences in the rhythms of the sensorimotor areas, electroencephalogram signals of different motor imagery tasks can be recognized and classified, which is of great help and research significance for restoring sensory and motor functions in patients with stroke or nervous system injury. Although brain-computer interface technology based on the motor imagery paradigm has been widely applied in fields such as rehabilitation and medical treatment, its decoding performance still cannot fully meet the needs of practical applications.
Because the EEG signal is non-stationary and has a very low signal-to-noise ratio, the acquired signal is easily contaminated by noise such as electrooculographic (EOG) and electromyographic (EMG) artifacts. In addition, the electroencephalogram signals generated by the same imagery task differ across subjects, and the time series produced by the same subject for the same imagery task may also vary considerably at different times. How to extract effective features from motor imagery electroencephalogram signals and classify them accurately is therefore a challenging problem in the field of motor imagery.
To address these problems, machine learning and deep learning methods are commonly employed for feature extraction and classification. The traditional classification pipeline for motor imagery electroencephalogram signals consists of two steps: feature extraction and classification. Features are first extracted from the raw electroencephalogram through a series of algorithms and then fed into a classifier to obtain the final result. Traditional feature extraction methods mainly include the common spatial pattern (CSP), the filter bank common spatial pattern (FBCSP), and the like; common classifiers include linear discriminant analysis (LDA), support vector machines (SVM), Bayesian classifiers, and the like. In recent years, researchers have found that deep learning methods can achieve better classification performance than classical machine learning methods. Lawhern et al. proposed EEGNet, a compact and general convolutional neural network designed specifically for electroencephalogram recognition tasks. Compared with common convolutional neural network structures, it uses a depthwise (channel-by-channel) convolution and a depthwise separable convolution (a channel-by-channel convolution followed by a point-by-point convolution) to construct an EEG-specific model. The EEGNet model has since been successfully applied to multiple paradigms in the brain-computer interface field with good classification results. Wu et al. proposed a parallel multi-scale filter bank convolutional neural network (MSFBCNN) for motor imagery electroencephalogram classification, whose novelty lies in using four parallel temporal convolution layers to extract temporal features and making full use of feature information in an end-to-end network.
Dai et al. proposed a convolutional neural network with mixed convolution scales (HS-CNN), which effectively addresses the limitation on classification performance caused by using a single convolution scale in a CNN and thereby further improves classification accuracy. From the above studies it can be seen that lightweight networks and filter banks play a key role in deep-learning-based motor imagery electroencephalogram classification, and that the classification accuracy of existing methods still needs further improvement.
Channel attention mechanisms have great potential for improving the performance of deep convolutional neural networks and have therefore attracted wide attention from more and more researchers. SE-Net (Squeeze-and-Excitation Networks), CBAM (Convolutional Block Attention Module), and A2-Net (Double Attention Networks) are several relatively classical attention mechanisms. SE-Net was proposed by Hu et al.; it adaptively recalibrates channel-wise feature responses by capturing the dependencies between all channels. Although this module is widely used, its dimensionality reduction adversely affects channel attention prediction, and capturing the dependencies among all channels reduces model efficiency. To address these issues, Wang et al. proposed the Efficient Channel Attention (ECA) module for deep convolutional neural networks. The module first performs channel-by-channel global average pooling and then, without dimensionality reduction, convolves each channel with its k neighbors to extract information between adjacent channels. This design avoids the side effects of dimensionality reduction, and appropriate cross-channel interaction significantly reduces model complexity and parameter count while maintaining performance. To further reduce the number of parameters, Saini et al. proposed a simple and effective Ultra-Lightweight Subspace Attention Module (ULSAM); although it reduces the parameter count and computational overhead, it brings little gain to the overall model and remains to be improved. Qin et al. proposed FcaNet (Frequency Channel Attention Networks), which designs multi-spectral channel attention from the frequency domain with good results, but its applicability remains to be further studied.
By discussing and analyzing the advantages and disadvantages of existing research methods, the invention draws new inspiration and research ideas and, based on EEGNet, proposes a new improved network model (MCA-EEGNet). The ordinary convolution layer in the EEGNet model is replaced by a multi-scale temporal convolution layer to better extract features; meanwhile, a channel attention module, ECA, is added so that channel information highly correlated with the input data receives more attention during network training, further improving the robustness of the model. Compared with the EEGNet model, the proposed classification method more effectively improves the decoding performance for motor imagery electroencephalogram signals.
Disclosure of Invention
The invention provides a motor imagery electroencephalogram signal classification method based on channel attention and multi-scale time domain convolution, which can effectively improve the feature extraction and classification performance for motor imagery electroencephalogram signals. Aiming at the difficulty of feature extraction caused by the low signal-to-noise ratio of electroencephalogram signals, the ordinary convolution layer in the EEGNet model is replaced by a parallel multi-scale temporal convolution layer to better perform feature extraction and thereby improve classification accuracy; meanwhile, a channel attention module, ECA, is added so that channel information highly correlated with the input data receives more attention during network training, further improving the robustness of the model. Compared with EEGNet, the proposed method achieves higher classification accuracy.
Through research, discussion, and repeated practice, the final scheme of the method is determined as follows:
First, the raw electroencephalogram data set is preprocessed and divided into a training set, a verification set, and a test set, which are input into the new network model MCA-EEGNet for training and testing. The model's classification results are then obtained and evaluated to verify the effectiveness of the method.
The technical scheme of the invention comprises the following specific steps:
step 1, data preprocessing: performing band-pass filtering processing on the motor imagery electroencephalogram signals by using a band-pass filter, and then performing exponential moving average standardization on the filtered signals; dividing an electroencephalogram signal data set into a training set, a verification set and a test set;
Step 2, constructing the MCA-EEGNet model: the ordinary convolution layer in the EEGNet model is replaced by a parallel multi-scale temporal convolution layer to better extract features; the channel attention module ECA is added so that channel information highly correlated with the input data receives more attention during network training;
step 3, inputting the training set and the verification set in the step 1 into an MCA-EEGNet model for training;
And 4, inputting the test set in the step 1 into the trained model in the step 3 for classification, and evaluating the classification accuracy.
The invention has the following advantages:
1. Compared with a network with a single-scale convolution layer, a network with parallel multi-scale temporal convolution layers can extract features of the signal at different scales, further improving the accuracy of the motor imagery classification task.
2. By adding the channel attention module to the improved model, channel information highly correlated with the input data receives more attention during network training; at the same time, appropriate cross-channel interaction not only maintains network performance but also significantly reduces model complexity and improves training speed.
Drawings
FIG. 1 is a general flow chart of the method of the present invention
FIG. 2 MCA-EEGNet network block diagram
FIG. 3 is a time schematic of a motor imagery data set
Detailed Description
Aiming at the difficulty of extracting and classifying features caused by the low signal-to-noise ratio of electroencephalogram signals, the invention provides a motor imagery electroencephalogram signal classification method based on channel attention and multi-scale time domain convolution. The ordinary convolution layer in the EEGNet model is replaced by a parallel multi-scale temporal convolution layer to better extract features and thereby improve classification accuracy. Meanwhile, a channel attention module, ECA, is added so that channel information highly correlated with the input data receives more attention during network training, further improving the robustness of the model and providing an efficient, better-performing deep learning method for classifying motor imagery electroencephalogram signals.
FIG. 1 is a general flow chart of the method of the present invention, which can be broken down into the following steps:
step 1, preprocessing data and dividing a data set;
Step 2, constructing an MCA-EEGNet model;
Step 3, training the model by using the training set and the verification set;
and 4, testing the model effect and evaluating the classification accuracy.
Specific details of each step are set forth below:
Step 1:
(1) Band-pass filter the raw motor imagery electroencephalogram signal with a 3rd-order 4-40 Hz Butterworth band-pass filter to retain the required frequency band;
(2) Apply exponential moving average standardization to the filtered signal, with the decay factor set to 0.999, to reduce the influence of numerical scale differences on the model;
(3) Before training starts, divide the preprocessed electroencephalogram data set: 80% of the training samples are used as the training set and the remaining 20% as the verification set.
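The standardization and split in (2) and (3) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the running statistics are updated with the stated decay factor 0.999, while the initialization and the `eps` stabilizer are assumptions; the band-pass filtering step is omitted here.

```python
import numpy as np

def ema_standardize(x, decay=0.999, eps=1e-4):
    """Exponential moving average standardization, channel by channel.

    x: array of shape (channels, time). decay=0.999 follows step 1 (2);
    eps and the initial statistics are illustrative assumptions.
    """
    out = np.empty(x.shape, dtype=float)
    mean = x[:, 0].astype(float)            # running mean per channel
    var = np.ones(x.shape[0], dtype=float)  # running variance per channel
    for t in range(x.shape[1]):
        mean = decay * mean + (1 - decay) * x[:, t]
        var = decay * var + (1 - decay) * (x[:, t] - mean) ** 2
        out[:, t] = (x[:, t] - mean) / np.sqrt(var + eps)
    return out

def split_train_val(n_samples, train_frac=0.8):
    """80/20 split of the training samples into training and validation indices."""
    n_train = int(n_samples * train_frac)
    return list(range(n_train)), list(range(n_train, n_samples))

z = ema_standardize(np.random.randn(3, 100))   # 3 channels, 100 time points
train_idx, val_idx = split_train_val(500)      # 400 training, 100 validation
```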
Step 2:
Aiming at the difficulty of feature extraction and classification caused by the low signal-to-noise ratio of electroencephalogram signals, the invention proposes a new improved model based on EEGNet, called MCA-EEGNet for short. The network structure is shown in FIG. 2 and consists of four parts: Block1, Block2, Block3, and a fully connected layer. The model was built using PyTorch. Each part is described in detail below:
(1)Block1
Inspired by the filter bank idea, two parallel multi-scale temporal convolution layers convolve the input signal separately; the convolution stride is set to 1, and the padding is 1/2 of the kernel size. Through repeated experiments, the two kernel sizes were finally set to (1, 64) and (1, 40), which achieved the best results. The outputs of the two convolution layers are then concatenated and output after normalization.
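A minimal PyTorch sketch of this parallel multi-scale temporal convolution, using the stated kernel sizes (1, 64) and (1, 40), stride 1, and padding of half the kernel size; the per-branch filter count f1 and the input shape (22 leads, 500 time points) are illustrative assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

class MultiScaleTemporalConv(nn.Module):
    """Block1 sketch: two parallel temporal convolutions whose outputs
    are concatenated along the filter dimension and batch-normalized."""
    def __init__(self, f1=8):  # f1 filters per branch (assumed)
        super().__init__()
        self.conv_long = nn.Conv2d(1, f1, (1, 64), stride=1, padding=(0, 32), bias=False)
        self.conv_short = nn.Conv2d(1, f1, (1, 40), stride=1, padding=(0, 20), bias=False)
        self.bn = nn.BatchNorm2d(2 * f1)

    def forward(self, x):  # x: (batch, 1, leads, time)
        y = torch.cat([self.conv_long(x), self.conv_short(x)], dim=1)
        return self.bn(y)

block1 = MultiScaleTemporalConv()
x = torch.randn(4, 1, 22, 500)  # batch of 4 trials, 22 leads, 500 time points
y = block1(x)                   # both branches yield the same time length, here 501
```

Because both kernels are even and padded by half their size, each branch outputs the same time length, so the two results can be concatenated directly.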
(2)Block2
First, feature extraction is performed on the output of the previous layer using a spatial convolution layer with a kernel size of (C, 1), a stride of 1, and a maximum-norm weight constraint max_norm of 0.5, where C is the number of leads of the acquired electroencephalogram signal. The output is then normalized. The ELU function is used as the activation function, which speeds up training and improves classification accuracy. After the activation function, an average pooling layer with size and stride 1 × 8 processes the features to reduce the number of parameters. Finally, nodes in the corresponding layer are randomly discarded via Dropout with probability 0.5 to reduce overfitting.
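The sequence above can be sketched in PyTorch for C = 22 leads. The input/output channel counts are illustrative assumptions, and the max_norm = 0.5 constraint, typically enforced by renormalizing the weight after each optimizer step, is omitted here for brevity.

```python
import torch
import torch.nn as nn

# Block2 sketch: spatial convolution over all 22 leads, normalization,
# ELU activation, 1 x 8 average pooling, and Dropout(0.5).
block2 = nn.Sequential(
    nn.Conv2d(16, 32, (22, 1), stride=1, bias=False),  # (C, 1) spatial convolution
    nn.BatchNorm2d(32),                                # normalization
    nn.ELU(),                                          # ELU activation
    nn.AvgPool2d((1, 8)),                              # size and stride 1 x 8
    nn.Dropout(p=0.5),                                 # random node discarding
)

x = torch.randn(4, 16, 22, 501)  # e.g. the output of a multi-scale Block1
y = block2(x)                    # spatial dim collapses to 1; time reduced 8x
```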
(3)Block3
The depthwise separable convolution (SeparableConv2D) consists of two parts: a channel-by-channel convolution (DepthwiseConv2D) and a point-by-point convolution (PointwiseConv2D). First, feature extraction is performed on the output of the previous layer using a channel-by-channel convolution layer with a kernel size of (1, 33) and a stride of 1, with padding equal to 1/2 of the kernel size. A point-by-point convolution is then applied with a kernel size of (1, 1), a stride of 1, and a padding of 0. After the output is normalized, the efficient channel attention module ECA is added to assign weights across the network so that channel information highly correlated with the input data receives more attention during training. Finally, the result is output after average pooling and dropout operations.
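The separable convolution described here can be sketched as the two stated stages; in PyTorch, setting `groups` equal to the channel count turns `Conv2d` into a channel-by-channel (depthwise) convolution. The channel count 32 and the input shape are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Depthwise stage: (1, 33) kernel, stride 1, padding half the kernel size.
depthwise = nn.Conv2d(32, 32, (1, 33), stride=1, padding=(0, 16),
                      groups=32, bias=False)  # groups = channels -> channel-by-channel
# Pointwise stage: (1, 1) kernel, stride 1, padding 0.
pointwise = nn.Conv2d(32, 32, (1, 1), stride=1, padding=0, bias=False)

x = torch.randn(4, 32, 1, 62)
y = pointwise(depthwise(x))  # odd kernel with half padding preserves the time length
```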
Here, ECA is an efficient channel attention module for deep convolutional neural networks. It first performs channel-by-channel global average pooling on the input to obtain an aggregated feature. Then, without dimensionality reduction, it applies a one-dimensional convolution over each channel and its k neighboring channels to extract information between adjacent channels. Finally, the result is activated and output through a Sigmoid function. The value k represents the coverage of local cross-channel interaction and is determined adaptively from the lead number C by the mapping shown in formula 1:

k = ψ(C) = | log2(C)/γ + b/γ |_odd (1)

where γ = 2, b = 1, |m|_odd denotes the odd number nearest to m, and C denotes the number of leads of the acquired electroencephalogram signal.
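Formula 1 and the ECA module it parameterizes can be sketched as follows; this is a hedged illustration of the pooling, 1-D cross-channel convolution, and Sigmoid gating described above, with tensor-layout details assumed rather than taken from the patent.

```python
import math
import torch
import torch.nn as nn

def eca_kernel_size(c, gamma=2, b=1):
    """Formula 1: k is the odd number nearest to log2(C)/gamma + b/gamma."""
    m = math.log2(c) / gamma + b / gamma
    k = int(round(m))
    if k % 2 == 0:                       # snap to the nearest odd number
        k = k + 1 if m >= k else k - 1
    return max(k, 1)

class ECA(nn.Module):
    def __init__(self, channels):
        super().__init__()
        k = eca_kernel_size(channels)
        self.pool = nn.AdaptiveAvgPool2d(1)  # channel-by-channel global average pooling
        self.conv = nn.Conv1d(1, 1, k, padding=k // 2, bias=False)  # local cross-channel interaction

    def forward(self, x):                    # x: (batch, channels, H, W)
        w = self.pool(x).squeeze(-1).transpose(-1, -2)            # (batch, 1, channels)
        w = torch.sigmoid(self.conv(w)).transpose(-1, -2).unsqueeze(-1)
        return x * w                         # reweight the input channels

eca = ECA(32)
y = eca(torch.randn(2, 32, 1, 62))           # output keeps the input shape
```

For C = 22 leads, formula 1 gives log2(22)/2 + 1/2 ≈ 2.73, so k = 3.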
(4) Full connection layer
Finally, all obtained features are integrated through a fully connected layer and fed into Softmax for classification to obtain the final result. A maximum-norm constraint, with the maximum norm set to 0.25, is added to the fully connected layer for regularization to prevent overfitting and improve the generalization ability of the model.
Step 3: the training set and the validation set of the electroencephalogram signals are input into the MCA-EEGNet model for training, and the training process is divided into two stages. The maximum number of iterations in the first stage is set to 800 and training is ended in advance when the validation set loss function reaches a minimum to prevent overfitting and save training time. In the second stage, the verification set data are combined into the training set data for training, when the verification set loss value is smaller than the training set loss value in the first stage, training is finished in advance, and the maximum iteration number is still 800. And recording a model with the lowest loss value of the verification set in the second iteration process, predicting a test set sample by using the model, and recording the accuracy of the test set. And respectively carrying out the model training and the test on 9 subjects to obtain 9 groups of test set accuracy, and recording the average value as the final model accuracy.
In the experiments, all methods were trained with the cross-entropy loss function and the Adam optimizer, with the learning rate set to 0.001 and the other Adam parameters left at their default values. The batch size for mini-batch training was set to 64.
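The training configuration above can be sketched as a single optimization step; `nn.Linear` stands in for the MCA-EEGNet model purely for illustration, and the input/class dimensions are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Linear(100, 4)                       # placeholder for MCA-EEGNet
criterion = nn.CrossEntropyLoss()               # cross-entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # other params at defaults
batch_size = 64

x = torch.randn(batch_size, 100)                # one mini-batch of samples
y = torch.randint(0, 4, (batch_size,))          # class labels
optimizer.zero_grad()
loss = criterion(model(x), y)                   # forward pass and loss
loss.backward()                                 # backpropagation
optimizer.step()                                # Adam update
```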
Step 4: inputting the test set in the step1 into the trained model in the step 3 for classification recognition, and evaluating the classification accuracy.
The data set and experimental results used in the method of the invention are described as follows:
1. Data set
The invention was evaluated on two published data sets, BCI Competition IV Dataset 2a and Dataset 2b, whose timing is shown in FIG. 3; all data had been preprocessed with a 0.5-100 Hz band-pass filter.
The 2a data set contains 4 classes of motor imagery electroencephalogram signals (left hand, right hand, both feet, and tongue) from 9 subjects. The signals were collected from 22 electrodes at a sampling rate of 250 Hz and comprise 576 trials (i.e., 576 samples) per subject. These samples were collected over two days, with each day's experiment recorded as one session; each session contains 4 classes of samples with 72 samples per class. For the 2a data set, the data from 0.5 seconds to 2.5 seconds after presentation of the cue were extracted as one sample. All samples are labeled (i.e., marked with which body part's motor imagery the sample corresponds to).
The 2b data set contains 2 classes of motor imagery electroencephalogram signals (left and right hand) from 9 subjects. The signals were acquired from 3 electrodes, again at a sampling rate of 250 Hz. For each subject, the motor imagery task was divided into 5 sessions. Unlike the 2a data set, the first 2 sessions of the 2b data set were recorded without feedback, i.e., they contain motor imagery data without visual feedback, while the last 3 sessions include visual feedback. For the 2b data set, the data from 0.5 seconds to 4 seconds after presentation of the cue were extracted as one sample.
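The extraction windows above determine the per-sample length: at 250 Hz, the 2a window (0.5 s to 2.5 s after the cue) spans 2 s and the 2b window (0.5 s to 4 s) spans 3.5 s.

```python
fs = 250  # sampling rate in Hz for both data sets

# Dataset 2a: 0.5 s to 2.5 s after the cue -> 2 s window
n_2a = int((2.5 - 0.5) * fs)   # 500 time points per sample
# Dataset 2b: 0.5 s to 4 s after the cue -> 3.5 s window
n_2b = int((4.0 - 0.5) * fs)   # 875 time points per sample
```

So each 2a trial yields a 22 × 500 array and each 2b trial a 3 × 875 array.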
Because electroencephalogram characteristics differ greatly between subjects, classification experiments on electroencephalogram signals must compute the classification accuracy for each subject separately; the average accuracy across subjects is then taken as the performance index of the model.
2. Experimental results and discussion
In order to verify the effectiveness and versatility of the method of the present invention, a comparison experiment and an ablation experiment were performed on the public data sets 2a and 2b, respectively, with the following experimental results:
(1) Cross-dataset comparison experiments
The proposed method was compared with the EEGNet method on the 2a and 2b data sets; the experimental results are shown in Table 1:
Table 1. Results of the cross-dataset comparison experiments
As Table 1 shows, the accuracy of the proposed method is higher than that of EEGNet on both the 2a and 2b data sets, with a maximum improvement of about 8.5% on the 2a data set.
(2) Ablation experiments
Two ablation experiments were performed on the 2a data set: one with MCA-EEGNet without the attention mechanism, and one with MCA-EEGNet without the parallel multi-scale convolution layer. The experimental results are shown in Table 2:
Table 2 ablation experimental results
As Table 2 shows, both ablated models achieve higher accuracy than EEGNet but lower accuracy than the full proposed method. This indicates that both components are effective and indispensable, and that the method of the invention achieves its highest classification performance when both are present.
Claims (3)
1. A motor imagery electroencephalogram signal classification method based on channel attention and multi-scale time domain convolution is characterized by comprising the following steps:
step 1, data preprocessing: performing band-pass filtering processing on the motor imagery electroencephalogram signals by using a band-pass filter, and then performing exponential moving average standardization on the filtered signals; dividing an electroencephalogram signal data set into a training set, a verification set and a test set;
Step 2, constructing the MCA-EEGNet model: the ordinary convolution layer in the EEGNet model is replaced by a parallel multi-scale temporal convolution layer to better extract features; the channel attention module ECA is added so that channel information highly correlated with the input data receives more attention during network training;
step 3, inputting the training set and the verification set in the step 1 into an MCA-EEGNet model for training;
Step 4, inputting the test set in the step 1 into the trained MCA-EEGNet model in the step 3 for classification, and evaluating the classification accuracy;
The step 2 is specifically as follows:
The specific structure of the MCA-EEGNet model is summarized in four parts: block1, block2, block3, full link layer; each of which is described in detail below:
(1)Block1
Inspired by the filter bank idea, two parallel multi-scale temporal convolution layers convolve the input signal separately; the convolution stride is set to 1, and the padding is 1/2 of the kernel size; through repeated experiments, the two kernel sizes were finally set to (1, 64) and (1, 40), which achieved the best effect; the outputs of the two convolution layers are then concatenated and output after normalization;
(2)Block2
First, feature extraction is performed on the output of the previous layer using a spatial convolution layer with a kernel size of (C, 1), a stride of 1, and a maximum-norm weight constraint max_norm of 0.5, where C is the number of leads of the acquired electroencephalogram signal; the output is then normalized; the ELU function is used as the activation function, which speeds up training and improves classification accuracy; after the activation function, an average pooling layer with size and stride 1 × 8 processes the features to reduce the number of parameters; finally, nodes in the corresponding layer are randomly discarded via Dropout with probability 0.5 to reduce overfitting;
(3)Block3
The depthwise separable convolution (SeparableConv2D) consists of two parts: a channel-by-channel convolution (DepthwiseConv2D) and a point-by-point convolution (PointwiseConv2D); first, feature extraction is performed on the output of the previous layer using a channel-by-channel convolution layer with a kernel size of (1, 33) and a stride of 1, with padding equal to 1/2 of the kernel size; a point-by-point convolution is then applied with a kernel size of (1, 1), a stride of 1, and a padding of 0; after the output is normalized, the efficient channel attention module ECA is added to assign weights across the network so that channel information highly correlated with the input data receives more attention during training; finally, the result is output after average pooling and dropout operations;
ECA is an efficient channel attention module for deep convolutional neural networks; firstly, channel-by-channel global average pooling is applied to the input to obtain an aggregated feature; then, without dimensionality reduction, a one-dimensional convolution is applied over each channel and its k neighbouring channels to extract the information between adjacent channels; finally, the result is activated and output through a Sigmoid function; the value k represents the coverage of the local cross-channel interaction and is determined adaptively by a mapping of the lead number C, as shown in Equation 1:

k = ψ(C) = | log2(C)/γ + b/γ |_odd (1)

where γ=2, b=1, |m|_odd denotes the odd number nearest to m, and C denotes the number of leads of the acquired electroencephalogram signal;
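Formula 1 and the ECA module can be sketched as below. The rounding to the nearest odd integer follows the convention of the common ECA reference implementation (truncate, then bump even values up by one), which is an assumption since the text only defines |m|_odd verbally.

```python
import math
import torch
import torch.nn as nn

def eca_kernel_size(c_leads, gamma=2, b=1):
    """k = |log2(C)/gamma + b/gamma|_odd from Formula 1."""
    m = math.log2(c_leads) / gamma + b / gamma
    k = int(abs(m))
    return k if k % 2 == 1 else k + 1

class ECA(nn.Module):
    """Efficient channel attention: per-channel global average pooling,
    a 1-D convolution over k neighbouring channels (no dimensionality
    reduction), then Sigmoid gating of the input feature maps."""
    def __init__(self, c_leads=22):
        super().__init__()
        k = eca_kernel_size(c_leads)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                          # x: (batch, ch, H, W)
        y = x.mean(dim=(2, 3))                     # aggregated feature
        y = self.conv(y.unsqueeze(1)).squeeze(1)   # local cross-channel interaction
        return x * torch.sigmoid(y)[:, :, None, None]
```

For a 22-lead setting, k evaluates to 3.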
(4) Fully connected layer
Finally, all extracted features are integrated by a fully connected layer and fed into Softmax for classification to obtain the final result; a maximum-norm constraint is also applied to the fully connected layer for regularization, with the maximum norm value set to 0.25.
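A minimal sketch of this classification head; the number of classes (4, the usual four-class motor imagery setting) and the input feature size are assumptions.

```python
import torch
import torch.nn as nn

class ClassifierHead(nn.Module):
    """Flatten, fully connected layer under a max-norm (0.25) weight
    constraint, then Softmax over the motor imagery classes."""
    def __init__(self, in_features, n_classes=4, max_norm=0.25):
        super().__init__()
        self.max_norm = max_norm
        self.fc = nn.Linear(in_features, n_classes)

    def forward(self, x):
        with torch.no_grad():  # enforce the max-norm regularizer
            self.fc.weight.data = torch.renorm(
                self.fc.weight.data, p=2, dim=0, maxnorm=self.max_norm)
        return torch.softmax(self.fc(torch.flatten(x, 1)), dim=1)
```

In practice the Softmax would usually be folded into the cross-entropy loss during training and applied explicitly only at inference.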
2. The method for classifying motor imagery electroencephalogram signals based on channel attention and multi-scale time domain convolution according to claim 1, wherein:
Step 1 specifically comprises the following steps:
(1) Band-pass filtering the original motor imagery electroencephalogram signal with a 3rd-order 4-40 Hz Butterworth band-pass filter to retain the required frequency band;
(2) Applying exponential moving standardization to the filtered signal, with the decay factor set to 0.999, to reduce the influence of amplitude differences on model performance;
(3) Before training, dividing the preprocessed electroencephalogram data set: 80% of the training samples are used as the training set and the remaining 20% as the validation set.
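The three preprocessing steps above can be sketched with NumPy/SciPy as below; the sampling rate (250 Hz, typical for motor imagery benchmark data) and the initialization of the running statistics are assumptions not stated in the claim.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(eeg, fs=250, decay=0.999):
    """3rd-order 4-40 Hz Butterworth band-pass, then exponential moving
    standardization with decay factor 0.999. eeg: (..., n_samples)."""
    b, a = butter(3, [4, 40], btype="bandpass", fs=fs)
    x = filtfilt(b, a, eeg, axis=-1)
    mean = np.zeros(x.shape[:-1])   # running per-channel mean (assumed init)
    var = np.ones(x.shape[:-1])     # running per-channel variance (assumed init)
    out = np.empty_like(x)
    for t in range(x.shape[-1]):
        mean = decay * mean + (1 - decay) * x[..., t]
        var = decay * var + (1 - decay) * (x[..., t] - mean) ** 2
        out[..., t] = (x[..., t] - mean) / np.sqrt(var)
    return out

def train_val_split(trials, labels, val_frac=0.2, seed=0):
    """Randomly hold out 20% of the training trials as the validation set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(trials))
    n_val = int(len(trials) * val_frac)
    va, tr = idx[:n_val], idx[n_val:]
    return trials[tr], labels[tr], trials[va], labels[va]
```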
3. The method for classifying motor imagery electroencephalogram signals based on channel attention and multi-scale time domain convolution according to claim 1, wherein:
Step 3: the training set and validation set of the electroencephalogram signals are input into the MCA-EEGNet model for training, which proceeds in two stages; in the first stage the maximum number of iterations is set to 800, and training ends early when the validation-set loss reaches its minimum, to prevent overfitting and save training time; in the second stage the validation data are merged into the training data, the maximum number of iterations is still 800, and training ends early when the validation-set loss value falls below the first-stage training-set loss value; the model with the lowest validation-set loss during the second stage is recorded and used to predict the test-set samples, and the test-set accuracy is recorded; MCA-EEGNet is trained and tested separately for each of the 9 subjects, giving 9 test-set accuracies whose mean is recorded as the final model accuracy;
the training uses the cross-entropy loss function with the Adam optimizer; the learning rate is set to 0.001 and the remaining parameters keep the Adam defaults; the batch size for mini-batch training is set to 64.
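The first-stage early-stopping loop can be sketched generically as below. The patience window is an assumption (the claim only says training ends early when the validation loss reaches its minimum); the second stage would reuse the same loop with the stop condition "combined-set loss below the stage-1 validation minimum".

```python
def early_stopped_stage(train_step, val_loss, max_epochs=800, patience=50):
    """Run up to max_epochs training epochs, tracking the best validation
    loss, and stop early once it has not improved for `patience` epochs.
    train_step() runs one epoch; val_loss() returns the current loss."""
    best, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        train_step()
        loss = val_loss()
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break          # no improvement for `patience` epochs
    return best
```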
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111606161.1A CN114266276B (en) | 2021-12-25 | 2021-12-25 | Motor imagery electroencephalogram signal classification method based on channel attention and multi-scale time domain convolution |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114266276A CN114266276A (en) | 2022-04-01 |
CN114266276B true CN114266276B (en) | 2024-05-31 |