CN114781441A - EEG motor imagery classification method and multi-space convolution neural network model - Google Patents

EEG motor imagery classification method and multi-space convolution neural network model

Info

Publication number
CN114781441A
Authority
CN
China
Prior art keywords
eeg signal
spatial
convolution
features
eeg
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210353223.0A
Other languages
Chinese (zh)
Other versions
CN114781441B (en)
Inventor
Zhao Wei
Liu Tiejun
Gao Dongrui
Li Xin
Xie Jiaxin
Qin Yun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202210353223.0A priority Critical patent/CN114781441B/en
Publication of CN114781441A publication Critical patent/CN114781441A/en
Application granted granted Critical
Publication of CN114781441B publication Critical patent/CN114781441B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12 Classification; Matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods


Abstract

The invention discloses an EEG motor imagery classification method and a multi-space convolution neural network model. The method extracts the temporal features of an EEG signal through temporal convolution while preserving the spatial characteristics of the signal, then extracts its spatial features through spatial convolution; the temporal and spatial features are mapped into a classifier to complete classification. The model comprises at least a spatial convolution and a temporal convolution, so that feature expressions of the EEG signal in different spaces are extracted simultaneously. Experimental results show that the classification accuracy of the method is superior to that of existing methods on several data sets, demonstrating its advantages. The invention helps to advance the field of EEG motor imagery.

Description

EEG motor imagery classification method and multi-space convolution neural network model
Technical Field
The invention relates to the technical field of electroencephalogram (EEG) signal analysis, and in particular to an EEG motor imagery classification method and a multi-space convolution neural network model.
Background
The analysis of EEG motor imagery data has advanced with the rapid development of brain-computer interface technology. In a motor imagery task, EEG data is recorded while subjects imagine movements. Analysis of such EEG data helps to study the behaviour of the subject's brain. Furthermore, decoding the EEG signals generated by motor imagery can help disabled patients control external mechanical devices, such as the direction of movement of a wheelchair or the motion of a robotic arm. The analysis of motor imagery EEG signals is therefore of great significance for the independent activity of stroke patients.
Traditional EEG signal analysis mainly relies on classical machine learning algorithms for the feature extraction task. For example, the common spatial pattern (CSP) is one of the most popular and powerful feature extraction methods, and a series of methods have been derived from it, such as the filter bank common spatial pattern (FBCSP). The extracted EEG features are fed into a classifier, such as linear discriminant analysis (LDA) or a support vector machine (SVM), to obtain a classification result. However, classical machine learning feature extraction algorithms depend on extensive prior knowledge of the data, and acquiring this prior knowledge is time-consuming. More importantly, the generalization capability of traditional classification models has long been a challenge.
With the development of deep learning, more and more neural networks have been applied to EEG feature extraction and classification, such as EEGNet and residual networks. The strong learning capability of deep models frees the EEG feature extraction process from data prior knowledge, and deep models typically generalize better. Unfortunately, most current deep learning models focus only on the expression of the EEG signal in a single space and ignore the useful information the signal carries in other spaces.
Disclosure of Invention
To address the defects of the prior art, the invention provides an EEG motor imagery classification method and a multi-space convolution neural network model. The invention first extracts temporal feature information along the time dimension of the EEG using temporal convolution while preserving the spatial characteristics of the EEG signal. Feature information of the EEG signal in the spatial dimension is then extracted by spatial convolution. Finally, a fully connected network maps the extracted temporal and spatial features into the category space to complete the classification task.
The specific technical scheme of the invention is as follows:
according to a first aspect of the present invention, there is provided an EEG motor imagery classification method based on a multi-space convolution neural network model, the method comprising: extracting the temporal features of the EEG signal through temporal convolution while preserving the spatial characteristics of the EEG signal, and then extracting the spatial features of the EEG signal through spatial convolution; and mapping the temporal features and the spatial features into a classifier to complete classification.
Further, extracting feature information of the EEG signal in the time dimension by temporal convolution while preserving its spatial features, and then extracting feature information of the EEG signal in the spatial dimension by spatial convolution, specifically comprises: raising the channel dimension of the EEG signal with a first convolution kernel of size 1×1; enriching the spatial features of the EEG signal with a second convolution kernel of size 1×1; convolving the EEG signal along time with a third convolution kernel of size 1×11 to obtain the temporal features of the EEG signal; performing weighted spatial filtering on the EEG signal with a fourth convolution kernel of size 60×1 to obtain the spatial features of the EEG signal; and compressing the temporal features and the spatial features through a first pooling layer to remove redundant information and reduce the number of parameters.
Further, mapping the temporal features and the spatial features into a classifier to complete classification specifically comprises: filtering and pooling the temporal features and/or the spatial features before classification, and calculating the category with the maximum probability through the following formula (1):

p_i = exp(x_i) / Σ_j exp(x_j)    (1)

where x_i represents the i-th neuron input, x_j represents the j-th neuron input, and Σ_j exp(x_j) represents the sum over all neuron inputs.
Further, the activation function of the classifier is shown in the following formula (2):

f(x) = x, x > 0;  f(x) = a(exp(x) - 1), x ≤ 0    (2)

where x represents the output result after the convolution calculation and a is a constant.
Further, after mapping the temporal features and the spatial features into a classifier to complete classification, the method further comprises: minimizing the difference between the classification result and the corresponding true label according to the loss function shown in the following formula (3):

L = -(1/N) Σ_{i=1}^{N} Σ_{c=1}^{M} y_ic log(p_ic)    (3)

where N represents the number of samples, M represents the number of categories, y_ic equals 1 if the true label of sample i is category c and 0 otherwise, and p_ic represents the predicted probability that sample i belongs to category c.
Further, the method further comprises: optimizing the loss function with an optimizer, the learning rate of which is updated by the following formula (4):

new_lr = initial_lr × r^⌊epoch/step_size⌋    (4)

where new_lr represents the new learning rate, initial_lr the initial learning rate, r the learning-rate decay factor, epoch the number of iterations so far, and step_size the step size.
According to a second aspect of the invention, a multi-space convolution neural network model for EEG motor imagery classification is provided, comprising a feature extractor and a classifier. The feature extractor is configured to extract the temporal features of the EEG signal by temporal convolution while preserving the spatial characteristics of the EEG signal, and then to extract the spatial features of the EEG signal by spatial convolution. The classifier is configured to map the temporal features and the spatial features to complete classification.
Further, the feature extractor is further configured to: raise the channel dimension of the EEG signal with a first convolution kernel of size 1×1; enrich the spatial features of the EEG signal with a second convolution kernel of size 1×1; convolve the EEG signal along time with a third convolution kernel of size 1×11 to obtain the temporal features of the EEG signal; perform weighted spatial filtering on the EEG signal with a fourth convolution kernel of size 60×1 to obtain the spatial features of the EEG signal; and compress the temporal features and the spatial features through a first pooling layer to remove redundant information and reduce the number of parameters.
Further, the classifier is further configured to: filter and pool the temporal features and/or the spatial features before classification, and calculate the category with the maximum probability through the following formula (1):

p_i = exp(x_i) / Σ_j exp(x_j)    (1)

where x_i represents the i-th neuron input, x_j represents the j-th neuron input, and Σ_j exp(x_j) represents the sum over all neuron inputs.
Further, the activation function of the classifier is shown in the following formula (2):

f(x) = x, x > 0;  f(x) = a(exp(x) - 1), x ≤ 0    (2)

where x represents the output result after the convolution calculation and a is a constant.
Further, the model further comprises a machine learning module configured to: minimize the difference between the classification result and the corresponding true label according to the loss function shown in the following formula (3):

L = -(1/N) Σ_{i=1}^{N} Σ_{c=1}^{M} y_ic log(p_ic)    (3)

where N represents the number of samples, M represents the number of categories, y_ic equals 1 if the true label of sample i is category c and 0 otherwise, and p_ic represents the predicted probability that sample i belongs to category c.
Further, the machine learning module further comprises an optimizer, which optimizes the loss function and updates its learning rate according to the following formula (4):

new_lr = initial_lr × r^⌊epoch/step_size⌋    (4)

where new_lr represents the new learning rate, initial_lr the initial learning rate, r the learning-rate decay factor, epoch the number of iterations so far, and step_size the step size.
Advantageous effects:
1) The method overcomes the limitations of traditional machine learning algorithms in the feature extraction task and achieves higher model generalization capability.
2) The proposed multi-space convolution improves the feature extraction performance of the model on the EEG signal to some extent.
3) The proposed method achieves high classification accuracy on several data sets and is clearly superior to existing methods.
Drawings
Fig. 1 is a flowchart of an EEG motor imagery classification method based on a multi-space convolution neural network model according to an embodiment of the present invention.
Fig. 2 is a geometric diagram of an activation function of a classifier according to an embodiment of the present invention.
FIG. 3 is a graph of an IIa data set confusion matrix according to an embodiment of the invention.
Figure 4 is a confusion matrix for an IIb data set according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, descriptions such as "first", "second", etc. in the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention will now be further described with reference to the accompanying drawings.
The embodiment of the invention provides an EEG motor imagery classification method based on a multi-space convolution neural network model. As shown in fig. 1, the method starts with step S100 by extracting temporal features of the EEG signal by temporal convolution and preserving spatial features of the EEG signal, followed by extracting spatial features of the EEG signal by spatial convolution.
In some embodiments, each input signal is defined as X_i ∈ R^(C×T), where C denotes the number of channels of the EEG signal, T denotes the data length, and i denotes the iteration number. Step S100 is implemented by the feature extractor of the multi-space convolution neural network model. In the feature extractor, the channel dimension of the EEG signal is first raised by a first 1×1 convolution, from the original C×T to 60×T, so that useful information can be extracted from each channel. At the Shape Transformation layer, a second convolution kernel of size 1×1 is used to enrich the spatial features of the EEG signal. At the Temporal Conv layer, a third convolution kernel of size 1×11 convolves the EEG signal along time to obtain the temporal features. At the Spatial Conv layer, weighted spatial filtering is performed on the EEG signal with a 60×1 convolution kernel to obtain the spatial features. The last layer is a first pooling layer, which compresses the temporal and spatial features, removes redundant information, and reduces the number of parameters.
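By way of illustration only, the feature extractor pipeline above can be sketched with plain NumPy operations. Only the 1×1, 1×11 and 60×1 kernel sizes come from the description; the channel count (22), window length (500), pooling width (4), single spatial filter, and random weights below are assumptions made for the sketch, not values fixed by the invention.

```python
import numpy as np

rng = np.random.default_rng(0)

C, T = 22, 500                      # channels x samples per input (both assumed here)
X = rng.standard_normal((C, T))

# First 1x1 convolution: raise the channel dimension C -> 60. Across channels,
# a 1x1 convolution is a per-time-step linear map, i.e. a matrix multiply.
W1 = 0.1 * rng.standard_normal((60, C))
H = W1 @ X                          # (60, T)

# Second 1x1 convolution (Shape Transformation layer): mix the 60 channels again.
W2 = 0.1 * rng.standard_normal((60, 60))
H = W2 @ H                          # (60, T)

# Temporal Conv layer, kernel 1x11: slide along time on each channel row.
k = 0.1 * rng.standard_normal(11)
H_t = np.stack([np.convolve(row, k, mode="valid") for row in H])   # (60, T - 10)

# Spatial Conv layer, kernel 60x1: a weighted sum over all 60 channels per
# time step; one spatial filter produces one (T - 10)-long feature map.
w_s = 0.1 * rng.standard_normal(60)
H_s = w_s @ H_t                     # (T - 10,)

# First pooling layer: compress along time (average pooling, width 4 assumed).
pooled = H_s[: len(H_s) // 4 * 4].reshape(-1, 4).mean(axis=1)
print(H_t.shape, H_s.shape, pooled.shape)
```

A real implementation would learn the weights by backpropagation and use a bank of temporal and spatial filters rather than a single one; the sketch only traces how the tensor shapes evolve through the layers.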
Finally, in step S200, the temporal features and the spatial features are mapped to a classifier to complete classification.
In the classifier, a convolutional layer further filters the input data, followed by a second pooling layer; a fully connected (FC) layer then classifies the data, and the category with the maximum probability is calculated by formula (1):

p_i = exp(x_i) / Σ_j exp(x_j)    (1)

where x_i represents the i-th neuron input, x_j represents the j-th neuron input, and Σ_j exp(x_j) represents the sum over all neuron inputs.
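Formula (1) is the standard softmax. A minimal sketch follows; the example logits are made up, and the shift by the maximum is a numerical-stability detail, not part of the formula:

```python
import numpy as np

def softmax(x):
    # Formula (1): p_i = exp(x_i) / sum_j exp(x_j).
    # Subtracting the max before exponentiating avoids overflow and leaves
    # the result unchanged, since the shift cancels in the ratio.
    e = np.exp(x - np.max(x))
    return e / e.sum()

logits = np.array([1.2, 0.3, -0.8, 2.1])   # one neuron input per class (made up)
p = softmax(logits)
predicted_class = int(np.argmax(p))        # category with the maximum probability
```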
In some embodiments, the activation function of the classifier is as shown in formula (2) below:

f(x) = x, x > 0;  f(x) = a(exp(x) - 1), x ≤ 0    (2)
as shown in fig. 2, the activation function designed by the embodiment of the present invention fuses sigmoid and ReLU, and has soft saturation on the left side and no saturation on the right side. The left side enables the ELU to be more robust to input changes or noise, the right side linear part enables the ELU to relieve the problem of gradient disappearance, and the output mean value of the ELU is close to 0, so that the convergence speed is higher.
In some embodiments, cross-entropy loss is used to minimize the difference between the model predictions and the corresponding true labels; the loss function is shown in the following formula (3):

L = -(1/N) Σ_{i=1}^{N} Σ_{c=1}^{M} y_ic log(p_ic)    (3)

where N represents the number of samples, M represents the number of categories, y_ic equals 1 if the true label of sample i is category c and 0 otherwise, and p_ic represents the predicted probability that sample i belongs to category c.
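Formula (3) can be checked with a small numerical sketch; the two samples and their predicted probabilities below are made up:

```python
import numpy as np

def cross_entropy(y_true, p_pred, eps=1e-12):
    # Formula (3): L = -(1/N) * sum_i sum_c y_ic * log(p_ic),
    # with y_true one-hot of shape (N, M) and p_pred probabilities (N, M).
    # eps guards against log(0) for zero-probability entries.
    return float(-np.mean(np.sum(y_true * np.log(p_pred + eps), axis=1)))

y = np.array([[1.0, 0.0, 0.0, 0.0],        # sample 1 truly belongs to class 0
              [0.0, 0.0, 1.0, 0.0]])       # sample 2 truly belongs to class 2
p = np.array([[0.7, 0.1, 0.1, 0.1],        # confident and correct
              [0.25, 0.25, 0.25, 0.25]])   # uniform guess
loss = cross_entropy(y, p)                 # -(log 0.7 + log 0.25) / 2
```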
In some embodiments, the loss function is optimized with an optimizer whose learning rate is updated by the following formula (4):

new_lr = initial_lr × r^⌊epoch/step_size⌋    (4)

where new_lr represents the new learning rate, initial_lr the initial learning rate, r the learning-rate decay factor, epoch the number of iterations so far, and step_size the step size.
For example only, the initial learning rate of the optimizer is set to 0.02 and is multiplied by 0.5 every 50 epochs. The optimizer may be the Adam optimizer. In general, Adam performs well in practice: it is computationally efficient and requires little memory. It combines the advantages of the AdaGrad and RMSProp optimization algorithms, taking both the mean and the variance of the gradient into account to compute a new step size.
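With the example settings above (initial rate 0.02, decay factor 0.5, step size 50), formula (4) can be sketched as a stepwise schedule; the floor on epoch/step_size is an assumption inferred from the stated behaviour of halving once every 50 epochs:

```python
def step_decay(initial_lr, r, epoch, step_size):
    # Formula (4): new_lr = initial_lr * r ** (epoch // step_size),
    # i.e. the learning rate is multiplied by r once every step_size epochs
    # and held constant in between.
    return initial_lr * r ** (epoch // step_size)

rates = [step_decay(0.02, 0.5, e, 50) for e in (0, 49, 50, 99, 120)]
```

This is the same behaviour as a conventional step-decay scheduler: constant within each 50-epoch plateau, halved at each boundary.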
The embodiment of the invention also provides a multi-space convolution neural network model for EEG motor imagery classification, comprising a feature extractor and a classifier. The feature extractor is configured to extract the temporal features of the EEG signal by temporal convolution while preserving the spatial characteristics of the EEG signal, and then to extract the spatial features of the EEG signal by spatial convolution. The classifier is configured to map the temporal features and the spatial features to complete classification.
Further, the feature extractor is further configured to: raise the channel dimension of the EEG signal with a first convolution kernel of size 1×1; enrich the spatial features of the EEG signal with a second convolution kernel of size 1×1; convolve the EEG signal along time with a third convolution kernel of size 1×11 to obtain the temporal features of the EEG signal; perform weighted spatial filtering on the EEG signal with a fourth convolution kernel of size 60×1 to obtain the spatial features of the EEG signal; and compress the temporal features and the spatial features through a first pooling layer to remove redundant information and reduce the number of parameters.
Further, the classifier is further configured to: filter and pool the temporal features and/or the spatial features before classification, and calculate the category with the maximum probability through the following formula (1):

p_i = exp(x_i) / Σ_j exp(x_j)    (1)

where x_i represents the i-th neuron input, x_j represents the j-th neuron input, and Σ_j exp(x_j) represents the sum over all neuron inputs.
Further, the activation function of the classifier is shown in the following formula (2):

f(x) = x, x > 0;  f(x) = a(exp(x) - 1), x ≤ 0    (2)

where x represents the output result after the convolution calculation and a is a constant.
Further, the model further comprises a machine learning module configured to: minimize the difference between the classification result and the corresponding true label according to the loss function shown in the following formula (3):

L = -(1/N) Σ_{i=1}^{N} Σ_{c=1}^{M} y_ic log(p_ic)    (3)

where N represents the number of samples, M represents the number of categories, y_ic equals 1 if the true label of sample i is category c and 0 otherwise, and p_ic represents the predicted probability that sample i belongs to category c.
Further, the machine learning module further comprises an optimizer, which optimizes the loss function and updates its learning rate according to the following formula (4):

new_lr = initial_lr × r^⌊epoch/step_size⌋    (4)

where new_lr represents the new learning rate, initial_lr the initial learning rate, r the learning-rate decay factor, epoch the number of iterations so far, and step_size the step size.
The multi-space convolution neural network model for EEG motor imagery classification provided by the embodiment of the present invention achieves the same technical effects as the method provided by the embodiment of the present invention, and is not described again here.
Experiments are presented below to further illustrate the feasibility and advancement of the invention based on the methods and models provided by its embodiments.
Extensive experiments were performed on the IIa and IIb data sets of BCI Competition IV.
BCI Competition IV data set IIa: this data set contains 22-electrode EEG signals recorded from 9 healthy subjects in two sessions. Each subject performed four motor imagery tasks: imagined movement of the left hand, right hand, both feet, and tongue. Each session consisted of 6 runs separated by short breaks, and each run contained 48 trials (12 for each of the four task categories), giving 288 trials per session. The experiments here consider the data between [2, 6] seconds of each trial. All trial data are pooled and segmented with a sliding window of 500 samples and a step of 20. 5000 of the resulting segments are used as the test set and the remainder as the training set.
BCI Competition IV data set IIb: this public data set (Data sets 2b) follows the same experimental paradigm as IIa and is an EEG data set of visually cued left- and right-hand motor imagery. It contains the EEG signals of 9 right-handed subjects with normal or corrected-to-normal vision [15]. All trial data are pooled and segmented with a sliding window of 500 samples and a step of 20. 5000 of the resulting segments are used as the test set and the remainder as the training set.
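The segmentation used for both data sets (a sliding window of 500 samples advanced in steps of 20) can be sketched as follows; the 22-channel, 1000-sample demo trial is an assumption for illustration, not a quantity fixed by the text:

```python
import numpy as np

def sliding_windows(signal, window=500, step=20):
    # Cut a (channels, samples) recording into overlapping segments of
    # `window` samples, advancing `step` samples between segment starts.
    n = signal.shape[-1]
    starts = range(0, n - window + 1, step)
    return np.stack([signal[..., s:s + window] for s in starts])

trial = np.arange(22 * 1000, dtype=float).reshape(22, 1000)  # demo trial (assumed size)
segments = sliding_windows(trial)                            # (num_windows, 22, 500)
```

Each trial of length n yields floor((n - window) / step) + 1 segments, so the 1000-sample demo trial produces 26 heavily overlapping windows; this overlap is what multiplies the amount of training data.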
The method provided by the embodiment of the invention was compared experimentally with traditional machine learning classification methods, namely SVM, KNN and LDA. The experimental results show that the classification performance is significantly improved over the traditional machine learning methods, indicating that the proposed multi-space convolution neural network can effectively extract the features of EEG signals and classify them. The confusion matrices for the IIa and IIb data sets are shown in fig. 3 and fig. 4, and the comparison with the machine learning methods is shown in Table 1 and Table 2.
TABLE 1 Classification results of machine learning methods versus the proposed method on the IIa data set
(table provided as an image in the original publication)
TABLE 2 Classification results of machine learning methods versus the proposed method on the IIb data set
(table provided as an image in the original publication)
As can be seen from tables 1 and 2, the method proposed by the embodiment of the present invention is significantly superior to the general machine learning method.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention and shall be covered by the claims of the present invention.

Claims (10)

1. An EEG motor imagery classification method based on a multi-space convolution neural network model is characterized by comprising the following steps:
extracting the time characteristics of the EEG signal through time convolution, preserving the spatial characteristics of the EEG signal, and then extracting the spatial characteristics of the EEG signal through space convolution;
and mapping the temporal features and the spatial features into a classifier to complete classification.
2. The method of claim 1, wherein extracting feature information of the EEG signal in the time dimension by temporal convolution while preserving its spatial features, and then extracting feature information of the EEG signal in the spatial dimension by spatial convolution, specifically comprises:
raising the channel dimension of the EEG signal with a first convolution kernel of size 1×1;
enriching the spatial features of the EEG signal with a second convolution kernel of size 1×1;
convolving the EEG signal along time with a third convolution kernel of size 1×11 to obtain the temporal features of the EEG signal;
performing weighted spatial filtering on the EEG signal with a fourth convolution kernel of size 60×1 to obtain the spatial features of the EEG signal;
and compressing the temporal features and the spatial features through a first pooling layer to remove redundant information and reduce the number of parameters.
3. The method of claim 1, wherein mapping the temporal features and the spatial features into a classifier to complete classification specifically comprises:
filtering and pooling the temporal features and/or the spatial features before classification, and calculating the category with the maximum probability through the following formula (1):

p_i = exp(x_i) / Σ_j exp(x_j)    (1)

where x_i represents the i-th neuron input, x_j represents the j-th neuron input, and Σ_j exp(x_j) represents the sum over all neuron inputs.
4. The method according to any one of claims 1 to 3, wherein the activation function of the classifier is as shown in formula (2):

f(x) = x, x > 0;  f(x) = a(exp(x) - 1), x ≤ 0    (2)

where x represents the output result after the convolution calculation and a is a constant.
5. The method of claim 4, wherein, after mapping the temporal features and the spatial features into a classifier to complete classification, the method further comprises:
minimizing the difference between the classification result and the corresponding true label according to the loss function shown in the following formula (3):

L = -(1/N) Σ_{i=1}^{N} Σ_{c=1}^{M} y_ic log(p_ic)    (3)

where N represents the number of samples, M represents the number of categories, y_ic equals 1 if the true label of sample i is category c and 0 otherwise, and p_ic represents the predicted probability that sample i belongs to category c.
6. The method of claim 5, wherein the method further comprises:
optimizing the loss function with an optimizer, updating the learning rate of the optimizer by the following formula (4):

new_lr = initial_lr × r^⌊epoch/step_size⌋    (4)

where new_lr represents the new learning rate, initial_lr the initial learning rate, r the learning-rate decay factor, epoch the number of iterations so far, and step_size the step size.
7. A multi-space convolution neural network model for EEG motor imagery classification is characterized by comprising a feature extractor and a classifier;
the feature extractor is configured to extract temporal features of the EEG signal by temporal convolution and preserve spatial features of the EEG signal, followed by extraction of spatial features of the EEG signal by spatial convolution;
the classifier is configured to map the temporal features and the spatial features to complete classification.
8. The multi-space convolutional neural network model of claim 7, wherein the feature extractor is further configured to:
increasing the channel dimension of the EEG signal through a first convolution kernel of size 1 × 1;
enriching the spatial features of the EEG signal through a second convolution kernel of size 1 × 1;
convolving the EEG signal along the time dimension through a third convolution kernel of size 1 × 11 to obtain the temporal features of the EEG signal;
performing weighted spatial filtering on the EEG signal through a fourth convolution kernel of size 60 × 1 to obtain the spatial features of the EEG signal;
and compressing the temporal features and the spatial features through a first pooling layer to remove redundant information and reduce the number of parameters.
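The shapes flowing through the claimed kernels can be traced in plain Python (an illustrative sketch, not part of the patent text; the kernel sizes 1 × 1, 1 × 11 and 60 × 1 come from the claim, while the input size of 60 electrodes × 1000 time samples, the 'valid' convolution and the 1 × 3 pooling are assumptions):

```python
def conv_out(length, kernel, stride=1, padding=0):
    """Output length along one dimension of a 'valid' convolution."""
    return (length + 2 * padding - kernel) // stride + 1

# Assumed input: 60 electrodes x 1000 time samples per trial.
h, w = 60, 1000

# First and second 1 x 1 convolutions: only the channel count
# changes; the electrode x time layout is preserved.
h, w = conv_out(h, 1), conv_out(w, 1)
# Third convolution, 1 x 11: slides along the time axis only,
# extracting temporal features.
h, w = conv_out(h, 1), conv_out(w, 11)
# Fourth convolution, 60 x 1: spans all 60 electrodes at once,
# i.e. a learned weighted spatial filter collapsing that axis.
h, w = conv_out(h, 60), conv_out(w, 1)
# Assumed pooling of size 1 x 3: compresses the time axis.
h, w = h, w // 3
```

Under these assumptions each trial reduces from 60 × 1000 to a 1 × 330 feature map per output channel, which is what the pooling step hands to the classifier.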
9. The multi-space convolutional neural network model of claim 7, wherein the classifier is further configured to:
classifying the temporal features and/or the spatial features after filtering and pooling, and calculating the class with the maximum probability through the following formula (1):
Softmax(x_i) = exp(x_i) / Σ_j exp(x_j)    (1)
wherein x_i represents the input of the i-th neuron, x_j represents the input of the j-th neuron, and Σ_j exp(x_j) sums over all neuron inputs.
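Formula (1) can be sketched in plain Python (an illustrative sketch, not part of the patent text; the four example logits are hypothetical):

```python
import math

def softmax(xs):
    """Formula (1): Softmax(x_i) = exp(x_i) / sum_j exp(x_j).
    Subtracting max(xs) first is a standard numerical-stability
    trick and does not change the result."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical logits for four motor-imagery classes
probs = softmax([2.0, 1.0, 0.1, -1.0])
best = probs.index(max(probs))  # index of the maximum-probability class
```

Because exp is monotonic, the class with the largest input x_i always receives the largest probability, so argmax over the softmax output equals argmax over the raw logits.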
10. The multi-space convolutional neural network model of claim 7, wherein the model further comprises a machine learning module configured to:
the difference between the classification result and the corresponding true label is minimized according to a loss function as shown in the following equation (3):
Loss = -(1/N) Σ_{i=1}^{N} Σ_{c=1}^{M} y_ic · log(p_ic)    (3)
wherein N represents the number of samples, M represents the number of classes, y_ic equals 1 if the predicted class of sample i is the same as the true label and 0 otherwise, and p_ic represents the predicted probability that sample i belongs to class c.
CN202210353223.0A 2022-04-06 2022-04-06 EEG motor imagery classification method and multi-space convolution neural network model Active CN114781441B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210353223.0A CN114781441B (en) 2022-04-06 2022-04-06 EEG motor imagery classification method and multi-space convolution neural network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210353223.0A CN114781441B (en) 2022-04-06 2022-04-06 EEG motor imagery classification method and multi-space convolution neural network model

Publications (2)

Publication Number Publication Date
CN114781441A true CN114781441A (en) 2022-07-22
CN114781441B CN114781441B (en) 2024-01-26

Family

ID=82427519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210353223.0A Active CN114781441B (en) 2022-04-06 2022-04-06 EEG motor imagery classification method and multi-space convolution neural network model

Country Status (1)

Country Link
CN (1) CN114781441B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110069958A * 2018-01-22 2019-07-30 北京航空航天大学 Rapid EEG signal recognition method using a dense deep convolutional neural network
CN109993103A * 2019-03-29 2019-07-09 华南理工大学 Human action recognition method based on point cloud data
CN110213788A * 2019-06-15 2019-09-06 福州大学 WSN anomaly detection and type identification method based on spatio-temporal characteristics of data streams
CN110309797A * 2019-07-05 2019-10-08 齐鲁工业大学 Motor imagery recognition method and system fusing a CNN-BiLSTM model and probabilistic collaboration
US20220054071A1 * 2019-09-06 2022-02-24 Tencent Technology (Shenzhen) Company Limited Motor imagery electroencephalogram signal processing method, device, and storage medium
CN110765920A * 2019-10-18 2020-02-07 西安电子科技大学 Motor imagery classification method based on convolutional neural network
US20220012489A1 * 2020-07-10 2022-01-13 Korea University Research And Business Foundation Apparatus and method for motor imagery classification using eeg
CN113011239A * 2020-12-02 2021-06-22 杭州电子科技大学 Motor imagery classification method based on optimal narrow-band feature fusion
CN113642400A * 2021-07-12 2021-11-12 东北大学 Graph convolution action recognition method, device and equipment based on 2S-AGCN
CN114062511A * 2021-10-24 2022-02-18 北京化工大学 Single-sensor-based intelligent acoustic emission identification method for early damage of aircraft engine

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHANG D et al.: "EEG-based intention recognition from spatio-temporal representations via cascade and parallel convolutional recurrent neural networks", pages 1-8 *
WU Jia et al.: "Application of convolutional neural networks considering regional information to image semantic segmentation", Science Technology and Engineering, no. 21, pages 281-286 *
YANG Jun et al.: "Multi-channel motor imagery EEG decoding method based on deep spatio-temporal feature fusion", vol. 43, no. 1, pages 196-203 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115337026A (en) * 2022-10-19 2022-11-15 之江实验室 Method and device for searching EEG signal features based on convolutional neural network
CN115337026B (en) * 2022-10-19 2023-03-10 之江实验室 Convolutional neural network-based EEG signal feature retrieval method and device
CN117434452A (en) * 2023-12-08 2024-01-23 珠海市嘉德电能科技有限公司 Lithium battery charge and discharge detection method, device, equipment and storage medium
CN117434452B (en) * 2023-12-08 2024-03-05 珠海市嘉德电能科技有限公司 Lithium battery charge and discharge detection method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN114781441B (en) 2024-01-26

Similar Documents

Publication Publication Date Title
Altaheri et al. Physics-informed attention temporal convolutional network for EEG-based motor imagery classification
CN109472194B (en) Motor imagery electroencephalogram signal feature identification method based on CBLSTM algorithm model
Sakhavi et al. Learning temporal information for brain-computer interface using convolutional neural networks
CN111652066B (en) Medical behavior identification method based on multi-self-attention mechanism deep learning
CN111709267B (en) Electroencephalogram signal emotion recognition method of deep convolutional neural network
EP4212100A1 (en) Electroencephalogram signal classification method and apparatus, and device, storage medium and program product
CN114781441A (en) EEG motor imagery classification method and multi-space convolution neural network model
CN113011239B (en) Motor imagery classification method based on optimal narrow-band feature fusion
CN111797674B (en) MI electroencephalogram signal identification method based on feature fusion and particle swarm optimization algorithm
CN110135244B (en) Expression recognition method based on brain-computer collaborative intelligence
CN113180692A (en) Electroencephalogram signal classification and identification method based on feature fusion and attention mechanism
Baysal et al. Multi-objective symbiotic organism search algorithm for optimal feature selection in brain computer interfaces
Zhang et al. Classification of canker on small datasets using improved deep convolutional generative adversarial networks
CN114595725B (en) Electroencephalogram signal classification method based on addition network and supervised contrast learning
CN113133769A (en) Equipment control method, device and terminal based on motor imagery electroencephalogram signals
CN115238796A (en) Motor imagery electroencephalogram signal classification method based on parallel DAMSCN-LSTM
Bardak et al. EEG based emotion prediction with neural network models
CN112926502B (en) Micro expression identification method and system based on coring double-group sparse learning
CN114209342A (en) Electroencephalogram signal motor imagery classification method based on space-time characteristics
CN114587384A (en) Motor imagery electroencephalogram signal feature extraction method combining low-rank representation and manifold learning
Tang et al. A channel selection method for event related potential detection based on random forest and genetic algorithm
Babu et al. Face Recognition System Using Deep Belief Network and Particle Swarm Optimization.
Kalimuthu et al. Multi-class facial emotion recognition using hybrid dense squeeze network
Saikia et al. Application of deep learning for eeg
Taha et al. EEG Emotion Recognition Via Ensemble Learning Representations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant