CN116058852B - Classification system, method, electronic device and storage medium for MI-EEG signals - Google Patents

Classification system, method, electronic device and storage medium for MI-EEG signals

Info

Publication number
CN116058852B
CN116058852B CN202310218867.3A
Authority
CN
China
Prior art keywords
branch
layer
convolution
neural network
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310218867.3A
Other languages
Chinese (zh)
Other versions
CN116058852A (en)
Inventor
刘伟奇
马学升
陈金钢
彭思源
王肖玮
陈韵如
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongxin Zhiyi Technology Beijing Co ltd
Original Assignee
Tongxin Zhiyi Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongxin Zhiyi Technology Beijing Co ltd filed Critical Tongxin Zhiyi Technology Beijing Co ltd
Priority to CN202310218867.3A priority Critical patent/CN116058852B/en
Publication of CN116058852A publication Critical patent/CN116058852A/en
Application granted granted Critical
Publication of CN116058852B publication Critical patent/CN116058852B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/24: Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316: Modalities, i.e. specific diagnostic methods
    • A61B5/369: Electroencephalography [EEG]
    • A61B5/377: Electroencephalography [EEG] using evoked responses
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235: Details of waveform analysis
    • A61B5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Mathematical Physics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Theoretical Computer Science (AREA)
  • Psychiatry (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Fuzzy Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Psychology (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a classification system, a classification method, an electronic device and a storage medium for MI-EEG signals, wherein the classification method of MI-EEG signals comprises the following steps: preprocessing an MI-EEG signal data set to obtain a preprocessed data set, and dividing a training set and a testing set based on the data set; constructing a multi-branch convolutional neural network model, wherein the first branch, the second branch and the third branch each comprise an EEGNet convolution block and a convolutional attention module; obtaining a target multi-branch convolutional neural network model through the training set and the testing set; and inputting the MI-EEG signals to be classified into the target multi-branch convolutional neural network model to obtain a classification result. The MI-EEG signal classification method solves the problem that existing traditional models cannot complete subject-specific classification tasks while keeping fixed hyperparameters.

Description

Classification system, method, electronic device and storage medium for MI-EEG signals
Technical Field
The invention relates to the technical field of MI-EEG signal classification, and in particular to a system, a method, an electronic device and a storage medium for classifying MI-EEG signals.
Background
Motor imagery (MI) is an imagined limb movement produced when the brain imagines moving a specific limb; it can be captured and recognized by electroencephalography (EEG). Initially, common spatial patterns (CSP) was the best technique for identifying EEG-MI signals.
Decoding EEG-MI signals has potential applications in fields such as sports medicine. However, the low signal-to-noise ratio, the presence of motion artifacts and noise, and the spatial correlation of the signals make classification very challenging. In addition, EEG-MI signals are highly subject-specific: they differ greatly between subjects and may also differ greatly across test sessions of the same subject. Such subject-specific tasks consume a great deal of time and computational cost for traditional models.
Disclosure of Invention
The embodiment of the invention aims to provide a classification system, a classification method, an electronic device and a storage medium for MI-EEG signals, which are used for solving the problem that conventional models cannot complete subject-specific classification tasks while keeping fixed hyperparameters.
To achieve the above object, an embodiment of the present invention provides a method for classifying MI-EEG signals, the method specifically comprising:
acquiring an MI-EEG signal data set, preprocessing the MI-EEG signal data set to obtain a preprocessed data set, and dividing a training set and a testing set based on the data set;
constructing a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, the first branch, the second branch and the third branch respectively comprise an EEGNet convolutional block and a convolutional attention module, and the first branch, the second branch and the third branch are connected through a softmax layer;
training the multi-branch convolutional neural network model through the training set;
performing performance evaluation on the trained multi-branch convolutional neural network through the test set to obtain a target multi-branch convolutional neural network model;
and inputting the MI-EEG signals to be classified into the target multi-branch convolutional neural network model to obtain a classification result.
Based on the technical scheme, the invention can also be improved as follows:
further, the EEGNet convolution block includes a first convolution layer, a second convolution layer, and a third convolution layer connected in sequence, and window sizes of the first convolution layer, the second convolution layer, and the third convolution layer are different.
Further, the convolution attention module comprises a channel attention sub-module and a spatial attention sub-module;
the channel attention submodule comprises an average pool layer, a maximum pool layer and a shared network, the shared network comprises a multi-layer perceptron, the multi-layer perceptron is provided with a hidden layer, the average pool layer and the maximum pool layer receive input characteristics and then generate a characteristic map, and the shared network receives the characteristic map and then outputs channel refined characteristics;
the spatial attention submodule comprises an average pool layer, a maximum pool layer and a convolution layer, and the channel refined features output refined feature mapping through the average pool layer, the maximum pool layer and the convolution layer.
Further, the constructing a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model includes a first branch, a second branch, and a third branch, the first branch, the second branch, and the third branch include an EEGNet convolutional block and a convolutional attention module, respectively, the first branch, the second branch, and the third branch are connected through a softmax layer, including:
different numbers of parameters are set by EEGNet convolution blocks and convolution attention modules in the first, second and third branches, respectively, to capture different features.
A classification system for MI-EEG signals comprising:
an acquisition module for acquiring an MI-EEG signal dataset;
the preprocessing module is used for preprocessing the MI-EEG signal data set to obtain a preprocessed data set, and dividing a training set and a testing set based on the data set;
the construction module is used for constructing a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, the first branch, the second branch and the third branch respectively comprise an EEGNet convolutional block and a convolutional attention module, and the first branch, the second branch and the third branch are connected through a softmax layer;
the training module is used for training the multi-branch convolutional neural network model through the training set;
the test module is used for performing performance evaluation on the trained multi-branch convolutional neural network through the test set to obtain a target multi-branch convolutional neural network model;
the multi-branch convolutional neural network model obtains a classification result based on the MI-EEG signals to be classified.
Further, the EEGNet convolution block includes a first convolution layer, a second convolution layer, and a third convolution layer connected in sequence, and window sizes of the first convolution layer, the second convolution layer, and the third convolution layer are different.
Further, the convolution attention module comprises a channel attention sub-module and a spatial attention sub-module;
the channel attention submodule comprises an average pool layer, a maximum pool layer and a shared network, the shared network comprises a multi-layer perceptron, the multi-layer perceptron is provided with a hidden layer, the average pool layer and the maximum pool layer receive input characteristics and then generate a characteristic map, and the shared network receives the characteristic map and then outputs channel refined characteristics;
the spatial attention submodule comprises an average pool layer, a maximum pool layer and a convolution layer, and the channel refined features output refined feature mapping through the average pool layer, the maximum pool layer and the convolution layer.
Further, the classification system of MI-EEG signals further comprises a setting module for setting the EEGNet convolution blocks and the convolution attention modules in the first, second and third branches, respectively, to different numbers of parameters for capturing different features.
An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method when the computer program is executed.
A non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method.
The embodiment of the invention has the following advantages:
the method for classifying MI-EEG signals comprises the steps of obtaining an MI-EEG signal data set, preprocessing the MI-EEG signal data set to obtain a preprocessed data set, and dividing a training set and a testing set based on the data set; constructing a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, the first branch, the second branch and the third branch respectively comprise an EEGNet convolutional block and a convolutional attention module, and the first branch, the second branch and the third branch are connected through a softmax layer; training the multi-branch convolutional neural network model through the training set; performing performance evaluation on the trained multi-branch convolutional neural network through the test set to obtain a target multi-branch convolutional neural network model; inputting the MI-EEG signals to be classified into the target multi-branch convolutional neural network model to obtain a classification result; the problem that the existing traditional model cannot finish specific classification tasks while having fixed super parameters is solved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It will be apparent to those of ordinary skill in the art that the drawings in the following description are exemplary only and that other implementations can be obtained from the extensions of the drawings provided without inventive effort.
The structures, proportions, sizes, etc. shown in the present specification are shown only for the purposes of illustration and description, and are not intended to limit the scope of the invention, which is defined by the claims, so that any structural modifications, changes in proportions, or adjustments of sizes, which do not affect the efficacy or the achievement of the present invention, should fall within the ambit of the technical disclosure.
FIG. 1 is a flow chart of a method of classifying MI-EEG signals of the present invention;
FIG. 2 is a first architecture diagram of the classification system of MI-EEG signals of the present invention;
FIG. 3 is a second architecture diagram of the classification system of MI-EEG signals of the present invention;
FIG. 4 is a block diagram of an EEGNet convolution block of the present invention;
FIG. 5 is a schematic diagram of a convolution attention module of the present invention;
FIG. 6 is a block diagram of a channel attention sub-module according to the present invention;
FIG. 7 is a block diagram of a spatial attention sub-module according to the present invention;
FIG. 8 is a schematic diagram of a multi-branch convolutional neural network model of the present invention;
fig. 9 is a schematic diagram of an entity structure of an electronic device according to the present invention.
Wherein the reference numerals are as follows:
the system comprises an acquisition module 10, a preprocessing module 20, a construction module 30, a training module 40, a testing module 50, a setting module 60, an electronic device 70, a processor 701, a memory 702 and a bus 703.
Detailed Description
Other aspects and advantages of the present invention will become apparent to those skilled in the art from the following detailed description, which illustrates certain specific embodiments of the invention, but not all embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without making any inventive effort are intended to be within the scope of the invention.
Examples
Fig. 1 is a flowchart of an embodiment of a method for classifying MI-EEG signals according to the invention, as shown in fig. 1, the method for classifying MI-EEG signals according to the embodiment of the invention comprises the following steps:
s101, acquiring an MI-EEG signal data set, preprocessing the MI-EEG signal data set to obtain a preprocessed data set, and dividing a training set and a testing set based on the data set;
specifically, the training set collected 4.5s data at a sampling frequency of 250 Hz (250×4.5=1125 samples) using 22 EEG electrodes. Each test resulted in a data matrix of dimension (22 x 1125). MI is divided into four classes: left hand, right hand, foot, and tongue.
The validation set was collected using 128 channels at a sampling frequency of 500 Hz. It was downsampled from 500 Hz to 250 Hz to improve data quality, and the signal channels were reduced from 128 to 44, excluding electrodes not located over the MI area. For consistency with the training set, each trial lasted 4.5 s (prompted 0.5 s before the end of the trial), producing 1125 samples per trial and a data matrix of (44 × 1125).
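As a concrete illustration of the preprocessing described above, the following Python sketch downsamples a 500 Hz recording to 250 Hz, keeps 44 channels, and cuts a 4.5 s trial of 1125 samples. It is a minimal sketch: the channel indices (`keep`), the trial onset, and the use of `scipy.signal.resample_poly` are assumptions for illustration, not details taken from the patent.

```python
import numpy as np
from scipy.signal import resample_poly

FS_RAW, FS_TARGET = 500, 250                # Hz
TRIAL_SECONDS = 4.5
N_SAMPLES = int(FS_TARGET * TRIAL_SECONDS)  # 1125 samples per trial

def preprocess_trial(raw, keep_channels, onset_s):
    """raw: (n_channels, n_samples) array recorded at FS_RAW."""
    x = raw[keep_channels]                           # drop electrodes outside the MI area
    x = resample_poly(x, FS_TARGET, FS_RAW, axis=1)  # downsample 500 Hz -> 250 Hz
    start = int(onset_s * FS_TARGET)
    return x[:, start:start + N_SAMPLES]             # (44, 1125)

# Example with synthetic data in place of a real recording:
raw = np.random.randn(128, 10 * FS_RAW)
keep = np.arange(44)                                 # placeholder channel selection
trial = preprocess_trial(raw, keep, onset_s=1.0)
print(trial.shape)                                   # (44, 1125)
```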
S102, constructing a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, the first branch, the second branch and the third branch respectively comprise an EEGNet convolutional block and a convolutional attention module, and the first branch, the second branch and the third branch are connected through a softmax layer;
specifically, the EEGNet convolution block includes a first convolution layer, a second convolution layer, and a third convolution layer that are sequentially connected, where window sizes of the first convolution layer, the second convolution layer, and the third convolution layer are different, and the window sizes are defined by a kernel size. The first convolution layer uses a 2D filter and then performs batch normalization that helps to speed up training and regularization of the model. The second convolution layer uses depth convolution and performs batch normalization and activation functions in the form of exponential linear units (exponential linear unit, ELU), average pooling, and hidden layers. The third convolution layer uses separable convolutions. A simplified architecture of EEGNet is shown in fig. 4.
A convolution attention module (Convolutional Block Attention Module, CBAM) is a module that can be added to the model to focus on specific properties while ignoring others, emphasizing important features along the channel and spatial axes. Each branch can learn features on the channel and spatial axes by using a sequence of attention modules (as shown in fig. 5). Because the convolution attention module learns which information to highlight or suppress, it can effectively guide the flow of information through the network.
The convolution attention module has two sub-modules: the channel attention sub-module and the spatial attention sub-module.
As shown in fig. 6, the channel attention submodule includes an average pool layer, a maximum pool layer and a shared network; the shared network includes a multi-layer perceptron with one hidden layer; the average pool layer and the maximum pool layer receive the input features and generate feature maps, and the shared network receives the feature maps and outputs the channel-refined features;
as shown in fig. 7, the spatial attention submodule includes an average pool layer, a maximum pool layer and a convolution layer, and the channel refined feature outputs a refined feature map via the average pool layer, the maximum pool layer and the convolution layer.
In the channel attention sub-module, the input features from the previous convolution block are passed to both the average pool layer and the maximum pool layer. The feature maps generated by the two pool layers are then passed to a shared network consisting of a multi-layer perceptron (MLP) with one hidden layer. In this hidden layer, a reduction ratio is used to reduce the number of activation maps and thereby the number of parameters. After the shared network has been applied to each pooled feature map, the output feature maps are combined using element-wise summation. Then, to generate the input features for the spatial attention sub-module, element-wise multiplication is applied between the channel attention sub-module's output feature map and the attention module's input features. Channel-axis average pooling and maximum pooling are used when computing the spatial attention feature map, and a convolutional layer is then used to construct an efficient representation.
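The channel and spatial attention sub-modules described above can be sketched as a single Keras layer, assuming the commonly used CBAM formulation (sigmoid gating, a reduction ratio in the shared MLP, and a 7x7 spatial convolution); these defaults are assumptions, not values stated in the patent.

```python
import tensorflow as tf
from tensorflow.keras import layers

class CBAM(tf.keras.layers.Layer):
    """Convolutional block attention: channel attention followed by spatial attention."""
    def __init__(self, channels, ratio=8, spatial_kernel=7, **kwargs):
        super().__init__(**kwargs)
        # Shared MLP with one hidden layer; the reduction ratio shrinks the hidden width
        self.mlp = tf.keras.Sequential([
            layers.Dense(max(channels // ratio, 1), activation='relu'),
            layers.Dense(channels)])
        self.spatial_conv = layers.Conv2D(1, spatial_kernel, padding='same',
                                          activation='sigmoid')

    def call(self, x):
        # Channel attention: shared MLP over average- and max-pooled descriptors
        avg = self.mlp(tf.reduce_mean(x, axis=[1, 2]))
        mx = self.mlp(tf.reduce_max(x, axis=[1, 2]))
        ca = tf.sigmoid(avg + mx)[:, tf.newaxis, tf.newaxis, :]
        x = x * ca                                        # channel-refined features
        # Spatial attention: convolution over channel-wise average and max maps
        avg_sp = tf.reduce_mean(x, axis=-1, keepdims=True)
        max_sp = tf.reduce_max(x, axis=-1, keepdims=True)
        sa = self.spatial_conv(tf.concat([avg_sp, max_sp], axis=-1))
        return x * sa                                     # refined feature map
```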
Convolutional neural networks (convolutional neural network, CNN) can handle the computational burden of high-dimensional data (e.g., EEG signals) by performing convolution operations in a neural network setting. The convolution window is the small portion of input neurons to which each neuron in the CNN's first hidden layer is connected. All neurons have a bias and each connection has a weight. The window is slid across the input sequence, and each neuron in the hidden layer learns to analyze its particular segment. The kernel size is the size, or length, of the convolution window. Instead of learning new weights and biases for every hidden-layer neuron, the CNN learns a single set of weights and applies a single bias to all hidden-layer neurons, i.e. weight sharing, calculated as follows:
$$a_{ij} = f\left(b_i + W_i^{T} X_j\right)$$
where $a_{ij}$ is the activation (output) of the $j$-th neuron of the $i$-th filter, $f$ corresponds to the activation function, $b_i$ is the shared overall bias of the filter, $K$ is the kernel size, $W_i = [w_{i1}, w_{i2}, \ldots, w_{iK}]$ is the vector of shared weights, $X_j = [x_j, x_{j+1}, \ldots, x_{j+K-1}]$ is the vector of outputs from the preceding neurons, and $T$ denotes the transpose operation.
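As a small numerical illustration of the weight-sharing formula above, the NumPy snippet below slides one filter's shared weight vector and bias over an input sequence; the numbers are made up for illustration only.

```python
import numpy as np

def conv1d_shared(x, w, b, f=np.tanh):
    """x: input sequence, w: shared weight vector (length K), b: shared bias."""
    K = len(w)
    # Every hidden neuron j applies the same (w, b) to its own window X_j
    return np.array([f(b + w @ x[j:j + K]) for j in range(len(x) - K + 1)])

x = np.array([0.1, 0.4, -0.2, 0.3, 0.5])
a = conv1d_shared(x, w=np.array([0.5, -0.25, 0.1]), b=0.05)
print(a)   # activations a_ij for one filter, all produced by a single weight vector
```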
In motor imagery, the optimal kernel size differs from subject to subject, and for the same subject it can also differ at different times. To address subject specificity in EEG-MI classification with a CNN, the multi-branch convolutional neural network model uses a different kernel size in each branch, so that an appropriate convolution scale, i.e. kernel size, can be found for all subjects. Using different kernel sizes helps the multi-branch convolutional neural network model complete subject-specific tasks and makes it more versatile.
The multi-branch convolutional neural network model has three branches and can determine the convolution size, the number of filters, the dropout probability and the attention parameters for all data. At the same time, the model can be tailored to a particular subject while increasing its applicability. In the first convolution layer, based on local and global modulations, the model can learn temporal and spatial properties through spatially distributed dissociated filters. To this end, the input data are represented as a 2D array in which the rows correspond to the electrodes and the columns to the time steps.
The MI-EEG signal dataset is represented as pairs $(X_i, y_i)$, where $i$ is the trial number, $X_i$ is a signal and $y_i$ its corresponding class label with $y_i \in \{1, \ldots, n\}$, $n$ being the number of classes. $X_i \in \mathbb{R}^{C \times T}$ represents the input signal (a 2D array), where $C$ is the number of EEG channels and $T$ is the length of the EEG signal input.
The output of the classification system is the output from the last layer, a layer with softmax activation. The output of this layer is a vector that contains the probability of each possible result or category. The sum of the probabilities of all possible outcomes or categories in the vector is 1. The softmax is defined as:
$$\sigma(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{n} e^{z_j}}$$
where $z$ is the input vector containing $n$ elements, one per class (result), $z_i$ is the $i$-th element of $z$, and $n$ is the number of classes. The cost function, or loss function, is the categorical cross entropy: it takes the output probabilities from the softmax function and measures their distance from the true values, which are 0 or 1 for each class/result.
Cross-entropy loss is used to adjust the model weights during training so as to minimize the loss; the smaller the loss, the better the model performance. The cross-entropy loss function is defined as:
$$CE = -\sum_{i=1}^{n} y_i \log_2\left(p_i\right)$$
where, for the $i$-th class, $y_i$ is the true value and $p_i$ is the softmax probability of that class; the logarithm is taken to base 2.
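The softmax and base-2 categorical cross-entropy defined above can be checked with a few lines of NumPy for a single four-class MI trial (left hand, right hand, foot, tongue); the logits below are made-up values for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())            # subtract the max for numerical stability
    return e / e.sum()

def cross_entropy(y_true, p, eps=1e-12):
    return -np.sum(y_true * np.log2(p + eps))   # logarithm taken to base 2, as in the text

z = np.array([2.0, 0.5, -1.0, 0.1])    # raw outputs of the last layer (made up)
p = softmax(z)                          # class probabilities, which sum to 1
y = np.array([1, 0, 0, 0])              # true class: left hand
print(p, p.sum(), cross_entropy(y, p))
```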
As set out above, the multi-branch convolutional neural network model (Multi-Branch EEGNet model, MBEEGCBAM model) can be divided into two parts: the EEGNet convolution block and the CBAM module. The architecture of MBEEGCBAM is shown in fig. 8: it has three different branches, each with an EEGNet convolution block, a channel attention block and a spatial attention block, which are then connected through a concatenation layer. Each branch has a different number of parameters so as to capture different features.
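A hedged sketch of how the three branches could be assembled, assuming the `eegnet_block` and `CBAM` helpers from the sketches above: each branch applies an EEGNet-style block and CBAM, the branch outputs are flattened and concatenated, and a dense softmax layer produces the class probabilities. The per-branch filter counts, kernel sizes, dropout rates and attention ratios loosely echo the training settings listed under S103 below; they are not asserted to reproduce the patented model exactly.

```python
import tensorflow as tf
from tensorflow.keras import layers

def mbeegcbam(n_channels=22, n_samples=1125, n_classes=4):
    inp = layers.Input(shape=(n_channels, n_samples, 1))
    # One configuration per branch (filters, kernel length, dropout, attention ratio)
    branch_cfg = [dict(f1=4,  kern_len=6,  dropout=0.0, ratio=2),
                  dict(f1=8,  kern_len=32, dropout=0.1, ratio=8),
                  dict(f1=16, kern_len=64, dropout=0.2, ratio=8)]
    feats = []
    for cfg in branch_cfg:
        ratio = cfg.pop('ratio')
        x = eegnet_block(n_channels, n_samples, **cfg)(inp)
        x = CBAM(channels=x.shape[-1], ratio=ratio)(x)   # channel + spatial attention
        feats.append(layers.Flatten()(x))
    merged = layers.Concatenate()(feats)                 # concatenation of the three branches
    out = layers.Dense(n_classes, activation='softmax')(merged)
    return tf.keras.Model(inp, out, name='MBEEGCBAM')
```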
S103, training the multi-branch convolutional neural network model through a training set;
specifically, all training data in the training set are model trained by adopting global parameters. The parameters of the first branch are set as follows: EEGNet convolution block uses elu activation function, 4 time sequence filters, core size 6, discard rate 0; the attention module uses a relu activation function with a ratio of 2 and a kernel size of 2. The parameters of the second branch are set as follows: the EEGNet convolution block uses elu activation function, 8 timing filters, core size 32, discard rate 0.1; the attention module uses a relu activation function with a ratio of 8 and a kernel size of 4. The parameters of the third branch are set as follows: EEGNet convolution block uses elu activation function, 16 timing filters, core size 64, discard rate 0.2; the attention module uses a relu activation function with a ratio of 8 and a kernel size of 2.
During the training phase, a callback is used at the end of each epoch to save the best model weights based on the current best accuracy, and the saved best model is loaded during the testing phase. The learning rate is 0.0009, the batch size is 64, and the number of training epochs is 1000. An Adam optimizer is used, and the cost function is the cross-entropy error function.
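The training setup described above could be sketched in Keras as follows, reusing the `mbeegcbam` helper from the sketch above; `X_train`, `y_train`, `X_val` and `y_val` stand for preprocessed arrays shaped (trials, channels, samples, 1) and one-hot labels, and are placeholders here, so the actual `fit` call is shown commented out.

```python
import tensorflow as tf

model = mbeegcbam()                      # three-branch model from the sketch above
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0009),
              loss='categorical_crossentropy',   # cross-entropy cost function
              metrics=['accuracy'])

# Callback that keeps the weights of the epoch with the best validation accuracy
ckpt = tf.keras.callbacks.ModelCheckpoint('best_mbeegcbam.weights.h5',
                                          monitor='val_accuracy',
                                          save_best_only=True,
                                          save_weights_only=True)

# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           batch_size=64, epochs=1000, callbacks=[ckpt])
# model.load_weights('best_mbeegcbam.weights.h5')   # reload the best model for testing
```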
And S104, performing performance evaluation on the trained multi-branch convolutional neural network through the test set to obtain a target multi-branch convolutional neural network model.
Specifically, accuracy is used to evaluate the model's performance. The MBEEGCBAM model achieves good average classification accuracy on the training dataset and the validation dataset, 82.85% and 95.45% respectively, higher than other traditional models.
S105, inputting the MI-EEG signals to be classified into a target multi-branch convolutional neural network model to obtain a classification result;
the multi-branch convolutional neural network model of the MI-EEG signal classifying method of the invention can capture more characteristics by connecting the characteristics of a plurality of branches of a basic model before classifying the branches by using a softmax layer, so that EEG-MI signals can be classified more accurately, and the multi-branch convolutional neural network model has good performance in a training set and a verification set, and has average classification accuracy of 82.85 percent and 95.45 percent respectively.
The multi-branch convolutional neural network model uses global parameters for all test subjects, and the introduction of the convolutional attention module makes the feature maps more refined along the channel and spatial axes, achieving higher accuracy. The model can therefore complete subject-specific tasks while serving as a general model, saving computational cost.
FIGS. 2 and 3 are architecture diagrams of embodiments of the classification system for MI-EEG signals of the present invention; as shown in fig. 2 and 3, the classification system for MI-EEG signals according to an embodiment of the invention comprises the following modules:
an acquisition module for acquiring an MI-EEG signal dataset;
the preprocessing module is used for preprocessing the MI-EEG signal data set to obtain a preprocessed data set, and dividing a training set and a testing set based on the data set;
the construction module is used for constructing a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, the first branch, the second branch and the third branch respectively comprise an EEGNet convolutional block and a convolutional attention module, and the first branch, the second branch and the third branch are connected through a softmax layer;
the training module is used for training the multi-branch convolutional neural network model through the training set;
the test module is used for performing performance evaluation on the trained multi-branch convolutional neural network through the test set to obtain a target multi-branch convolutional neural network model;
the multi-branch convolutional neural network model obtains a classification result based on the MI-EEG signals to be classified.
The EEGNet convolution block comprises a first convolution layer, a second convolution layer and a third convolution layer which are sequentially connected, and window sizes of the first convolution layer, the second convolution layer and the third convolution layer are different.
The convolution attention module comprises a channel attention sub-module and a space attention sub-module;
the channel attention submodule comprises an average pool layer, a maximum pool layer and a shared network, the shared network comprises a multi-layer perceptron, the multi-layer perceptron is provided with a hidden layer, the average pool layer and the maximum pool layer receive input characteristics and then generate a characteristic map, and the shared network receives the characteristic map and then outputs channel refined characteristics;
the spatial attention submodule comprises an average pool layer, a maximum pool layer and a convolution layer, and the channel refined features output refined feature mapping through the average pool layer, the maximum pool layer and the convolution layer.
The classification system of MI-EEG signals further comprises a setting module for setting different numbers of parameters for the EEGNet convolution blocks and the convolution attention modules in the first, second and third branches, respectively, to capture different features.
The classification system of MI-EEG signals of the invention acquires an MI-EEG signal data set through the acquisition module; preprocesses the MI-EEG signal data set through the preprocessing module to obtain a preprocessed data set, and divides a training set and a testing set based on the data set; constructs a multi-branch convolutional neural network model through the construction module, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, the first branch, the second branch and the third branch respectively comprise an EEGNet convolutional block and a convolutional attention module, and the first branch, the second branch and the third branch are connected through a softmax layer; trains the multi-branch convolutional neural network model through the training set; performs performance evaluation on the trained multi-branch convolutional neural network through the test set to obtain a target multi-branch convolutional neural network model; and the multi-branch convolutional neural network model obtains a classification result based on the MI-EEG signals to be classified. This solves the problem that existing traditional models cannot complete subject-specific classification tasks while keeping fixed hyperparameters.
Fig. 9 is a schematic diagram of an entity structure of an electronic device according to an embodiment of the present invention, as shown in fig. 9, an electronic device 70 includes: a processor 701, a memory 702, and a bus 703;
wherein, the processor 701 and the memory 702 complete communication with each other through the bus 703;
the processor 701 is configured to invoke program instructions in the memory 702 to perform the methods provided by the above-described method embodiments, for example, including: acquiring an MI-EEG signal data set, preprocessing the MI-EEG signal data set to obtain a preprocessed data set, and dividing a training set and a testing set based on the data set; constructing a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, the first branch, the second branch and the third branch respectively comprise an EEGNet convolutional block and a convolutional attention module, and the first branch, the second branch and the third branch are connected through a softmax layer; training the multi-branch convolutional neural network model through the training set; performing performance evaluation on the trained multi-branch convolutional neural network through the test set to obtain a target multi-branch convolutional neural network model; and inputting the MI-EEG signals to be classified into the target multi-branch convolutional neural network model to obtain a classification result.
The present embodiment provides a non-transitory computer readable storage medium storing computer instructions that cause a computer to perform the methods provided by the above-described method embodiments, for example, including: acquiring an MI-EEG signal data set, preprocessing the MI-EEG signal data set to obtain a preprocessed data set, and dividing a training set and a testing set based on the data set; constructing a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, the first branch, the second branch and the third branch respectively comprise an EEGNet convolutional block and a convolutional attention module, and the first branch, the second branch and the third branch are connected through a softmax layer; training the multi-branch convolutional neural network model through the training set; performing performance evaluation on the trained multi-branch convolutional neural network through the test set to obtain a target multi-branch convolutional neural network model; and inputting the MI-EEG signals to be classified into the target multi-branch convolutional neural network model to obtain a classification result.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be implemented by hardware associated with program instructions, where the foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; the aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
The apparatus embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus the necessary general hardware platform, or of course by means of hardware. Based on such understanding, the foregoing technical solutions may be embodied essentially, or in the parts contributing to the prior art, in the form of a software product, which may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods described in the various embodiments or in parts of the embodiments.
While the invention has been described in detail with reference to the foregoing general description and specific examples, it will be apparent to those skilled in the art that modifications and improvements can be made thereto. Accordingly, such modifications or improvements made without departing from the spirit of the invention are intended to fall within the scope of the invention as claimed.

Claims (8)

1. A method of classifying MI-EEG signals, the method comprising in particular:
acquiring an MI-EEG signal data set, preprocessing the MI-EEG signal data set to obtain a preprocessed data set, and dividing a training set and a testing set based on the preprocessed data set;
constructing a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, the first branch, the second branch and the third branch respectively comprise an EEGNet convolutional block and a convolutional attention module, and the first branch, the second branch and the third branch are connected through a softmax layer;
the convolution attention module comprises a channel attention sub-module and a space attention sub-module;
the channel attention submodule comprises an average pool layer, a maximum pool layer and a shared network, the shared network comprises a multi-layer perceptron, the multi-layer perceptron is provided with a hidden layer, the average pool layer and the maximum pool layer receive input characteristics and then generate a characteristic map, and the shared network receives the characteristic map and then outputs channel refined characteristics;
calculating activation or output of a jth neuron of an ith filter in the hidden layer by formula 1;
formula 1: $a_{ij} = f\left(b_i + W_i^{T} X_j\right)$;
wherein $a_{ij}$ is the activation or output of the $j$-th neuron of the $i$-th filter in the hidden layer, $f$ corresponds to the activation function, $b_i$ is the shared overall bias of the filter, $K$ is the kernel size, $W_i = [w_{i1}, w_{i2}, \ldots, w_{iK}]$ is the vector of shared weights, $X_j = [x_j, x_{j+1}, \ldots, x_{j+K-1}]$ is the vector output by the preceding neurons, and $T$ represents the transpose operation;
the spatial attention submodule comprises an average pool layer, a maximum pool layer and a convolution layer, and the channel refined features output refined feature mapping through the average pool layer, the maximum pool layer and the convolution layer;
training the multi-branch convolutional neural network model through the training set;
performing performance evaluation on the trained multi-branch convolutional neural network through the test set to obtain a target multi-branch convolutional neural network model;
and inputting the MI-EEG signals to be classified into the target multi-branch convolutional neural network model to obtain a classification result.
2. The method of classifying MI-EEG signals according to claim 1, wherein the EEGNet convolution block comprises a first convolution layer, a second convolution layer, and a third convolution layer connected in sequence, and wherein the window sizes of the first convolution layer, the second convolution layer, and the third convolution layer are different.
3. The method of classifying MI-EEG signals according to claim 1, wherein said constructing a multi-branch convolutional neural network model, wherein said multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, said first branch, said second branch and said third branch comprising an EEGNet convolution block and a convolution attention module, respectively, said first branch, said second branch and said third branch being connected by a softmax layer, comprising:
different numbers of parameters are set by EEGNet convolution blocks and convolution attention modules in the first, second and third branches, respectively, to capture different features.
4. A classification system for MI-EEG signals, comprising:
an acquisition module for acquiring an MI-EEG signal dataset;
the preprocessing module is used for preprocessing the MI-EEG signal data set to obtain a preprocessed data set, and dividing a training set and a testing set based on the preprocessed data set;
the construction module is used for constructing a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, the first branch, the second branch and the third branch respectively comprise an EEGNet convolutional block and a convolutional attention module, and the first branch, the second branch and the third branch are connected through a softmax layer;
the convolution attention module comprises a channel attention sub-module and a space attention sub-module;
the channel attention submodule comprises an average pool layer, a maximum pool layer and a shared network, the shared network comprises a multi-layer perceptron, the multi-layer perceptron is provided with a hidden layer, the average pool layer and the maximum pool layer receive input characteristics and then generate a characteristic map, and the shared network receives the characteristic map and then outputs channel refined characteristics;
calculating activation or output of a jth neuron of an ith filter in the hidden layer by formula 1;
formula 1: $a_{ij} = f\left(b_i + W_i^{T} X_j\right)$;
wherein $a_{ij}$ is the activation or output of the $j$-th neuron of the $i$-th filter in the hidden layer, $f$ corresponds to the activation function, $b_i$ is the shared overall bias of the filter, $K$ is the kernel size, $W_i = [w_{i1}, w_{i2}, \ldots, w_{iK}]$ is the vector of shared weights, $X_j = [x_j, x_{j+1}, \ldots, x_{j+K-1}]$ is the vector output by the preceding neurons, and $T$ represents the transpose operation;
the spatial attention submodule comprises an average pool layer, a maximum pool layer and a convolution layer, and the channel refined features output refined feature mapping through the average pool layer, the maximum pool layer and the convolution layer;
the training module is used for training the multi-branch convolutional neural network model through the training set;
the test module is used for performing performance evaluation on the trained multi-branch convolutional neural network through the test set to obtain a target multi-branch convolutional neural network model;
the multi-branch convolutional neural network model obtains a classification result based on the MI-EEG signals to be classified.
5. The classification system of MI-EEG signals according to claim 4, wherein the EEGNet convolution block comprises a first convolution layer, a second convolution layer and a third convolution layer connected in sequence, and wherein the window sizes of the first convolution layer, the second convolution layer and the third convolution layer are different.
6. The classification system of MI-EEG signals according to claim 4, further comprising a setting module for setting different numbers of parameters for the EEGNet convolution blocks and convolution attention modules in the first, second and third branches, respectively, to capture different features.
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 3 when the computer program is executed.
8. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any of claims 1 to 3.
CN202310218867.3A 2023-03-09 2023-03-09 Classification system, method, electronic device and storage medium for MI-EEG signals Active CN116058852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310218867.3A CN116058852B (en) 2023-03-09 2023-03-09 Classification system, method, electronic device and storage medium for MI-EEG signals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310218867.3A CN116058852B (en) 2023-03-09 2023-03-09 Classification system, method, electronic device and storage medium for MI-EEG signals

Publications (2)

Publication Number Publication Date
CN116058852A CN116058852A (en) 2023-05-05
CN116058852B true CN116058852B (en) 2023-12-22

Family

ID=86169960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310218867.3A Active CN116058852B (en) 2023-03-09 2023-03-09 Classification system, method, electronic device and storage medium for MI-EEG signals

Country Status (1)

Country Link
CN (1) CN116058852B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163180A (en) * 2019-05-29 2019-08-23 长春思帕德科技有限公司 Mental imagery eeg data classification method and system
CN113469198A (en) * 2021-06-30 2021-10-01 南京航空航天大学 Image classification method based on improved VGG convolutional neural network model
CN113610144A (en) * 2021-08-02 2021-11-05 合肥市正茂科技有限公司 Vehicle classification method based on multi-branch local attention network
CN114266276A (en) * 2021-12-25 2022-04-01 北京工业大学 Motor imagery electroencephalogram signal classification method based on channel attention and multi-scale time domain convolution
WO2022184124A1 (en) * 2021-03-05 2022-09-09 腾讯科技(深圳)有限公司 Physiological electrical signal classification and processing method and apparatus, computer device, and storage medium
CN115211870A (en) * 2022-08-23 2022-10-21 浙江大学 Neonate's brain electric signal convulsion discharge detecting system based on multiscale feature fusion network
CN115481695A (en) * 2022-09-26 2022-12-16 云南大学 Motor imagery classification method by utilizing multi-branch feature extraction

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113693613B (en) * 2021-02-26 2024-05-24 腾讯科技(深圳)有限公司 Electroencephalogram signal classification method, electroencephalogram signal classification device, computer equipment and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163180A (en) * 2019-05-29 2019-08-23 长春思帕德科技有限公司 Mental imagery eeg data classification method and system
WO2022184124A1 (en) * 2021-03-05 2022-09-09 腾讯科技(深圳)有限公司 Physiological electrical signal classification and processing method and apparatus, computer device, and storage medium
CN113469198A (en) * 2021-06-30 2021-10-01 南京航空航天大学 Image classification method based on improved VGG convolutional neural network model
CN113610144A (en) * 2021-08-02 2021-11-05 合肥市正茂科技有限公司 Vehicle classification method based on multi-branch local attention network
CN114266276A (en) * 2021-12-25 2022-04-01 北京工业大学 Motor imagery electroencephalogram signal classification method based on channel attention and multi-scale time domain convolution
CN115211870A (en) * 2022-08-23 2022-10-21 浙江大学 Neonate's brain electric signal convulsion discharge detecting system based on multiscale feature fusion network
CN115481695A (en) * 2022-09-26 2022-12-16 云南大学 Motor imagery classification method by utilizing multi-branch feature extraction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Motor imagery EEG signal analysis and intention recognition based on a multi-feature convolutional neural network; He Qun; Shao Dandan; Wang Yuwen; Zhang Yuanyuan; Xie Ping; Chinese Journal of Scientific Instrument (Issue 01); full text *

Also Published As

Publication number Publication date
CN116058852A (en) 2023-05-05

Similar Documents

Publication Publication Date Title
Baldwin et al. Time-ordered recent event (tore) volumes for event cameras
CN112446270B (en) Training method of pedestrian re-recognition network, pedestrian re-recognition method and device
CN112215119B (en) Small target identification method, device and medium based on super-resolution reconstruction
US20180314937A1 (en) Learning-based noise reduction in data produced by a network of sensors, such as one incorporated into loose-fitting clothing worn by a person
CN112799128B (en) Method for seismic signal detection and seismic phase extraction
CN112734809B (en) On-line multi-pedestrian tracking method and device based on Deep-Sort tracking framework
CN111863244B (en) Functional connection mental disease classification method and system based on sparse pooling graph convolution
CN111695673B (en) Method for training neural network predictor, image processing method and device
CN113887559A (en) Brain-computer information fusion classification method and system for brain off-loop application
CN117150346A (en) EEG-based motor imagery electroencephalogram classification method, device, equipment and medium
CN114027786A (en) Sleep disordered breathing detection method and system based on self-supervision memory network
CN117033985A (en) Motor imagery electroencephalogram classification method based on ResCNN-BiGRU
CN116612335A (en) Few-sample fine-granularity image classification method based on contrast learning
CN113051983A (en) Method for training field crop disease recognition model and field crop disease recognition
Chu et al. Ahed: A heterogeneous-domain deep learning model for IoT-enabled smart health with few-labeled EEG data
CN117456394A (en) Unmanned aerial vehicle image target detection method, unmanned aerial vehicle image target detection device, unmanned aerial vehicle image target detection equipment and unmanned aerial vehicle image target detection medium
CN116058852B (en) Classification system, method, electronic device and storage medium for MI-EEG signals
CN117975086A (en) Method and system for classifying few-sample images based on metric element learning
CN107679487A (en) Missing Persons' discrimination method and system
CN116758331A (en) Object detection method, device and storage medium
CN116721132A (en) Multi-target tracking method, system and equipment for industrially cultivated fishes
CN115886833A (en) Electrocardiosignal classification method and device, computer readable medium and electronic equipment
US20220343134A1 (en) Convolutional neural network architectures based on synaptic connectivity
CN115273814A (en) Pseudo voice detection method, device, computer equipment and storage medium
CN109003680B (en) Epileptic data statistical method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant