CN116058852B - Classification system, method, electronic device and storage medium for MI-EEG signals - Google Patents
Classification system, method, electronic device and storage medium for MI-EEG signals
- Publication number
- CN116058852B (application CN202310218867.3A)
- Authority
- CN
- China
- Prior art keywords
- branch
- layer
- convolution
- neural network
- convolutional neural
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 30
- 238000013527 convolutional neural network Methods 0.000 claims abstract description 80
- 238000012549 training Methods 0.000 claims abstract description 41
- 238000012360 testing method Methods 0.000 claims abstract description 29
- 238000007781 pre-processing Methods 0.000 claims abstract description 11
- 230000006870 function Effects 0.000 claims description 18
- 230000004913 activation Effects 0.000 claims description 16
- 210000002569 neuron Anatomy 0.000 claims description 13
- 238000004590 computer program Methods 0.000 claims description 7
- 238000011156 evaluation Methods 0.000 claims description 5
- 238000013507 mapping Methods 0.000 claims description 5
- 238000010276 construction Methods 0.000 claims description 3
- 238000000537 electroencephalography Methods 0.000 description 47
- 238000011176 pooling Methods 0.000 description 38
- 238000010586 diagram Methods 0.000 description 9
- 238000005516 engineering process Methods 0.000 description 4
- 238000010200 validation analysis Methods 0.000 description 4
- 230000004048 modification Effects 0.000 description 3
- 238000012986 modification Methods 0.000 description 3
- 238000010606 normalization Methods 0.000 description 3
- 239000011159 matrix material Substances 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 238000005070 sampling Methods 0.000 description 2
- 230000002123 temporal effect Effects 0.000 description 2
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 210000004556 brain Anatomy 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 230000001186 cumulative effect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 210000002364 input neuron Anatomy 0.000 description 1
- 238000003062 neural network model Methods 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/369—Electroencephalography [EEG]
- A61B5/377—Electroencephalography [EEG] using evoked responses
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Veterinary Medicine (AREA)
- Mathematical Physics (AREA)
- Heart & Thoracic Surgery (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Pathology (AREA)
- Public Health (AREA)
- Theoretical Computer Science (AREA)
- Psychiatry (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Fuzzy Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physiology (AREA)
- Signal Processing (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Psychology (AREA)
- Image Analysis (AREA)
Abstract
Description
Technical Field
The present invention relates to the technical field of MI-EEG signal classification, and in particular to a classification system, method, electronic device and storage medium for MI-EEG signals.
Background Art
Motor imagery (MI) is the act of imagining a limb movement. It is generated when the brain imagines moving a specific limb and can be captured and identified through electroencephalography (EEG). In the early days, common spatial patterns (CSP) was the best-performing technique for identifying EEG-based MI signals.
Decoding EEG-MI signals has potential applications in fields such as sports medicine. However, classification is highly challenging because EEG-MI signals have a low signal-to-noise ratio, contain motion artifacts and noise, and exhibit spatial correlations. Moreover, EEG-MI signals are highly subject-specific: they differ greatly between subjects and may also differ greatly between recording sessions of the same subject. Such subject-specific tasks cost traditional models a great deal of time and computation.
Summary of the Invention
The purpose of the embodiments of the present invention is to provide a classification system, method, electronic device and storage medium for MI-EEG signals, so as to solve the problem that existing traditional models cannot accomplish subject-specific classification tasks while keeping fixed hyperparameters.
To achieve the above objective, an embodiment of the present invention provides a classification method for MI-EEG signals, the method specifically comprising:
acquiring an MI-EEG signal data set, preprocessing the MI-EEG signal data set to obtain a preprocessed data set, and dividing the data set into a training set and a test set;
constructing a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, the first branch, the second branch and the third branch each comprising an EEGNet convolution block and a convolutional attention module, and the first branch, the second branch and the third branch being connected through a softmax layer;
training the multi-branch convolutional neural network model with the training set;
evaluating the performance of the trained multi-branch convolutional neural network on the test set to obtain a target multi-branch convolutional neural network model;
inputting the MI-EEG signal to be classified into the target multi-branch convolutional neural network model to obtain a classification result.
On the basis of the above technical solution, the present invention can be further improved as follows:
Further, the EEGNet convolution block comprises a first convolutional layer, a second convolutional layer and a third convolutional layer connected in sequence, and the first convolutional layer, the second convolutional layer and the third convolutional layer have different window sizes.
Further, the convolutional attention module comprises a channel attention sub-module and a spatial attention sub-module;
the channel attention sub-module comprises an average pooling layer, a max pooling layer and a shared network, the shared network comprising a multi-layer perceptron with one hidden layer; the average pooling layer and the max pooling layer receive the input features and generate feature maps, and the shared network receives the feature maps and outputs channel-refined features;
the spatial attention sub-module comprises an average pooling layer, a max pooling layer and a convolutional layer, and the channel-refined features pass through the average pooling layer, the max pooling layer and the convolutional layer to output a refined feature map.
Further, constructing the multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, the first branch, the second branch and the third branch each comprising an EEGNet convolution block and a convolutional attention module, and the first branch, the second branch and the third branch being connected through a softmax layer, comprises:
setting different numbers of parameters for the EEGNet convolution blocks and the convolutional attention modules in the first branch, the second branch and the third branch respectively, so as to capture different features.
A classification system for MI-EEG signals, comprising:
an acquisition module, configured to acquire an MI-EEG signal data set;
a preprocessing module, configured to preprocess the MI-EEG signal data set to obtain a preprocessed data set and to divide the data set into a training set and a test set;
a construction module, configured to construct a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, the first branch, the second branch and the third branch each comprising an EEGNet convolution block and a convolutional attention module, and the first branch, the second branch and the third branch being connected through a softmax layer;
a training module, configured to train the multi-branch convolutional neural network model with the training set;
a test module, configured to evaluate the performance of the trained multi-branch convolutional neural network on the test set to obtain a target multi-branch convolutional neural network model;
the multi-branch convolutional neural network model obtains a classification result based on the MI-EEG signal to be classified.
Further, the EEGNet convolution block comprises a first convolutional layer, a second convolutional layer and a third convolutional layer connected in sequence, and the first convolutional layer, the second convolutional layer and the third convolutional layer have different window sizes.
Further, the convolutional attention module comprises a channel attention sub-module and a spatial attention sub-module;
the channel attention sub-module comprises an average pooling layer, a max pooling layer and a shared network, the shared network comprising a multi-layer perceptron with one hidden layer; the average pooling layer and the max pooling layer receive the input features and generate feature maps, and the shared network receives the feature maps and outputs channel-refined features;
the spatial attention sub-module comprises an average pooling layer, a max pooling layer and a convolutional layer, and the channel-refined features pass through the average pooling layer, the max pooling layer and the convolutional layer to output a refined feature map.
Further, the classification system for MI-EEG signals further comprises a setting module, the setting module being configured to set different numbers of parameters for the EEGNet convolution blocks and the convolutional attention modules in the first branch, the second branch and the third branch respectively, so as to capture different features.
An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method when executing the computer program.
A non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method.
The embodiments of the present invention have the following advantages:
The classification method for MI-EEG signals of the present invention acquires an MI-EEG signal data set, preprocesses the MI-EEG signal data set to obtain a preprocessed data set, and divides the data set into a training set and a test set; constructs a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, each comprising an EEGNet convolution block and a convolutional attention module, the three branches being connected through a softmax layer; trains the multi-branch convolutional neural network model with the training set; evaluates the performance of the trained multi-branch convolutional neural network on the test set to obtain a target multi-branch convolutional neural network model; and inputs the MI-EEG signal to be classified into the target multi-branch convolutional neural network model to obtain a classification result. This solves the problem that existing traditional models cannot accomplish subject-specific classification tasks while keeping fixed hyperparameters.
Brief Description of the Drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are merely exemplary, and those of ordinary skill in the art can derive other implementation drawings from the provided drawings without creative effort.
The structures, proportions, sizes and the like shown in this specification are only intended to accompany the content disclosed in the specification so that those familiar with the art can understand and read it; they are not intended to limit the conditions under which the present invention can be implemented and therefore have no substantive technical significance. Any structural modification, change of proportional relationship or adjustment of size that does not affect the effects produced and the objectives achieved by the present invention shall still fall within the scope of the technical content disclosed by the present invention.
Figure 1 is a flow chart of the MI-EEG signal classification method of the present invention;
Figure 2 is a first architecture diagram of the MI-EEG signal classification system of the present invention;
Figure 3 is a second architecture diagram of the MI-EEG signal classification system of the present invention;
Figure 4 is an architecture diagram of the EEGNet convolution block of the present invention;
Figure 5 is an architecture diagram of the convolutional attention module of the present invention;
Figure 6 is an architecture diagram of the channel attention sub-module of the present invention;
Figure 7 is an architecture diagram of the spatial attention sub-module of the present invention;
Figure 8 is an architecture diagram of the multi-branch convolutional neural network model of the present invention;
Figure 9 is a schematic diagram of the physical structure of the electronic device provided by the present invention.
The reference numerals are:
acquisition module 10, preprocessing module 20, construction module 30, training module 40, test module 50, setting module 60, electronic device 70, processor 701, memory 702, bus 703.
Detailed Description of the Embodiments
The following specific embodiments illustrate the implementation of the present invention. Those familiar with the art can easily understand other advantages and effects of the present invention from the content disclosed in this specification. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
Embodiment
Figure 1 is a flow chart of an embodiment of the MI-EEG signal classification method of the present invention. As shown in Figure 1, the MI-EEG signal classification method provided by an embodiment of the present invention comprises the following steps:
S101: acquire an MI-EEG signal data set, preprocess the MI-EEG signal data set to obtain a preprocessed data set, and divide the data set into a training set and a test set.
Specifically, the training set was collected with 22 EEG electrodes at a sampling frequency of 250 Hz over 4.5 s (250 × 4.5 = 1125 samples). Each trial produces a data matrix of dimension (22 × 1125). The MI tasks fall into four classes: left hand, right hand, foot and tongue.
The validation set was collected with 128 channels at a sampling frequency of 500 Hz. The validation set was downsampled from 500 Hz to 250 Hz to improve data quality. The number of signal channels was reduced from 128 to 44, thereby excluding electrodes not associated with the MI region. To be consistent with the training set, each trial lasts 4.5 s (a cue is given 0.5 s before the end of the trial), each trial produces 1125 samples, and the data matrix is (44 × 1125).
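As an illustration of this preprocessing step, the following is a minimal Python sketch, assuming each raw validation trial is available as a NumPy array of shape (128, n_samples) and that the indices of the 44 MI-related electrodes are known; the channel index list used here is a hypothetical placeholder, not taken from the patent.

```python
import numpy as np
from scipy.signal import decimate

FS_RAW, FS_TARGET = 500, 250           # validation-set sampling rates (Hz)
TRIAL_SECONDS = 4.5                    # trial length described above
MI_CHANNELS = list(range(44))          # hypothetical indices of the 44 MI-related electrodes

def preprocess_trial(raw_trial: np.ndarray) -> np.ndarray:
    """Reduce a (128, n_samples) trial to the (44, 1125) format used for training."""
    trial = raw_trial[MI_CHANNELS, :]                      # keep MI-related electrodes only
    trial = decimate(trial, FS_RAW // FS_TARGET, axis=1)   # anti-aliased 500 Hz -> 250 Hz
    n_samples = int(TRIAL_SECONDS * FS_TARGET)             # 4.5 s at 250 Hz = 1125 samples
    return trial[:, :n_samples]
```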
S102: construct a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, each comprising an EEGNet convolution block and a convolutional attention module, and the first branch, the second branch and the third branch are connected through a softmax layer.
Specifically, the EEGNet convolution block comprises a first convolutional layer, a second convolutional layer and a third convolutional layer connected in sequence, and the three convolutional layers have different window sizes, where the window size is defined by the kernel size. The first convolutional layer uses 2D filters followed by batch normalization, which helps speed up training and regularize the model. The second convolutional layer uses depthwise convolution, followed by batch normalization, an exponential linear unit (ELU) activation function, average pooling and a dropout layer. The third convolutional layer uses separable convolution. The simplified architecture of EEGNet is shown in Figure 4.
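The following is a minimal PyTorch sketch of an EEGNet-style convolution block in the spirit of Figure 4. It is not a reproduction of the patented block: the default filter count, kernel length and dropout rate are placeholders borrowed from the first-branch settings described in step S103, and the pooling sizes are assumptions.

```python
import torch
import torch.nn as nn

class EEGNetBlock(nn.Module):
    """EEGNet-style block: temporal conv -> depthwise spatial conv -> separable conv."""
    def __init__(self, n_channels=22, f1=4, depth=2, kernel_len=6, dropout=0.0):
        super().__init__()
        f2 = f1 * depth
        # First layer: 2D temporal filters over the time axis, then batch normalization.
        self.temporal = nn.Sequential(
            nn.Conv2d(1, f1, (1, kernel_len), padding="same", bias=False),
            nn.BatchNorm2d(f1),
        )
        # Second layer: depthwise convolution across electrodes, batch norm, ELU,
        # average pooling and dropout.
        self.depthwise = nn.Sequential(
            nn.Conv2d(f1, f2, (n_channels, 1), groups=f1, bias=False),
            nn.BatchNorm2d(f2), nn.ELU(), nn.AvgPool2d((1, 4)), nn.Dropout(dropout),
        )
        # Third layer: separable convolution (depthwise followed by pointwise).
        self.separable = nn.Sequential(
            nn.Conv2d(f2, f2, (1, 16), groups=f2, padding="same", bias=False),
            nn.Conv2d(f2, f2, (1, 1), bias=False),
            nn.BatchNorm2d(f2), nn.ELU(), nn.AvgPool2d((1, 8)), nn.Dropout(dropout),
        )

    def forward(self, x):                # x: (batch, 1, n_channels, n_samples)
        return self.separable(self.depthwise(self.temporal(x)))
```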
The Convolutional Block Attention Module (CBAM) is a module that can be added to a model to attend to specific attributes while ignoring others, so as to emphasize important features along the channel and spatial axes. Each branch can learn features along the channel and spatial axes by using a sequence of attention modules (as shown in Figure 5). Because the convolutional attention module learns which information to highlight or suppress, it effectively helps information flow through the network.
The convolutional attention module has two sub-modules: a channel attention sub-module and a spatial attention sub-module.
As shown in Figure 6, the channel attention sub-module comprises an average pooling layer, a max pooling layer and a shared network, the shared network comprising a multi-layer perceptron with one hidden layer; the average pooling layer and the max pooling layer receive the input features and generate feature maps, and the shared network receives the feature maps and outputs channel-refined features.
As shown in Figure 7, the spatial attention sub-module comprises an average pooling layer, a max pooling layer and a convolutional layer; the channel-refined features pass through the average pooling layer, the max pooling layer and the convolutional layer to output a refined feature map.
In the channel attention sub-module, the input features from the preceding convolution block are passed simultaneously to the average pooling layer and the max pooling layer. The feature maps generated by the two pooling layers are then passed to the shared network, which consists of a multi-layer perceptron (MLP) with one hidden layer. In this hidden layer, a reduction ratio is used to reduce the number of activation maps and hence the number of parameters. After the shared network has been applied to each pooled feature map, the output feature maps are merged by element-wise summation. Then, to generate the input feature vector of the spatial attention sub-module, element-wise multiplication is applied between the output feature map of the channel attention sub-module and the input features of the attention module. When the spatial attention feature map is computed, average pooling and max pooling along the channel axis are used, and a convolutional layer is then used to construct an effective feature description.
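A compact PyTorch sketch of the two attention sub-modules described above is given below. It follows the generic CBAM formulation (shared MLP with one hidden layer and a reduction ratio, element-wise summation, channel-axis pooling followed by a convolution) and is not asserted to be identical to the patented module; the default ratio and convolution kernel size are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, ratio=2):
        super().__init__()
        # Shared MLP with one hidden layer; the reduction ratio shrinks the hidden width.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // ratio), nn.ReLU(),
            nn.Linear(channels // ratio, channels),
        )

    def forward(self, x):                          # x: (batch, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))         # descriptor from average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))          # descriptor from max pooling
        scale = torch.sigmoid(avg + mx)            # element-wise summation, then gate
        return x * scale[:, :, None, None]         # channel-refined features

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)          # average pooling along the channel axis
        mx = x.amax(dim=1, keepdim=True)           # max pooling along the channel axis
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale                           # spatially refined feature map
```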
A convolutional neural network (CNN) can handle the computational problems posed by high-dimensional data (such as EEG signals) by performing convolution operations within a neural network. The convolution window is the small portion of the input neurons to which each neuron in the first hidden layer of the CNN is connected. All neurons have a bias and every connection has a weight. The window slides across the entire input sequence, and each neuron in the hidden layer learns to analyze its particular segment. The kernel size is the size or length of the convolution window. Rather than learning new weights and a new bias for every hidden-layer neuron, the CNN learns only one set of weights and a single bias applied to all hidden-layer neurons, i.e. weight sharing, which is computed as follows:

$a_j^k = f\left(b^k + (\mathbf{w}^k)^{\mathrm{T}}\, \mathbf{x}_j\right)$

where $a_j^k$ is the activation or output of the $j$-th neuron of the $k$-th filter in the hidden layer, $f$ corresponds to the activation function, $b^k$ is the shared overall bias of the filter, $m$ is the kernel size, $\mathbf{w}^k \in \mathbb{R}^m$ is the vector of shared weights, $\mathbf{x}_j \in \mathbb{R}^m$ is the vector of outputs of the preceding neurons in the window, and $\mathrm{T}$ denotes the transpose operation.
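As a toy illustration of the weight sharing expressed by this formula, the same kernel and bias are reused at every window position; the numbers below are arbitrary and only show the mechanics.

```python
import numpy as np

x = np.array([0.2, -0.1, 0.4, 0.3, -0.2, 0.1])    # outputs of the preceding neurons
w = np.array([0.5, -0.3, 0.8])                    # shared weights (kernel size m = 3)
b = 0.1                                           # shared bias of the filter
elu = lambda v: np.where(v > 0, v, np.exp(v) - 1) # activation function f

# Slide the window over the input: every hidden-layer neuron reuses the same w and b.
a = np.array([elu(b + w @ x[j:j + len(w)]) for j in range(len(x) - len(w) + 1)])
print(a)
```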
In motor imagery, the optimal kernel size differs from subject to subject, and differs for the same subject at different times. To address the subject-specific problem of EEG-MI classification with CNNs, each branch of the multi-branch convolutional neural network model has a different kernel size, so that a suitable convolution scale (i.e. kernel size) can be found for every subject. Using different kernel sizes helps the multi-branch convolutional neural network model accomplish subject-specific tasks and makes it more general.
The multi-branch convolutional neural network model has three branches, for which the convolution sizes, numbers of filters, dropout probabilities and attention parameters can be determined for all data. At the same time, the model can be tailored to a specific subject while its applicability is increased. In the first convolutional layer, based on local and global modulation, the model can learn temporal and spatial attributes through spatially dissociated filters. To this end, the input data are represented as a 2D array in which the rows correspond to the number of electrodes and the columns to the number of time steps.
The MI-EEG signal data set is represented as:

$D = \{(X_i, y_i)\}_{i=1}^{N}$

where $N$ is the number of trials, $(X_i, y_i)$ is a signal and its corresponding class label, $y_i \in \{1, \dots, C\}$, where $C$ is the number of classes. $X_i$ represents the input signal (a 2D array), $X_i \in \mathbb{R}^{C_e \times T}$, where $C_e$ is the number of EEG channels and $T$ is the length of the EEG signal input.
The output of the classification system is the output of the last layer, which is a layer with a softmax activation function. The output of this layer is a vector containing the probability of each possible outcome or class; the probabilities of all possible outcomes or classes in the vector sum to 1. The softmax is defined as:

$\sigma(\mathbf{z})_i = \dfrac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}$

where $\mathbf{z}$ is the input vector of the softmax function, containing $n$ elements for the $K$ classes (outcomes), $z_i$ is the $i$-th element of the input vector, and $K$ is the number of classes. The cost function or loss function is the categorical cross-entropy, which takes the output probabilities from the softmax function and measures their distance from the true values, where each class/outcome is assigned a value of 0 or 1.
Using the cross-entropy loss when adjusting the model weights during training keeps the loss as small as possible; the smaller the loss, the better the model performs. The cross-entropy loss function is defined as:

$L = -\sum_{i=1}^{C} y_i \log_2(p_i)$

for $C$ classes, where $y_i$ is the true value and $p_i$ is the softmax probability of the $i$-th class, with the logarithm taken to base 2.
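A small NumPy illustration of the softmax and the base-2 categorical cross-entropy defined above; the logits and the one-hot target are arbitrary example values.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Probabilities over the classes; subtracting the maximum improves numerical stability."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(y_true: np.ndarray, p: np.ndarray) -> float:
    """Categorical cross-entropy with base-2 logarithm; y_true holds 0 or 1 per class."""
    return float(-np.sum(y_true * np.log2(p + 1e-12)))

logits = np.array([2.0, 0.5, -1.0, 0.1])   # arbitrary network output for the four MI classes
probs = softmax(logits)                    # sums to 1
target = np.array([1, 0, 0, 0])            # true class: left hand
print(probs, cross_entropy(target, probs))
```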
The multi-branch convolutional neural network model (the Multi-Branch EEGNet model, MBEEGCBAM model) can be divided into the two parts described above: the EEGNet convolution block and the CBAM module. The architecture of MBEEGCBAM is shown in Figure 8: it has three different branches, each with an EEGNet convolution block, a channel attention module and a spatial attention module, which are then joined by a concatenation layer. Each branch has a different number of parameters in order to capture different features.
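Building on the EEGNetBlock, ChannelAttention and SpatialAttention sketches above, the following shows one possible way to assemble three branches in the spirit of Figure 8. The per-branch settings mirror those listed in step S103 below, while the attention kernel sizes are simplified and the classifier input size is inferred lazily; this is an illustrative sketch, not the definitive MBEEGCBAM implementation.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One branch: EEGNet block followed by channel and spatial attention, then flatten."""
    def __init__(self, n_channels, f1, kernel_len, dropout, ratio):
        super().__init__()
        self.block = EEGNetBlock(n_channels, f1=f1, kernel_len=kernel_len, dropout=dropout)
        self.channel_att = ChannelAttention(f1 * 2, ratio=ratio)
        self.spatial_att = SpatialAttention()

    def forward(self, x):
        return torch.flatten(self.spatial_att(self.channel_att(self.block(x))), start_dim=1)

class MBEEGCBAMSketch(nn.Module):
    def __init__(self, n_channels=22, n_classes=4):
        super().__init__()
        # (temporal filters, kernel size, dropout rate, attention ratio) per branch.
        self.branches = nn.ModuleList([
            Branch(n_channels, 4, 6, 0.0, 2),
            Branch(n_channels, 8, 32, 0.1, 8),
            Branch(n_channels, 16, 64, 0.2, 8),
        ])
        self.classifier = nn.LazyLinear(n_classes)   # input size inferred at first forward

    def forward(self, x):                            # x: (batch, 1, n_channels, n_samples)
        feats = torch.cat([b(x) for b in self.branches], dim=1)   # concatenation layer
        # Raw class scores; torch.softmax(scores, dim=1) gives the probabilities of the
        # softmax layer (kept as logits here so a cross-entropy loss can be applied directly).
        return self.classifier(feats)
```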
S103: train the multi-branch convolutional neural network model with the training set.
Specifically, all training data in the training set are trained with global parameters. The parameters of the first branch are set as follows: the EEGNet convolution block uses the ELU activation function, 4 temporal filters, a kernel size of 6 and a dropout rate of 0; the attention module uses the ReLU activation function, a ratio of 2 and a kernel size of 2. The parameters of the second branch are set as follows: the EEGNet convolution block uses the ELU activation function, 8 temporal filters, a kernel size of 32 and a dropout rate of 0.1; the attention module uses the ReLU activation function, a ratio of 8 and a kernel size of 4. The parameters of the third branch are set as follows: the EEGNet convolution block uses the ELU activation function, 16 temporal filters, a kernel size of 64 and a dropout rate of 0.2; the attention module uses the ReLU activation function, a ratio of 8 and a kernel size of 2.
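For reference, these per-branch settings can be collected in a single configuration structure; the field names below are illustrative, not part of the patent.

```python
BRANCH_CONFIG = [
    # temporal filters, EEGNet kernel size, dropout rate, attention ratio, attention kernel size
    {"filters": 4,  "kernel": 6,  "dropout": 0.0, "att_ratio": 2, "att_kernel": 2},
    {"filters": 8,  "kernel": 32, "dropout": 0.1, "att_ratio": 8, "att_kernel": 4},
    {"filters": 16, "kernel": 64, "dropout": 0.2, "att_ratio": 8, "att_kernel": 2},
]
```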
During the training phase, a callback is used at the end of each training epoch to save the best model weights based on the current best accuracy, and the saved best model is loaded in the testing phase. The learning rate is 0.0009, the batch size is 64, and the number of training epochs is 1000. The Adam optimizer is used, and the cost function is the cross-entropy error function.
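A minimal PyTorch training-loop sketch following these settings is shown below; the dataset objects and the held-out split are assumed to exist, and saving the weights with the best accuracy so far stands in for the checkpoint callback.

```python
import torch
from torch.utils.data import DataLoader

def train(model, train_ds, val_ds, epochs=1000, lr=0.0009, batch_size=64):
    opt = torch.optim.Adam(model.parameters(), lr=lr)      # Adam optimizer
    loss_fn = torch.nn.CrossEntropyLoss()                  # cross-entropy cost function
    train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
    val_dl = DataLoader(val_ds, batch_size=batch_size)
    best_acc = 0.0
    for epoch in range(epochs):
        model.train()
        for x, y in train_dl:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        # Evaluate and keep the weights with the best accuracy so far ("callback").
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in val_dl:
                correct += (model(x).argmax(dim=1) == y).sum().item()
                total += y.numel()
        acc = correct / total
        if acc > best_acc:
            best_acc = acc
            torch.save(model.state_dict(), "best_model.pt")
    return best_acc
```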
S104: evaluate the performance of the trained multi-branch convolutional neural network on the test set to obtain the target multi-branch convolutional neural network model.
Specifically, accuracy is used to evaluate the model performance. The MBEEGCBAM model achieves good average classification accuracy on both the training set and the validation set, 82.85% and 95.45% respectively, both higher than other traditional models.
S105: input the MI-EEG signal to be classified into the target multi-branch convolutional neural network model to obtain a classification result.
In the multi-branch convolutional neural network model of the MI-EEG signal classification method of the present invention, the features of the multiple branches of the base model are concatenated before the branches are classified by the softmax layer, so that more features can be captured and the EEG-MI signals can be classified more accurately. The model shows good performance on both the training set and the validation set, with average classification accuracies of 82.85% and 95.45% respectively.
Moreover, the multi-branch convolutional neural network model uses global parameters for all test subjects and incorporates the convolutional attention module, so that the feature maps are more refined at the channel and spatial levels and the accuracy is higher; while serving as a general model, it can also accomplish subject-specific tasks, which saves computational cost.
Figure 2 and Figure 3 show an embodiment of the MI-EEG signal classification system of the present invention. As shown in Figures 2 and 3, the MI-EEG signal classification system provided by an embodiment of the present invention comprises:
an acquisition module, configured to acquire an MI-EEG signal data set;
a preprocessing module, configured to preprocess the MI-EEG signal data set to obtain a preprocessed data set and to divide the data set into a training set and a test set;
a construction module, configured to construct a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, the first branch, the second branch and the third branch each comprising an EEGNet convolution block and a convolutional attention module, and the first branch, the second branch and the third branch being connected through a softmax layer;
a training module, configured to train the multi-branch convolutional neural network model with the training set;
a test module, configured to evaluate the performance of the trained multi-branch convolutional neural network on the test set to obtain a target multi-branch convolutional neural network model;
the multi-branch convolutional neural network model obtains a classification result based on the MI-EEG signal to be classified.
The EEGNet convolution block comprises a first convolutional layer, a second convolutional layer and a third convolutional layer connected in sequence, and the first convolutional layer, the second convolutional layer and the third convolutional layer have different window sizes.
The convolutional attention module comprises a channel attention sub-module and a spatial attention sub-module;
the channel attention sub-module comprises an average pooling layer, a max pooling layer and a shared network, the shared network comprising a multi-layer perceptron with one hidden layer; the average pooling layer and the max pooling layer receive the input features and generate feature maps, and the shared network receives the feature maps and outputs channel-refined features;
the spatial attention sub-module comprises an average pooling layer, a max pooling layer and a convolutional layer, and the channel-refined features pass through the average pooling layer, the max pooling layer and the convolutional layer to output a refined feature map.
The classification system for MI-EEG signals further comprises a setting module, the setting module being configured to set different numbers of parameters for the EEGNet convolution blocks and the convolutional attention modules in the first branch, the second branch and the third branch respectively, so as to capture different features.
In the classification system for MI-EEG signals of the present invention, an acquisition module acquires an MI-EEG signal data set; a preprocessing module preprocesses the MI-EEG signal data set to obtain a preprocessed data set and divides the data set into a training set and a test set; a construction module constructs a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, each comprising an EEGNet convolution block and a convolutional attention module, the three branches being connected through a softmax layer; the multi-branch convolutional neural network model is trained with the training set; the performance of the trained multi-branch convolutional neural network is evaluated on the test set to obtain a target multi-branch convolutional neural network model; and the multi-branch convolutional neural network model obtains a classification result based on the MI-EEG signal to be classified. This solves the problem that existing traditional models cannot accomplish subject-specific classification tasks while keeping fixed hyperparameters.
Figure 9 is a schematic diagram of the physical structure of an electronic device provided by an embodiment of the present invention. As shown in Figure 9, the electronic device 70 comprises: a processor 701, a memory 702 and a bus 703;
wherein the processor 701 and the memory 702 communicate with each other through the bus 703;
the processor 701 is configured to call program instructions in the memory 702 to execute the methods provided by the above method embodiments, for example including: acquiring an MI-EEG signal data set, preprocessing the MI-EEG signal data set to obtain a preprocessed data set, and dividing the data set into a training set and a test set; constructing a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, each comprising an EEGNet convolution block and a convolutional attention module, the three branches being connected through a softmax layer; training the multi-branch convolutional neural network model with the training set; evaluating the performance of the trained multi-branch convolutional neural network on the test set to obtain a target multi-branch convolutional neural network model; and inputting the MI-EEG signal to be classified into the target multi-branch convolutional neural network model to obtain a classification result.
This embodiment provides a non-transitory computer-readable storage medium storing computer instructions that cause a computer to execute the methods provided by the above method embodiments, for example including: acquiring an MI-EEG signal data set, preprocessing the MI-EEG signal data set to obtain a preprocessed data set, and dividing the data set into a training set and a test set; constructing a multi-branch convolutional neural network model, wherein the multi-branch convolutional neural network model comprises a first branch, a second branch and a third branch, each comprising an EEGNet convolution block and a convolutional attention module, the three branches being connected through a softmax layer; training the multi-branch convolutional neural network model with the training set; evaluating the performance of the trained multi-branch convolutional neural network on the test set to obtain a target multi-branch convolutional neural network model; and inputting the MI-EEG signal to be classified into the target multi-branch convolutional neural network model to obtain a classification result.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments can be implemented by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as ROM, RAM, magnetic disks or optical disks.
The device embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.
From the description of the above embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by means of software plus the necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the part of the above technical solution that in essence contributes to the prior art can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk or an optical disk, and includes a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods of the various embodiments or of certain parts of the embodiments.
Although the present invention has been described in detail above with general descriptions and specific embodiments, it will be apparent to those skilled in the art that modifications or improvements can be made on the basis of the present invention. Therefore, such modifications or improvements made without departing from the spirit of the present invention all fall within the scope of protection claimed by the present invention.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310218867.3A CN116058852B (en) | 2023-03-09 | 2023-03-09 | Classification system, method, electronic device and storage medium for MI-EEG signals |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310218867.3A CN116058852B (en) | 2023-03-09 | 2023-03-09 | Classification system, method, electronic device and storage medium for MI-EEG signals |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116058852A CN116058852A (en) | 2023-05-05 |
CN116058852B true CN116058852B (en) | 2023-12-22 |
Family
ID=86169960
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310218867.3A Active CN116058852B (en) | 2023-03-09 | 2023-03-09 | Classification system, method, electronic device and storage medium for MI-EEG signals |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116058852B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110163180A (en) * | 2019-05-29 | 2019-08-23 | 长春思帕德科技有限公司 | Mental imagery eeg data classification method and system |
CN113469198A (en) * | 2021-06-30 | 2021-10-01 | 南京航空航天大学 | Image classification method based on improved VGG convolutional neural network model |
CN113610144A (en) * | 2021-08-02 | 2021-11-05 | 合肥市正茂科技有限公司 | Vehicle classification method based on multi-branch local attention network |
CN114266276A (en) * | 2021-12-25 | 2022-04-01 | 北京工业大学 | Motor imagery electroencephalogram signal classification method based on channel attention and multi-scale time domain convolution |
WO2022184124A1 (en) * | 2021-03-05 | 2022-09-09 | 腾讯科技(深圳)有限公司 | Physiological electrical signal classification and processing method and apparatus, computer device, and storage medium |
CN115211870A (en) * | 2022-08-23 | 2022-10-21 | 浙江大学 | A neonatal EEG signal convulsive discharge detection system based on multi-scale feature fusion network |
CN115481695A (en) * | 2022-09-26 | 2022-12-16 | 云南大学 | Motor imagery classification method by utilizing multi-branch feature extraction |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113693613B (en) * | 2021-02-26 | 2024-05-24 | 腾讯科技(深圳)有限公司 | Electroencephalogram signal classification method, electroencephalogram signal classification device, computer equipment and storage medium |
-
2023
- 2023-03-09 CN CN202310218867.3A patent/CN116058852B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110163180A (en) * | 2019-05-29 | 2019-08-23 | 长春思帕德科技有限公司 | Mental imagery eeg data classification method and system |
WO2022184124A1 (en) * | 2021-03-05 | 2022-09-09 | 腾讯科技(深圳)有限公司 | Physiological electrical signal classification and processing method and apparatus, computer device, and storage medium |
CN113469198A (en) * | 2021-06-30 | 2021-10-01 | 南京航空航天大学 | Image classification method based on improved VGG convolutional neural network model |
CN113610144A (en) * | 2021-08-02 | 2021-11-05 | 合肥市正茂科技有限公司 | Vehicle classification method based on multi-branch local attention network |
CN114266276A (en) * | 2021-12-25 | 2022-04-01 | 北京工业大学 | Motor imagery electroencephalogram signal classification method based on channel attention and multi-scale time domain convolution |
CN115211870A (en) * | 2022-08-23 | 2022-10-21 | 浙江大学 | A neonatal EEG signal convulsive discharge detection system based on multi-scale feature fusion network |
CN115481695A (en) * | 2022-09-26 | 2022-12-16 | 云南大学 | Motor imagery classification method by utilizing multi-branch feature extraction |
Non-Patent Citations (1)
Title |
---|
Analysis and intention recognition of motor imagery EEG signals based on a multi-feature convolutional neural network; He Qun; Shao Dandan; Wang Yuwen; Zhang Yuanyuan; Xie Ping; Chinese Journal of Scientific Instrument (Issue 01); full text *
Also Published As
Publication number | Publication date |
---|---|
CN116058852A (en) | 2023-05-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Liao et al. | Deep facial spatiotemporal network for engagement prediction in online learning | |
CN111814661B (en) | Human body behavior recognition method based on residual error-circulating neural network | |
CN115804602A (en) | EEG emotion signal detection method, device and medium based on multi-channel feature fusion of attention mechanism | |
CN111950455A (en) | A Feature Recognition Method of Motor Imagery EEG Signals Based on LFFCNN-GRU Algorithm Model | |
CN113133769A (en) | Equipment control method, device and terminal based on motor imagery electroencephalogram signals | |
CN113712573A (en) | Electroencephalogram signal classification method, device, equipment and storage medium | |
CN114170657B (en) | Facial emotion recognition method integrating attention mechanism and high-order feature representation | |
CN113143295A (en) | Equipment control method and terminal based on motor imagery electroencephalogram signals | |
CN115050452B (en) | Construction method and system of universal myoelectric movement intention recognition model | |
CN115131503A (en) | Health monitoring method and system for iris three-dimensional recognition | |
CN114564990A (en) | Electroencephalogram signal classification method based on multi-channel feedback capsule network | |
CN115393968A (en) | Audio-visual event positioning method fusing self-supervision multi-mode features | |
Shu et al. | Sparse autoencoders for word decoding from magnetoencephalography | |
Quach et al. | Evaluation of the efficiency of the optimization algorithms for transfer learning on the rice leaf disease dataset | |
CN116058852B (en) | Classification system, method, electronic device and storage medium for MI-EEG signals | |
Imah et al. | Detecting violent scenes in movies using gated recurrent units and discrete wavelet transform | |
CN115346091B (en) | Method and device for generating Mura defect image data set | |
CN114550047B (en) | Behavior rate guided video behavior recognition method | |
Korablyov et al. | Hybrid Neuro-Fuzzy Model with Immune Training for Recognition of Objects in an Image. | |
CN115859221A (en) | Human body activity recognition method based on multi-position sensor | |
CN114997228A (en) | Action detection method and device based on artificial intelligence, computer equipment and medium | |
CN114611556A (en) | Multi-class motor imagery task identification method based on graph neural network | |
CN114417911A (en) | Multi-channel action recognition method and device based on adaptive convolutional neural network | |
Cakar et al. | Multi adaptive hybrid networks (MAHNet): ensemble learning in convolutional neural network | |
CN118692154B (en) | 3D human body action prediction method based on correlation multi-scale graph clustering network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address |
Address after: 100089 no.2301-1, 2301-2, 23 / F, block D, Tsinghua Tongfang science and technology building, No.1 courtyard, Wangzhuang Road, Haidian District, Beijing
Patentee after: Tongxin Intelligent Medical Technology (Beijing) Co.,Ltd.
Country or region after: China
Address before: 100089 no.2301-1, 2301-2, 23 / F, block D, Tsinghua Tongfang science and technology building, No.1 courtyard, Wangzhuang Road, Haidian District, Beijing
Patentee before: Tongxin Zhiyi Technology (Beijing) Co.,Ltd.
Country or region before: China
CP03 | Change of name, title or address |