CN110069958B - Electroencephalogram signal rapid identification method of dense deep convolutional neural network - Google Patents

Electroencephalogram signal rapid identification method of dense deep convolutional neural network

Info

Publication number
CN110069958B
CN110069958B (application CN201810057413.1A)
Authority
CN
China
Prior art keywords
layer
convolution
output
input
pooling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810057413.1A
Other languages
Chinese (zh)
Other versions
CN110069958A (en)
Inventor
李阳
张先锐
雷梦颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201810057413.1A priority Critical patent/CN110069958B/en
Publication of CN110069958A publication Critical patent/CN110069958A/en
Application granted granted Critical
Publication of CN110069958B publication Critical patent/CN110069958B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12 Classification; Matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention proposes a fast recognition method for electroencephalogram (EEG) signals based on a dense deep convolutional neural network. Exploiting the temporal and spatial characteristics of motor imagery EEG signals, a feature-connection scheme is used within the convolutional neural network to design a network suited to motor imagery EEG. The designed network extracts temporal and spatial features simultaneously and connects the outputs of different convolutional layers to one another, which reduces the number of weights and achieves both resistance to overfitting and feature reuse. First, the filtered and resampled raw data are input into the dense deep convolutional neural network; next, the parameters of each layer are updated by backpropagation and stochastic gradient descent; finally, the network is tested by feeding the test data into the trained network and analyzing the outputs. Compared with the Shallow ConvNet method proposed in 2017, the present invention improves recognition accuracy by 5% and the kappa value by 0.066.

Description

A fast EEG signal recognition method based on a dense deep convolutional neural network

Technical Field

The present invention relates to the fast recognition of raw EEG signals, the design of convolutional neural networks suited to EEG signals, pattern classification, and deep learning, and belongs to the technical field of signal processing and pattern recognition.

Background Art

Brain-computer interface (BCI) technology establishes a connection between the human brain and external devices, enabling communication with and control of the external environment without relying on the body's muscles. The main processing pipeline of BCI technology consists of recording brain activity, processing the electroencephalogram (EEG) signal, recognizing the signal, and then controlling external devices according to the recognition result. Many types of EEG signal are suitable for BCI, such as P300, steady-state visual evoked potentials, and motor imagery. P300 is an EEG response evoked by an occasional, low-probability flicker stimulus, and steady-state visual evoked potentials are elicited by a picture flickering at a fixed frequency, whereas a motor imagery signal only requires the subject to imagine performing a movement of some body part, without actually executing it. Motor imagery EEG is easy to acquire, requires no external stimulation, and supports asynchronous communication, which has made it one of the most widely used EEG signal types.

For feature extraction from motor imagery EEG signals, the common spatial pattern (CSP) method is widely used, but its effectiveness depends on the frequency band specified for the algorithm. The filter bank common spatial pattern (FBCSP) algorithm, proposed on this basis, designs a filter bank to widen the frequency range, applies CSP to each filter, and then selects usable features from the filter bank outputs; it achieved a classification accuracy of 68% on the BCI Competition IV dataset 2a. However, FBCSP still requires prior knowledge to design the range of each frequency band and requires manual feature extraction before classification. Brain signals are highly complex, many recorded components have no clearly established meaning, and manually extracting features therefore risks losing information.

In recent years, with the rise of deep learning, convolutional neural networks (CNNs) have received broad attention from researchers and have achieved notable results in many fields such as images, speech, and video, so researchers began using CNNs to automatically extract features from motor imagery EEG signals for classification. The weight-sharing structure of a CNN both reduces the number of weights, lowering model complexity, and allows temporal and spatial features to be extracted. During training, the network relies on the backpropagation algorithm to update the parameters of each convolution kernel, turning the different layers into suitable feature extractors; this avoids hand-designed feature extractors and lets the network capture more features, improving classification accuracy. Unlike two-dimensional static images, motor imagery EEG is a dynamic time series recorded from the three-dimensional scalp, with a low signal-to-noise ratio that is easily corrupted by event-unrelated noise such as electrode disturbances, light stimulation, and the subject's eye movements. This makes training a CNN from raw motor imagery EEG difficult, so the network structure must be adapted to the characteristics of the EEG signal. In 2017, Schirrmeister et al. proposed the CNN architecture Shallow ConvNet, which takes the raw motor imagery EEG signal as input and, through temporal convolution, spatial convolution, average pooling, and related operations, outputs the probability of the input belonging to each class; it is an end-to-end automatic recognition method. End-to-end means the CNN requires no prior knowledge: it learns features from the raw motor imagery EEG signal and directly produces the final classification result. On the BCI Competition IV 2a dataset, Shallow ConvNet reaches 72% accuracy, 5% higher than the traditional FBCSP method, showing that CNNs can markedly improve the recognition accuracy of motor imagery EEG. Yet the model's accuracy is still limited: it uses only two convolutional layers, and simply deepening the model leads to severe overfitting, so deeper features cannot be extracted. The present invention studies how to adjust the model's connection pattern and hyperparameters according to the characteristics of motor imagery EEG, deepening the model while avoiding aggravated overfitting and improving the recognition accuracy of motor imagery EEG signals.

Summary of the Invention

Building on Shallow ConvNet, the present invention proposes a dense deep convolutional neural network. Tested on the BCI Competition IV 2a dataset, it reaches 77% accuracy, still 5% higher than the 72% achieved by Shallow ConvNet on the same dataset, a clear improvement in motor imagery EEG recognition accuracy. Compared with Shallow ConvNet, the dense connection method proposed here concatenates the input and output feature maps of the intermediate convolutional layers as the input to the next layer, so the feature maps produced by both convolutional layers pass directly to the next layer. This fully exploits the intermediate-layer features without increasing the parameter count, which is why the proposed model achieves higher accuracy.

The present invention designs a dense deep convolutional neural network that performs end-to-end recognition of raw motor imagery EEG signals, comprising the following steps:

(1) Apply a third-order 0-40 Hz band-pass filter to the EEG signal to remove acquisition noise and other irrelevant components;

(2) Resample the filtered signal. The data fed to the convolutional neural network must have a consistent length, i.e. the same amount of data for the same time span, so data recorded at different sampling frequencies must be resampled to a common frequency;

(3) Extract fixed-length events from the preprocessed data and obtain their corresponding labels;

(4) Design the dense deep convolutional neural network. Unlike a conventional CNN, which applies successive convolution and pooling operations, the dense convolutional neural network designed here concatenates the input and output feature maps of the two intermediate convolutional layers as the input to the next layer;

(5) Train the dense deep convolutional neural network. The squared-error function serves as the loss for measuring the discrepancy between predictions and labels; the parameters of every layer are updated by backpropagation and stochastic gradient descent, and training stops once the accuracy converges to a stable value or begins to fall;

(6) Test the dense deep convolutional neural network: the network parameters are frozen, the test data and labels are fed in, and the outputs are analyzed.

Step (4) comprises the following sub-steps:

① Data input layer: the input is a four-dimensional array n × m × 990 × 22, whose dimensions are the number of samples × number of feature maps × number of sampling points × number of channels;

② Temporal convolution layer: a convolution along the time dimension with an 11 × 1 kernel, producing 25 feature maps;

③ Spatial convolution layer: to reduce the data dimensionality, a spatial convolution maps all channels into one feature map with a 22 × 1 kernel; the result passes through the activation function, and the number of feature maps remains 25;

④ Pooling layer: the input is the activation output of the previous layer; the pooling kernel is 3 × 1 with a stride of 3;

⑤ Feature connection layer: the pooled data first passes through two consecutive convolution layers; the first uses 200 kernels of size 1 × 1 and the second 50 kernels of size 11 × 1. The 50 activated output feature maps of the second convolution are concatenated with the 25 input feature maps of the first convolution, giving 75 output feature maps in total;

⑥ Pooling layer: same operation as ④;

⑦ The last two layers of the network are a fully connected layer and an output layer. The fully connected layer flattens the pooled output of the previous layer into one-dimensional data; there are four label classes, so the output layer has four neurons, and the output is the probability of the input belonging to each class.

Note that after every convolution in the network, the data undergoes a batch normalization operation before entering the activation function; Ioffe et al. showed that batch normalization in deep neural networks markedly reduces the number of iterations needed, as well as the computation time per iteration. A hedged PyTorch sketch of this architecture is given below.
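Purely for illustration, the following PyTorch sketch assembles the layers of sub-steps ①-⑦ (PyTorch is our choice, not specified by the patent; class and variable names are ours). Two details are assumptions: the 22 × 1 spatial kernel is written as (1, 22) because this layout keeps channels on the last axis, and the second 11 × 1 convolution is given padding (5, 0) so that its output length matches the block input for the concatenation.

```python
import torch
import torch.nn as nn

class DenseEEGNet(nn.Module):
    # Sketch of sub-steps 1-7; shapes assume inputs of 990 sampling points
    # x 22 channels, laid out as (N, 1, 990, 22).
    def __init__(self, n_classes=4):
        super().__init__()
        self.temporal = nn.Conv2d(1, 25, (11, 1), bias=False)   # step 2: time conv, no bias/activation
        self.bn1 = nn.BatchNorm2d(25)
        self.spatial = nn.Conv2d(25, 25, (1, 22), bias=False)   # step 3: collapse the 22 channels
        self.bn2 = nn.BatchNorm2d(25)
        self.pool1 = nn.MaxPool2d((3, 1), stride=(3, 1))        # step 4
        self.conv1x1 = nn.Conv2d(25, 200, 1)                    # step 5, first conv
        self.bn3 = nn.BatchNorm2d(200)
        # padding (5, 0) is an assumption: it keeps the time length unchanged
        # so the 50 output maps can be concatenated with the 25 block inputs
        self.conv11 = nn.Conv2d(200, 50, (11, 1), padding=(5, 0))
        self.bn4 = nn.BatchNorm2d(50)
        self.pool2 = nn.MaxPool2d((3, 1), stride=(3, 1))        # step 6
        self.fc = nn.Linear(75 * 108, n_classes)                # step 7
        self.act = nn.ReLU()

    def forward(self, x):                               # x: (N, 1, 990, 22)
        x = self.bn1(self.temporal(x))                  # -> (N, 25, 980, 22)
        x = self.act(self.bn2(self.spatial(x)))         # -> (N, 25, 980, 1)
        x0 = self.pool1(x)                              # -> (N, 25, 326, 1)
        h = self.act(self.bn3(self.conv1x1(x0)))        # -> (N, 200, 326, 1)
        h = self.act(self.bn4(self.conv11(h)))          # -> (N, 50, 326, 1)
        x = torch.cat([x0, h], dim=1)                   # dense connection -> 75 maps
        x = self.pool2(x)                               # -> (N, 75, 108, 1)
        return torch.sigmoid(self.fc(x.flatten(1)))     # per-class probabilities

probs = DenseEEGNet()(torch.randn(2, 1, 990, 22))       # -> (2, 4)
```

With these shapes, the flattened feature vector entering the fully connected layer has 75 × 108 = 8100 elements.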

Step (5) proceeds as follows:

① Split off 20% of the training set of the BCI Competition IV 2a dataset as a validation set, keeping the remaining 80% for training; the test set is unchanged;

② First training phase: train iteratively on the training set, then test on the validation set. Stop when the validation accuracy no longer changes or begins to fall; record the best validation accuracy observed and save the model. Repeated experiments showed that model accuracy stops changing after 1000 iterations on this dataset, so for BCI Competition IV 2a the maximum number of iterations is set to 1000;

③ Second training phase: merge the training and validation sets into a new training set and continue training, still testing on the original validation set, until the validation accuracy exceeds the value recorded in ② and stabilizes, or until 1000 iterations are reached;

④ After the second phase, save the model, feed in the test data and labels, and record the test results. A hedged sketch of this two-phase scheme follows.
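A minimal sketch of the two-phase training scheme, assuming PyTorch data loaders that yield (input, one-hot label) batches; plain SGD and the squared-error loss follow the description, while the learning rate value and the keep-best bookkeeping are assumptions (the patent stops phase 1 once validation accuracy plateaus; running to max_epochs and keeping the best model is a simplification).

```python
import copy
import torch
import torch.nn.functional as F

def run_epoch(model, loader, opt=None):
    # One pass over loader; trains if an optimizer is given. Returns accuracy.
    model.train(opt is not None)        # keep batch-norm statistics consistent
    correct = total = 0
    for x, y in loader:                 # y: one-hot labels of shape (N, 4)
        out = model(x)
        if opt is not None:
            loss = F.mse_loss(out, y)   # squared-error cost, eq. (1)
            opt.zero_grad(); loss.backward(); opt.step()
        correct += (out.argmax(1) == y.argmax(1)).sum().item()
        total += len(y)
    return correct / total

def train_two_phase(model, train_dl, val_dl, merged_dl, max_epochs=1000, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr)    # lr is an assumed value
    best_acc, best_state = 0.0, copy.deepcopy(model.state_dict())
    for _ in range(max_epochs):                         # phase 1: 80/20 split
        run_epoch(model, train_dl, opt)
        with torch.no_grad():
            acc = run_epoch(model, val_dl)
        if acc > best_acc:                              # keep best validation model
            best_acc, best_state = acc, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
    for _ in range(max_epochs):                         # phase 2: train + val merged
        run_epoch(model, merged_dl, opt)
        with torch.no_grad():
            if run_epoch(model, val_dl) > best_acc:
                break
    return model
```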

The advantages of the proposed fast recognition method for motor imagery EEG signals include:

① The raw EEG data needs only simple filtering and resampling before being used to train the neural network. The method is end-to-end: no time-frequency analysis of the signal is required, nor prior knowledge for hand-picking features, thereby avoiding information loss;

② Compared with the Shallow ConvNet model, the proposed dense convolutional neural network has more layers and can extract deeper features;

③ The invention introduces a feature connection layer that concatenates the input and output of the intermediate convolutional layers before passing them to the next layer. This not only mitigates the vanishing-gradient problem but also supports feature reuse without increasing the parameter count, effectively suppressing overfitting when data are scarce;

④ The model generalizes well: for a different dataset, only the input and output layer parameters need changing and a few hyperparameters of the other layers need fine-tuning for the model to apply to similar motor imagery EEG datasets.

Brief Description of the Drawings

FIG. 1 is a structural diagram of the dense deep convolutional neural network proposed by the present invention.

FIG. 2 is a flowchart of the data processing, training, and testing procedure of the present invention.

FIG. 3 is the confusion matrix of the classification results of Shallow ConvNet on BCI Competition IV 2a.

FIG. 4 is the confusion matrix of the classification results of the proposed network on BCI Competition IV 2a.

Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. The invention mainly exploits the weight-sharing and local receptive field ideas of convolutional neural networks, concatenating the channels of the intermediate layers' output feature maps to raise recognition accuracy. Every neuron of a convolutional layer applies the same kernel when convolving the different feature maps, which greatly reduces the number of weight parameters; after the convolution, connecting the convolution's input to its output enables feature reuse, so only very few new feature maps are produced per convolution, reducing redundancy. The dense deep convolutional neural network of the present invention backpropagates the error with stochastic gradient descent to adjust the convolution kernel weights, and finally obtains, through the fully connected and linear classification layers, the probability that the input belongs to each class.

The convolutional neural network feature extraction and classification method of the present invention comprises the following steps:

(1) Obtain the data. The data come from BCI Competition IV Dataset 2a, provided by the Berlin BCI group in 2008. The dataset comprises 9 recordings from 9 healthy subjects, each split into training and test data. Each subject performed four classes of motor imagery: left hand, right hand, feet, and tongue;

(2) Preprocess the data. The sampling frequency is 100 Hz; a 3rd-order Butterworth filter from the SciPy signal-processing toolbox, configured as a 0-40 Hz band-pass, filters out high-frequency components and part of the noise (a hedged SciPy sketch follows);
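A hedged sketch of this preprocessing using SciPy (function and parameter names are ours, not the patent's); since the 0-40 Hz band starts at 0 Hz, the Butterworth filter is realized as a low-pass.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, resample

def preprocess(eeg, fs_in, fs_out=100.0, f_hi=40.0, order=3):
    # eeg: array of shape (channels, samples) recorded at fs_in Hz.
    # A 0-40 Hz "band-pass" with a 0 Hz lower edge reduces to a low-pass.
    sos = butter(order, f_hi, btype='lowpass', fs=fs_in, output='sos')
    filtered = sosfiltfilt(sos, eeg, axis=-1)
    n_out = int(round(filtered.shape[-1] * fs_out / fs_in))  # same duration, common rate
    return resample(filtered, n_out, axis=-1)

trial = preprocess(np.random.randn(22, 2500), fs_in=250.0)   # -> (22, 1000)
```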

(3) Design the dense deep convolutional neural network; its structure is shown in FIG. 1. The first layer is the input layer, with input data of size 990 × 22: 22 channels and 990 sampling points. The second layer is the temporal convolution layer, which convolves along the time dimension with 11 × 1 kernels, producing 25 feature maps. The third layer is the spatial convolution layer with 22 × 1 kernels; after the ReLU activation function it yields 25 feature maps. The fourth layer is a pooling layer with a 3 × 1 pooling window, max pooling, and a stride of 3. Next comes the feature connection layer: first a 1 × 1 convolution with 200 kernels, then an 11 × 1 convolution with 50 kernels, whose output enters the subsequent pooling layer after the feature connection. The last two layers of the network are the fully connected and linear classification layers; since the data labels have four classes, there are four output units after the full connection, and the output is the probability of the input belonging to each class;

(4) Train the dense deep convolutional neural network;

The network is evaluated with the squared-error cost function:

$E_N = \frac{1}{2}\sum_{n=1}^{N}\sum_{k=1}^{c}\left(t_k^n - y_k^n\right)^2 \qquad (1)$

where $N$ is the number of samples, $c$ the number of classes, $t_k^n$ the $k$-th dimension of the label of the $n$-th sample, and $y_k^n$ the $k$-th network output for the $n$-th sample.
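For example, equation (1) can be evaluated with a few lines of NumPy (all values illustrative):

```python
import numpy as np

t = np.eye(4)[[0, 2, 1]]                 # one-hot labels t_k^n, N = 3, c = 4
y = np.array([[0.7, 0.1, 0.1, 0.1],      # network outputs y_k^n
              [0.2, 0.1, 0.6, 0.1],
              [0.3, 0.4, 0.2, 0.1]])
E = 0.5 * np.sum((t - y) ** 2)           # squared-error cost of equation (1)
```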

The first layer is the data input layer; the input format is fixed and the layer has no trainable parameters.

The second layer is the temporal convolution layer. One batch of samples is input at a time and convolved with 25 randomly initialized kernels of size 11 × 1 and stride 1, giving 25 feature maps; this layer has no bias and no activation function:

$x_j^l = \sum_{i \in M_j} x_i^{l-1} * k_{ij}^l \qquad (2)$

where $x_j^l$ is the $j$-th feature map of layer $l$, $M_j$ is the set of input feature maps, and $k_{ij}^l$ is the convolution kernel connecting the $i$-th input feature map to the $j$-th output feature map.

The third layer is the spatial convolution layer. Its input is the output of the previous convolution; the kernel size is 22 × 1 and both input and output have 25 feature maps. This layer has an activation function but no bias term:

$x_j^l = f\!\left(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^l\right) \qquad (3)$

where $x_j^l$ is the $j$-th feature map of layer $l$, $M_j$ the set of input feature maps, $k_{ij}^l$ the kernel chosen for this layer, and $f$ the ReLU activation function, i.e. $f(x) = \max(0, x)$.

The fourth layer is a pooling layer of size 3 × 1 with stride 3. The input is 25 feature maps; pooling does not change the number of feature maps, so 25 output feature maps are obtained:

$x_j^l = f\!\left(\beta_j^l\,\mathrm{down}\!\left(x_j^{l-1}\right) + b_j^l\right) \qquad (4)$

where $x_j^{l-1}$ is the $j$-th feature map of layer $l-1$, $\beta_j^l$ is the multiplicative weight and $b_j^l$ the bias of layer $l$, and $f$ is the activation function of the pooling layer; there is no activation here, so $f(x) = x$. $\mathrm{down}(\cdot)$ denotes a down-sampling function; here the largest of every 3 adjacent values is taken, so the down-sampling function is $\max(\cdot)$.

The fifth layer is the feature connection layer, which contains two convolutional layers. The first takes the 25 pooled feature maps as input and convolves them with 1 × 1 kernels at stride 1, producing 200 feature maps that pass through the ReLU activation into the next convolutional layer. The second uses 11 × 1 kernels at stride 1, with 200 input feature maps and 50 output feature maps; after ReLU activation, the 50 output feature maps of the second convolution are concatenated with the 25 input feature maps of the first convolution to form the new output feature maps:

$x_l = H_l([x_0, x_1]) \qquad (5)$

where $H_l(\cdot)$ denotes the feature concatenation operation, $x_0$ is the input of the first convolution of the feature connection layer, and $x_1$ is the output of its second convolution.
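In tensor terms, equation (5) is a channel-wise concatenation; the sketch below uses illustrative shapes (batch of 8, post-pooling time length assumed):

```python
import torch

x0 = torch.randn(8, 25, 326, 1)   # block input: 25 feature maps
x1 = torch.randn(8, 50, 326, 1)   # second convolution's output: 50 feature maps
xl = torch.cat([x0, x1], dim=1)   # H_l([x0, x1]) -> (8, 75, 326, 1), 75 maps total
```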

The sixth layer is a pooling layer whose input is the concatenated feature maps; the pooling size is 3 × 1 and the stride is 3, the same operation as the fourth layer.

The seventh layer is the fully connected layer, which flattens the feature maps of the previous layer and fully connects them, converting them into one-dimensional data.

Finally, the output layer produces the required classification results through a sigmoid activation function.

(5) Compute and propagate the error of each layer;

① For the final output layer, the error between the activation produced by the network and the actual value can be computed directly:

$\delta_i^{(n_l)} = \frac{\partial}{\partial z_i^{(n_l)}}\,\frac{1}{2}\left\|y - h_{w,b}(x)\right\|^2 = -\left(y_i - a_i^{(n_l)}\right) \cdot f'\!\left(z_i^{(n_l)}\right) \qquad (6)$

where layer $n_l$ denotes the output layer, $z^{(n_l)}$ the weighted input of the output layer before the activation function, $h_{w,b}(x)$ the output result, $y$ the target output, $a_i^{(n_l)}$ the $i$-th output of layer $n_l$, and $f'(\cdot)$ the derivative.

② For the errors of layers $l = n_l-1, n_l-2, n_l-3, \dots, 2$, the general formula is:

$\delta^{(l)} = \left(\left(w^{(l+1)}\right)^{T} \delta^{(l+1)}\right) \odot f'\!\left(u^{(l)}\right) \qquad (7)$

where $w^{(l+1)}$ is the weight of layer $l+1$, $\delta^{(l+1)}$ the error computed for layer $l+1$, the symbol $\odot$ denotes element-wise multiplication, and $f'(u^{(l)})$ the derivative with respect to the layer's output $u^{(l)}$.
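As a small numerical illustration of equation (7) for a fully connected layer with ReLU (all shapes and values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
w_next = rng.standard_normal((4, 8))            # weights w^(l+1): 8 -> 4 units
delta_next = rng.standard_normal((4, 1))        # error of layer l+1
u_l = rng.standard_normal((8, 1))               # layer l output before f
relu_grad = (u_l > 0).astype(float)             # f'(u^l) for ReLU
delta_l = (w_next.T @ delta_next) * relu_grad   # equation (7)
```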

If layer $l$ is a convolutional layer followed by a pooling layer, one pixel of the pooling layer corresponds to a 3 × 1 block of pixels in the convolutional layer's output map, so the sizes do not match; the pooling layer must therefore be up-sampled:

$\delta_j^{(l)} = \beta_j^{(l+1)} \left( f'\!\left(u_j^{(l)}\right) \odot \mathrm{up}\!\left(\delta_j^{(l+1)}\right) \right) \qquad (8)$

where $\mathrm{up}(\cdot)$ denotes the up-sampling operation, $\beta_j^{(l+1)}$ the weight of layer $l+1$, and $\delta^{(l+1)}$ the error computed for layer $l+1$.

If layer $l$ is a down-sampling layer and layer $l+1$ a convolutional layer, the formula is:

$\delta_j^{(l)} = f'\!\left(u_j^{(l)}\right) \odot \mathrm{conv2}\!\left(\delta_j^{(l+1)}, \mathrm{rot180}\!\left(k_j^{(l+1)}\right), \text{'full'}\right) \qquad (9)$

where $\mathrm{conv2}$ is the convolution function and $\mathrm{rot180}$ denotes rotating the convolution kernel by 180 degrees.

(6) Compute the required partial derivatives and update the weight parameters:

$w_{ij}^{\mathrm{new}} = w_{ij}^{\mathrm{old}} - \eta \frac{\partial E}{\partial w_{ij}} \qquad (10)$

$b_{i}^{\mathrm{new}} = b_{i}^{\mathrm{old}} - \eta \frac{\partial E}{\partial b_{i}} \qquad (11)$

where $w_{ij}^{\mathrm{old}}$ is the old weight, $w_{ij}^{\mathrm{new}}$ the new weight, $\eta$ the learning rate, $b_i^{\mathrm{old}}$ the old bias, and $b_i^{\mathrm{new}}$ the new bias.
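Equations (10) and (11) are the standard stochastic gradient descent update; in NumPy (values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
w_old, dE_dw = rng.standard_normal((25, 11)), rng.standard_normal((25, 11))
b_old, dE_db = 0.0, 0.1
eta = 0.01                      # learning rate (illustrative value)
w_new = w_old - eta * dE_dw     # equation (10)
b_new = b_old - eta * dE_db     # equation (11)
```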

① For a convolutional layer, the weight update formula is:

$\frac{\partial E}{\partial k_{ij}^{l}} = \sum_{u,v} \left(\delta_j^{l}\right)_{uv} \left(p_i^{l-1}\right)_{uv} \qquad (12)$

where $\left(p_i^{l-1}\right)_{uv}$ is the patch of $x_i^{l-1}$ that is convolved with $k_{ij}$ during the convolution, $(u, v)$ is the patch center, and the value at position $(u, v)$ of the output feature map is the value obtained by convolving the patch at position $(u, v)$ of the input feature map with the kernel $k_{ij}$.
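For a valid two-dimensional convolution, the patch sum of equation (12) is exactly a cross-correlation of the input feature map with the error map, as this NumPy/SciPy sketch shows (sizes illustrative):

```python
import numpy as np
from scipy.signal import correlate2d

x_prev = np.arange(16.0).reshape(4, 4)    # input feature map x_i^(l-1)
delta = np.ones((3, 3))                   # error map delta_j^l of a valid conv with a 2x2 kernel
dE_dk = correlate2d(x_prev, delta, mode='valid')   # equation (12): kernel gradient, shape (2, 2)
```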

② For a pooling layer, the weight update formula is:

$\frac{\partial E}{\partial \beta_j} = \sum_{u,v} \left(\delta_j^{l} \odot d_j^{l-1}\right)_{uv} \qquad (13)$

where $d_j^{l-1} = \mathrm{down}\!\left(x_j^{l-1}\right)$.

(7) Test the network: feed in the test data and true labels, compare the outputs with the true labels, and analyze the model via the confusion matrix of the results.

The effect of the present invention is further illustrated by the experimental results. The test data are the official BCI Competition IV 2a data. The dataset contains nine subjects, each with a training set and a test set. The data have four label classes: motor imagery of the left hand, right hand, feet, and tongue. Each class has 72 samples, so each subject's training set and test set contain 288 samples each. FIGS. 3 and 4 show the confusion matrices obtained with Shallow ConvNet and with the method of the present invention. From FIG. 3, Shallow ConvNet achieves 72% accuracy and a kappa value of 0.632; from FIG. 4, the present invention achieves 77% accuracy and a kappa value of 0.698. Compared with Shallow ConvNet, the dense deep convolutional neural network proposed here is 5% more accurate and its kappa value is 0.066 higher, showing a clear improvement in motor imagery EEG classification. Accuracy and kappa can be read off a confusion matrix as in the sketch below.
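The helper below computes both metrics from a confusion matrix; the matrix shown is illustrative, not the patent's actual figures.

```python
import numpy as np

def accuracy_and_kappa(cm):
    # cm: confusion matrix, rows = true class, columns = predicted class.
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n                                   # observed agreement (accuracy)
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return p_o, (p_o - p_e) / (1 - p_e)                      # accuracy, Cohen's kappa

cm = [[60, 4, 4, 4], [5, 58, 5, 4], [4, 5, 59, 4], [3, 4, 4, 61]]
acc, kappa = accuracy_and_kappa(cm)
```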

Claims (3)

1. A motor imagery electroencephalogram signal fast recognition method using a dense deep convolutional neural network, characterized by comprising the following steps:
A) inputting a motor imagery electroencephalogram signal into a densely connected deep convolutional neural network, wherein the densely connected deep convolutional neural network comprises:
an input layer as the first layer, whose input data is 990 × 22, i.e. 22 channels × 990 sampling points,
a temporal convolution layer as the second layer, which performs a convolution of the data along the time dimension with a convolution kernel size of 11 × 1 and generates 25 feature maps,
a spatial convolution layer as the third layer, with a convolution kernel size of 22 × 1, which yields 25 feature maps after the ReLU activation function,
a first pooling layer as the fourth layer, with a pooling range of 3 × 1, using max pooling with a stride of 3,
a feature connection layer after the first pooling layer,
a second pooling layer receiving the output of the feature connection layer,
a fully-connected layer after the second pooling layer,
a linear classification layer after the fully connected layer, wherein the data labels have four classes, four output units are arranged after the full connection, and the output result is the probability of the input data belonging to each class,
wherein:
the temporal convolution layer inputs one batch of samples at a time and convolves the input with 25 randomly initialized convolution kernels of size 11 × 1 and stride 1, producing 25 feature maps, the layer being free of bias and activation functions, as follows:
$x_j^l = \sum_{i \in M_j} x_i^{l-1} * k_{ij}^l \qquad (2)$
wherein $x_j^l$ is the $j$-th feature map of layer $l$, $M_j$ is the set of input feature maps, and $k_{ij}^l$ is the convolution kernel for the connection between the $i$-th input feature map and the $j$-th output feature map,
the input of the spatial convolution layer is the output of the temporal convolution layer, the size of the convolution kernel is 22 × 1, the number of input and output feature maps is both 25, and the layer has an activation function and no bias term, according to the formula:
$x_j^l = f\!\left(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^l\right) \qquad (3)$
wherein $x_j^l$ is the $j$-th feature map of layer $l$, $M_j$ is the set of input feature maps, $k_{ij}^l$ is the kernel selected for this layer, and $f$ is the ReLU activation function, i.e. $f(x) = \max(0, x)$,
the input of the first pooling layer is 25 feature maps; pooling does not change the number of feature maps, so 25 output feature maps are obtained, according to the formula:
$x_j^l = f\!\left(\beta_j^l\,\mathrm{down}\!\left(x_j^{l-1}\right) + b_j^l\right) \qquad (4)$
wherein $x_j^{l-1}$ is the $j$-th feature map of layer $l-1$, $\beta_j^l$ and $b_j^l$ are the layer weight and bias, and $f$ is the activation function of the pooling layer; there is no activation function here, so $f(x) = x$; $\mathrm{down}(\cdot)$ denotes a down-sampling function, here taking the largest of 3 adjacent values, so the down-sampling function is $\max(\cdot)$,
the feature connection layer comprises two convolutional layers, wherein: the input of the first convolutional layer is the 25 feature maps pooled by the first pooling layer, which are convolved with kernels of size 1 × 1 and stride 1 to generate 200 feature maps that are then activated by the ReLU function and enter the second convolutional layer; the convolution kernel size of the second convolutional layer is 11 × 1 with stride 1, its input is the 200 feature maps output by the first convolutional layer and its output is 50 feature maps; the 50 output feature maps of the second convolution are activated by ReLU and concatenated with the 25 feature maps input to the first convolution to serve as the new output feature maps, according to the formula:
$x_l = H_l([x_0, x_1]) \qquad (5)$
wherein $H_l(\cdot)$ denotes the concatenation operation, $x_0$ is the input of the first convolution, and $x_1$ is the output of the second convolution,
the input of the second pooling layer is the new output feature maps after the concatenation, the pooling size is 3 × 1, and the stride is 3,
the fully connected layer flattens and fully connects the feature maps output by the second pooling layer, converting them into one-dimensional data,
the linear classification layer outputs probability values of input data belonging to various categories through a sigmoid activation function,
B) calculating and propagating errors for each layer, including:
directly calculating, for the final output layer, the error between the activation value generated by the network and the actual value, wherein the formula is as follows:
$\delta_i^{(n_l)} = \frac{\partial}{\partial z_i^{(n_l)}}\,\frac{1}{2}\left\|y - h_{w,b}(x)\right\|^2 = -\left(y_i - a_i^{(n_l)}\right) \cdot f'\!\left(z_i^{(n_l)}\right) \qquad (6)$
wherein layer $n_l$ denotes the output layer, $z^{(n_l)}$ denotes the weighted input of the output layer before the activation function, $h_{w,b}(x)$ denotes the output result, $y$ denotes the target output, $a_i^{(n_l)}$ denotes the $i$-th output of the output layer, and $f'(\cdot)$ denotes the derivative,
for the errors of layers $l = n_l-1, n_l-2, n_l-3, \dots, 2$, the general formula is:
$\delta^{(l)} = \left(\left(w^{(l+1)}\right)^{T} \delta^{(l+1)}\right) \odot f'\!\left(u^{(l)}\right) \qquad (7)$
wherein $w^{(l+1)}$ is the weight of layer $l+1$, $\delta^{(l+1)}$ is the error computed for layer $l+1$, the symbol $\odot$ denotes element-wise multiplication, and $f'(u^{(l)})$ denotes the derivative with respect to the layer output $u^{(l)}$,
when layer $l$ is a convolutional layer and the layer below it is a pooling layer, one pixel of the pooling layer corresponds to one 3 × 1 block of pixels of the convolutional layer's output map; to eliminate the size mismatch, the pooling layer is up-sampled by the formula:
$\delta_j^{(l)} = \beta_j^{(l+1)} \left( f'\!\left(u_j^{(l)}\right) \odot \mathrm{up}\!\left(\delta_j^{(l+1)}\right) \right) \qquad (8)$
wherein $\mathrm{up}(\cdot)$ denotes an up-sampling operation, $\beta_j^{(l+1)}$ denotes the weight of layer $l+1$, and $\delta^{(l+1)}$ is the error computed for layer $l+1$,
when the l-th layer is a down-sampling layer, and the l + 1-th layer is a convolutional layer, the error formula is:
$\delta_j^{(l)} = f'\!\left(u_j^{(l)}\right) \odot \mathrm{conv2}\!\left(\delta_j^{(l+1)}, \mathrm{rot180}\!\left(k_j^{(l+1)}\right), \text{'full'}\right) \qquad (9)$
wherein $\mathrm{conv2}$ is the convolution function and $\mathrm{rot180}$ denotes flipping the convolution kernel by 180 degrees,
C) updating the weight parameters by using the formulas:
$w_{ij}^{\mathrm{new}} = w_{ij}^{\mathrm{old}} - \eta \frac{\partial E}{\partial w_{ij}} \qquad (10)$
$b_{i}^{\mathrm{new}} = b_{i}^{\mathrm{old}} - \eta \frac{\partial E}{\partial b_{i}} \qquad (11)$
wherein $w_{ij}^{\mathrm{old}}$ denotes the old weight, $w_{ij}^{\mathrm{new}}$ denotes the new weight, $\eta$ is the learning rate, $b_i^{\mathrm{old}}$ denotes the old bias, and $b_i^{\mathrm{new}}$ denotes the new bias,
wherein:
for convolutional layers, the weight update formula is:
$\frac{\partial E}{\partial k_{ij}^{l}} = \sum_{u,v} \left(\delta_j^{l}\right)_{uv} \left(p_i^{l-1}\right)_{uv} \qquad (12)$
wherein $\left(p_i^{l-1}\right)_{uv}$ is the patch of $x_i^{l-1}$ convolved with $k_{ij}$ during the convolution, $(u, v)$ is the patch center, and the value at position $(u, v)$ of the output feature map is the value obtained by convolving the patch at position $(u, v)$ of the input feature map with the kernel $k_{ij}$,
for the pooling layer, the weight updating formula is as follows:
$\frac{\partial E}{\partial \beta_j} = \sum_{u,v} \left(\delta_j^{l} \odot d_j^{l-1}\right)_{uv} \qquad (13)$
wherein $d_j^{l-1} = \mathrm{down}\!\left(x_j^{l-1}\right)$.
2. The motor imagery electroencephalogram signal fast recognition method of the dense deep convolutional neural network of claim 1, wherein the following operations are performed before step A): performing third-order 0-40 Hz band-pass filtering on the originally acquired motor imagery electroencephalogram signals to acquire the signal components of the wider frequency band related to the motor imagery electroencephalogram signals and the motor imagery;
resampling the filtered signal components, sampling data of different sampling frequencies to the same frequency, so as to keep the data length input to the convolutional neural network consistent and to ensure that the amount of data over the same time span is the same;
and intercepting fixed-length events from the preprocessed data and acquiring their corresponding labels, as the motor imagery electroencephalogram signals input into the densely connected deep convolutional neural network.
3. The motor imagery electroencephalogram signal rapid identification method of the dense deep convolutional neural network of claim 1, wherein:
the four categories of data tags are the motor imagery of the left hand, right hand, feet, and tongue, respectively.
CN201810057413.1A 2018-01-22 2018-01-22 Electroencephalogram signal rapid identification method of dense deep convolutional neural network Active CN110069958B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810057413.1A CN110069958B (en) 2018-01-22 2018-01-22 Electroencephalogram signal rapid identification method of dense deep convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810057413.1A CN110069958B (en) 2018-01-22 2018-01-22 Electroencephalogram signal rapid identification method of dense deep convolutional neural network

Publications (2)

Publication Number Publication Date
CN110069958A CN110069958A (en) 2019-07-30
CN110069958B true CN110069958B (en) 2022-02-01

Family

ID=67364510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810057413.1A Active CN110069958B (en) 2018-01-22 2018-01-22 Electroencephalogram signal rapid identification method of dense deep convolutional neural network

Country Status (1)

Country Link
CN (1) CN110069958B (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458066B (en) * 2019-07-31 2022-11-18 同济大学 An age classification method based on resting-state EEG data
CN110543831A (en) * 2019-08-13 2019-12-06 同济大学 A brain pattern recognition method based on convolutional neural network
CN110555468A (en) * 2019-08-15 2019-12-10 武汉科技大学 Electroencephalogram signal identification method and system combining recursion graph and CNN
CN110796175A (en) * 2019-09-30 2020-02-14 武汉大学 An online classification method of EEG data based on lightweight convolutional neural network
CN110765920B (en) * 2019-10-18 2023-03-24 西安电子科技大学 Motor imagery classification method based on convolutional neural network
CN111012336B (en) * 2019-12-06 2022-08-23 重庆邮电大学 Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion
CN111368884B (en) * 2020-02-22 2023-04-07 杭州电子科技大学 Motor imagery electroencephalogram feature extraction method based on matrix variable Gaussian model
CN111339975B (en) * 2020-03-03 2023-04-21 华东理工大学 Object Detection, Recognition and Tracking Method Based on Central Scale Prediction and Siamese Neural Network
CN111428648B (en) * 2020-03-26 2023-03-28 五邑大学 Electroencephalogram signal generation network, method and storage medium
CN111709267B (en) * 2020-03-27 2022-03-29 吉林大学 Electroencephalogram signal emotion recognition method of deep convolutional neural network
CN111407269A (en) * 2020-03-30 2020-07-14 华南理工大学 A Reinforcement Learning-Based EEG Signal Emotion Recognition Method
CN111638249B (en) * 2020-05-31 2022-05-17 天津大学 Water cut measurement method based on deep learning and its application in oil well production
CN111783857A (en) * 2020-06-18 2020-10-16 内蒙古工业大学 Brain-computer interface for motor imagery based on nonlinear network infographics
CN111882036B (en) * 2020-07-22 2023-10-31 广州大学 Convolutional neural network training method, EEG signal recognition method, device and medium
CN112528819B (en) * 2020-12-05 2023-01-20 西安电子科技大学 P300 electroencephalogram signal classification method based on convolutional neural network
CN112633365B (en) * 2020-12-21 2024-03-19 西安理工大学 Mirror convolution neural network model and motor imagery electroencephalogram recognition algorithm
CN112890828A (en) * 2021-01-14 2021-06-04 重庆兆琨智医科技有限公司 Electroencephalogram signal identification method and system for densely connecting gating network
CN112818876B (en) * 2021-02-04 2022-09-20 成都理工大学 Electromagnetic signal extraction and processing method based on deep convolutional neural network
CN113057653B (en) * 2021-03-19 2022-11-04 浙江科技学院 Channel mixed convolution neural network-based motor electroencephalogram signal classification method
CN113642528B (en) * 2021-09-14 2022-12-09 西安交通大学 Hand movement intention classification method based on convolutional neural network
CN113791691B (en) * 2021-09-18 2022-05-20 中国科学院自动化研究所 Electroencephalogram signal band positioning method and device
CN114004257B (en) * 2021-11-02 2024-11-22 南京邮电大学 Myoelectric gesture recognition method based on lightweight convolutional neural network
CN114652326B (en) * 2022-01-30 2024-06-14 天津大学 Real-time brain fatigue monitoring device and data processing method based on deep learning
CN114781441B (en) * 2022-04-06 2024-01-26 电子科技大学 EEG motor imagery classification method and multi-spatial convolutional neural network model
CN114818803B (en) * 2022-04-25 2024-11-05 上海韶脑传感技术有限公司 EEG modeling method for motor imagery in patients with unilateral limbs based on neuron optimization
CN115355948A (en) * 2022-09-01 2022-11-18 山西农业大学 A method for detecting body size, body weight and backfat thickness of sows
CN115337026B (en) * 2022-10-19 2023-03-10 之江实验室 Convolutional neural network-based EEG signal feature retrieval method and device
CN116630697B (en) * 2023-05-17 2024-04-05 安徽大学 An image classification method based on biased selection pooling
CN117763399B (en) * 2024-02-21 2024-05-14 电子科技大学 A neural network classification method for adaptive variable-length signal input


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10417529B2 (en) * 2015-09-15 2019-09-17 Samsung Electronics Co., Ltd. Learning combinations of homogenous feature arrangements
US20170132511A1 (en) * 2015-11-10 2017-05-11 Facebook, Inc. Systems and methods for utilizing compressed convolutional neural networks to perform media content processing

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105068644A (en) * 2015-07-24 2015-11-18 山东大学 Method for detecting P300 electroencephalogram based on convolutional neural network
CN106821681A (en) * 2017-02-27 2017-06-13 浙江工业大学 A kind of upper limbs ectoskeleton control method and system based on Mental imagery
CN107506774A (en) * 2017-10-09 2017-12-22 深圳市唯特视科技有限公司 A kind of segmentation layered perception neural networks method based on local attention mask

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
R. T. Schirrmeister et al., "Deep learning with convolutional neural networks for decoding and visualization of EEG pathology," 2017 IEEE Signal Processing in Medicine and Biology Symposium, 2017-12-02, pp. 1-7. *
Gao Huang et al., "Densely connected convolutional networks," 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017-07-26, pp. 2261-2269. *

Also Published As

Publication number Publication date
CN110069958A (en) 2019-07-30

Similar Documents

Publication Publication Date Title
CN110069958B (en) Electroencephalogram signal rapid identification method of dense deep convolutional neural network
CN108491077A (en) A kind of surface electromyogram signal gesture identification method for convolutional neural networks of being divided and ruled based on multithread
CN112381008B (en) Electroencephalogram emotion recognition method based on parallel sequence channel mapping network
CN114283158A (en) Retinal blood vessel image segmentation method and device and computer equipment
CN108959895B (en) Electroencephalogram EEG (electroencephalogram) identity recognition method based on convolutional neural network
CN108960299B (en) A method for identifying multi-type motor imagery EEG signals
CN111863244B (en) Functional connection mental disease classification method and system based on sparse pooling graph convolution
CN105068644A (en) Method for detecting P300 electroencephalogram based on convolutional neural network
CN112766355B (en) A method for EEG emotion recognition under label noise
CN112990008B (en) Emotion recognition method and system based on three-dimensional feature map and convolutional neural network
CN110598793A (en) Brain function network feature classification method
CN113243924A (en) Identity recognition method based on electroencephalogram signal channel attention convolution neural network
CN111436929A (en) A method for generating and identifying neurophysiological signals
CN115919330A (en) EEG Emotional State Classification Method Based on Multi-level SE Attention and Graph Convolution
He et al. Alzheimer's disease diagnosis model based on three-dimensional full convolutional DenseNet
CN115054272A (en) Electroencephalogram signal identification method and system for dyskinesia function remodeling
CN113017645A (en) P300 signal detection method based on void convolutional neural network
CN112465069A (en) Electroencephalogram emotion classification method based on multi-scale convolution kernel CNN
CN111931656A (en) User independent motor imagery classification model training method based on transfer learning
CN109359610A (en) Method and system for constructing CNN-GB model, data feature classification method
CN114564990A (en) Electroencephalogram signal classification method based on multi-channel feedback capsule network
Abibullaev et al. A brute-force CNN model selection for accurate classification of sensorimotor rhythms in BCIs
Liu et al. P300 event-related potential detection using one-dimensional convolutional capsule networks
CN117371494A (en) Cognitive load analysis method based on multi-objective optimization and group convolution network fusion
CN114863572B (en) Myoelectric gesture recognition method of multi-channel heterogeneous sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant