CN111012336B - Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion - Google Patents
Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion
- Publication number: CN111012336B (application CN201911241265.XA)
- Authority
- CN
- China
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A61B5/369—Electroencephalography [EEG]
- A61B5/7203—Signal processing for noise prevention, reduction or removal
- A61B5/7257—Details of waveform analysis using Fourier transforms
- A61B5/7267—Classification of physiological signals involving training the classification device
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/24—Classification techniques
- G06F18/253—Fusion techniques of extracted features
- G06N3/045—Combinations of networks
- G06T3/4007—Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change
Abstract
The invention discloses a motor imagery electroencephalogram (EEG) recognition method based on a parallel convolutional neural network with spatio-temporal feature fusion. Taking the motor imagery EEG signal as the research object, a new deep network model, a parallel convolutional neural network, is proposed to extract its spatio-temporal features. Unlike traditional EEG classification algorithms, which usually discard the spatial feature information of the EEG, the method extracts the theta (4-8 Hz), alpha (8-12 Hz) and beta (12-36 Hz) waves by fast Fourier transform to generate 2D EEG feature maps. The feature maps are trained with a multi-convolutional neural network to extract spatial features. In addition, a temporal convolutional neural network is trained in parallel to extract time-series features. Finally, the spatial and time-series features are fused and classified with Softmax. Experimental results show that the parallel convolutional network achieves good recognition accuracy and outperforms other state-of-the-art classification algorithms.
Description
Technical Field
The invention belongs to the field of motor imagery electroencephalogram (EEG) classification, and in particular relates to a parallel convolutional neural network motor imagery EEG recognition method based on spatio-temporal feature fusion.
Background
The electroencephalogram recorded at the scalp is a comprehensive reflection of the physiological activity of brain cells and contains a large amount of physiological and pathological information. A brain-computer interface (BCI) communicating via EEG signals can replace the nerve and muscle pathways as a signal transmission channel, thereby enabling interaction between the brain and bionic machinery. As an extension of human-computer interaction, BCI has received a great deal of attention from researchers. Motor imagery EEG recognition is the key node through which a BCI system interacts with the outside world. Motor imagery is subjective imagination performed by the human brain, such as imagining clenching the left hand, clenching the right hand, or flexing and extending a leg. By analyzing motor imagery EEG signals, the motor intention of the brain can be identified and output to the bionic part of the BCI, achieving brain-computer control. Research on motor imagery EEG signal processing therefore advances the exploration of neural cognition, rehabilitation from brain diseases and cortical signal analysis. These potential applications have pushed EEG research into a stage of rapid development, making it one of the most attractive disciplines.
A BCI system has two important parts: feature extraction and feature classification. Common feature extraction methods include the fast Fourier transform (FFT), the common spatial pattern (CSP) and the wavelet transform (WT). They not only require a great deal of manual data processing but are also sensitive to noise and prone to feature confusion. Common feature classification methods include artificial neural networks and support vector machines. Because of the complex generation mechanism of the EEG, these classification methods suffer from shallow iteration depth and insufficient feature extraction.
In recent years, deep learning has been highly successful in the fields of research such as image recognition, natural language processing, power load prediction, and pattern recognition. It is also applied to electroencephalogram data analysis due to its powerful capability of processing non-linear and high-dimensional data.
An EEG signal contains spatial information, represented by the electrode positions, as well as intrinsic temporal information. In the past, however, because EEG acquisition equipment only visualizes time-series channel data, most researchers focused on extracting EEG features from the time series alone. A new network model is therefore needed to extract and fuse the temporal and spatial features of motor imagery EEG and thereby improve its classification performance. The invention provides a parallel deep convolutional neural network that makes full use of spatio-temporal information to enhance EEG feature extraction. A 2D EEG feature map is generated based on the fast Fourier transform to effectively encode the spatio-temporal feature information of the EEG. The convolutional neural network has sparse connections and shared convolution kernel parameters, which reduces the storage footprint of the model and effectively extracts the spatial features of the map. The temporal convolutional neural network, built on dilated convolutions, matches the time-series character of the EEG. The method combines the advantages of both: the convolutional network extracts EEG spatial features, the temporal convolutional network extracts EEG time-series features, and finally the spatio-temporal features are fused by feature splicing.
Disclosure of Invention
The present invention aims to solve the above problems of the prior art by providing a parallel convolutional network motor imagery EEG recognition method with spatio-temporal feature fusion. The technical scheme of the invention is as follows:
A motor imagery EEG recognition method using a parallel convolutional network with spatio-temporal feature fusion comprises the following steps:
Step 1: acquire raw EEG (electroencephalogram) channel data and preprocess it with steps including normalization and mean removal;
Step 2: segment the raw EEG channel data preprocessed in step 1 using an overlapping cutting scheme;
Step 3: perform a fast Fourier transform on each EEG channel obtained in step 2 to extract the theta, alpha and beta frequency bands;
Step 4: compute the sum of squares of the values in each of the theta, alpha and beta bands obtained in step 3;
Step 5: using the per-band sums of squares from step 4, interpolate the 2D channel distribution map with an interpolation algorithm to generate a 2D EEG feature distribution map;
Step 6: train a multi-convolutional neural network on the 2D feature distribution maps generated in step 5;
Step 7: in parallel, train a temporal convolutional neural network on the 2D feature maps from step 5;
Step 8: fuse the spatial and time-series features and classify them with Softmax.
Further, step 1 processes the raw EEG channel data with steps including normalization and mean removal, specifically:
Mean removal: subtract the mean from the amplitude values so that the EEG signal has zero mean;
Normalization: linearly transform the original data so that the result maps into [0, 1].
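The step-1 preprocessing (mean removal followed by min-max normalization into [0, 1]) can be sketched in NumPy as follows; the function name and the toy signal are illustrative, not part of the patent:

```python
import numpy as np

def preprocess_channel(x):
    """Step-1 preprocessing for one EEG channel: mean removal, then
    min-max normalization so the result maps into [0, 1]."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                 # mean removal: signal now averages to 0
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo)      # linear transform into [0, 1]

# toy signal, purely illustrative
out = preprocess_channel([2.0, 4.0, 6.0, 8.0])
```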
Further, step 2 segments the raw EEG channel data preprocessed in step 1 using an overlapping cutting scheme, specifically:
The raw time-series channel EEG data are processed with overlap cutting so that consecutive frames of EEG data extracted within a motor imagery period partially overlap, defined by the formula
x_i = x_(i-1) + f - o*f, i ≠ 0
x_i = 0, i = 0
where x_i is the cutting start point, i is the sample index, f is the sampling frequency, and o*f is the overlap size, o being the cutting weight in the range 0 to 1;
The 14 EEG channels are segmented according to the data matrix [[x_0, x_0+128], [x_1, x_1+128], [x_2, x_2+128], ..., [x_n, x_n+128]], and the data of each time window are arranged so that the data time series is not corrupted.
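The overlap-cutting recurrence above can be sketched as follows; the patent fixes the window length at 128 samples, while the default o = 0.5 here is an illustrative choice of the free cutting weight:

```python
import numpy as np

def segment(channel, f=128, o=0.5):
    """Overlap cutting (step 2): frame starts follow the recurrence
    x_i = x_(i-1) + f - o*f with x_0 = 0, so consecutive 128-sample
    frames share o*f samples."""
    step = int(f - o * f)            # advance between cut points
    starts, x = [], 0
    while x + f <= len(channel):     # keep only complete frames
        starts.append(x)
        x += step
    return np.stack([channel[s:s + f] for s in starts])

frames = segment(np.arange(512.0))   # one toy channel, 512 samples
```

With o = 0.5 each new frame reuses the second half of the previous one, which both expands the data set and avoids discarding samples at frame boundaries.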
Further, step 3 applies a fast Fourier transform to each EEG channel obtained in step 2 to extract the theta, alpha and beta bands, specifically:
For each EEG channel, after preprocessing, a Fourier transform is applied to each frame. Let x ∈ C^N be an EEG signal of length N; the fast Fourier transform is:
X(n) = Σ_{k=0}^{N-1} x(k) * W_N^(nk)
where n = 0, 1, ..., N-1 indexes the frequencies and W_N = e^(-j(2π/N)).
The inverse fast Fourier transform is:
x(k) = (1/N) Σ_{n=0}^{N-1} X(n) * W_N^(-nk)
The real-valued discrete Fourier transform of length N can be obtained from a complex-valued fast Fourier transform of length N/2; since EEG samples are real, for x ∈ R^N the real-valued fast Fourier transform is used.
After the fast Fourier transform, the data matrix x_n for each frequency band is extracted from the bins belonging to the theta, alpha and beta waves.
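A minimal sketch of the band extraction, assuming a 128 Hz sampling rate (an assumption; the patent does not state one) and using NumPy's real-valued FFT, matching the note that the real DFT comes from a half-length complex FFT:

```python
import numpy as np

FS = 128                                   # assumed sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 36)}

def band_values(frame, fs=FS):
    """Step-3 band extraction: FFT one frame and group the magnitude
    spectrum into the theta/alpha/beta bins.  Uses the real-valued FFT
    since EEG samples are real."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return {name: spectrum[(freqs >= lo) & (freqs < hi)]
            for name, (lo, hi) in BANDS.items()}

# a pure 10 Hz tone should show up almost entirely in the alpha band
t = np.arange(FS) / FS
vals = band_values(np.sin(2 * np.pi * 10 * t))
```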
Further, in step 5 a 2D channel distribution map is generated from the acquired EEG electrode position data, and the sum of squares of the values in each band is computed as:
s = Σ_{i=1}^{n} x_i^2
where x_i is a band value and i ranges from 1 to n.
The sums of squares of the theta, alpha and beta values obtained in the previous steps are used as the R, G and B channel values of the image. The 2D channel distribution map is then interpolated with an interpolation algorithm to generate the 2D EEG feature distribution map.
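Steps 4 and 5 can be sketched as follows. The patent names neither the interpolation algorithm nor the electrode layout, so this sketch uses simple inverse-distance weighting over four illustrative electrode positions:

```python
import numpy as np

def band_power(band_vals):
    """Step 4: sum of squares s = sum_i x_i^2 of one band's values."""
    return float(np.sum(np.square(band_vals)))

def interpolate_map(positions, values, size=28):
    """Step 5 (sketch): spread per-electrode values over a size x size
    grid with inverse-distance weighting; positions lie in [0, 1]^2."""
    grid = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            p = np.array([j, i]) / (size - 1)          # grid point in [0,1]^2
            d2 = np.sum((positions - p) ** 2, axis=1)  # squared distances
            w = 1.0 / (d2 + 1e-6)                      # inverse-distance weights
            grid[i, j] = np.sum(w * values) / np.sum(w)
    return grid

# four illustrative electrodes at the unit-square corners
pos = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])   # per-electrode band powers, toy values
m = interpolate_map(pos, vals)
```

Running this per band yields the three 28x28 planes used as the R, G and B channels of the feature map.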
Further, in step 6 the multi-convolutional neural network has the following structure: the input layer takes a 28×28 2D EEG feature map; it is followed by convolution module 1, built from two consecutively stacked convolutional layers, where convolutional layer 1 uses edge filling and convolutional layer 2 uses edge reduction; convolution module 1 is followed by a max pooling layer; convolution module 2 likewise stacks two convolutional layers, with convolutional layer 3 using edge filling and convolutional layer 4 using edge reduction, followed by a max pooling layer; finally a fully connected layer is stacked;
The parameters of the multi-convolutional network are initialized and forward-propagation training is performed; back propagation adjusts the network parameters based on the mean squared error; when the error meets the accuracy requirement, the weights and biases are stored and network training ends; otherwise the weights and biases are iteratively adjusted until the error accuracy requirement is met.
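The feature-map sizes through the step-6 stack can be traced as follows; the 3×3 kernel size and 2×2 pooling window are assumptions (the patent specifies only the edge-filling/edge-reduction padding modes):

```python
def conv_out(h, w, k, mode):
    """Spatial size after a k x k convolution, stride 1."""
    if mode == "edge_filling":       # 'same' padding keeps the size
        return h, w
    return h - k + 1, w - k + 1      # edge reduction ('valid')

def pool_out(h, w, p=2):
    """Spatial size after p x p max pooling."""
    return h // p, w // p

def trace_multi_conv(h=28, w=28, k=3):
    """Trace the step-6 stack: two modules of
    [conv edge_filling -> conv edge_reduction -> max pool], then flatten."""
    shapes = [(h, w)]
    for _ in range(2):                       # convolution modules 1 and 2
        h, w = conv_out(h, w, k, "edge_filling")
        h, w = conv_out(h, w, k, "edge_reduction")
        h, w = pool_out(h, w)
        shapes.append((h, w))
    return shapes

shapes = trace_multi_conv()
```

Under these assumptions the 28×28 input shrinks to 13×13 after module 1 and 5×5 after module 2 before the fully connected layer.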
Further, in step 7 the EEG time-series features are extracted with a temporal convolutional neural network of the following structure: an input layer, followed by a stacked temporal convolutional layer, followed by a fully connected layer;
The original input is 28×28; the input sequence passes through a one-dimensional convolution module to obtain a T×M feature sequence, where T is the time-series length and M is the number of one-dimensional convolution kernels; the one-dimensional convolution applies dilated convolution to the input sequence, and the effective size of its convolution kernel is:
f_kd = (d-1)*(f_k - 1) + f_k
where f_k is the convolution kernel size, f_kd is the kernel size after adding dilation, d is the dilation rate, and k indexes the convolution kernels;
The output of the dilated convolution is passed element-wise through the ReLU activation function, computed as:
f(x) = max(0, x)
where f(x) is the output and x is the input;
A residual module is set up: an identity mapping is added across the dilated connection, so that instead of the original transformation H(x) the network learns the residual F(x) = H(x) - x.
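The dilated-kernel formula, the ReLU activation and the residual learning F(x) = H(x) - x can be sketched together in NumPy; the toy 1-D convolution and the left-padding used for the skip connection are illustrative choices, not specified by the patent:

```python
import numpy as np

def dilated_kernel_size(f_k, d):
    """Effective kernel size of a dilated convolution:
    f_kd = (d - 1) * (f_k - 1) + f_k."""
    return (d - 1) * (f_k - 1) + f_k

def relu(x):
    """ReLU activation f(x) = max(0, x), applied element-wise."""
    return np.maximum(0.0, x)

def dilated_conv1d(x, w, d):
    """1-D dilated convolution over the valid range of x."""
    span = dilated_kernel_size(len(w), d)
    return np.array([np.dot(w, x[t:t + span:d])
                     for t in range(len(x) - span + 1)])

def residual_block(x, w, d):
    """Residual module: the branch learns F(x) = H(x) - x, so the output
    is H(x) = F(x) + x; the branch is left-padded to match len(x)."""
    y = relu(dilated_conv1d(x, w, d))
    y = np.pad(y, (len(x) - len(y), 0))
    return y + x

x = np.arange(6.0)
w = np.array([1.0, 1.0, 1.0])
y_conv = dilated_conv1d(x, w, d=2)
y_res = residual_block(x, w, d=2)
```

With d = 2 a 3-tap kernel covers a span of 5 samples, which is how dilation widens the temporal receptive field without extra parameters.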
Further, step 8 fuses and classifies the spatial and time-series features with Softmax, specifically:
The fully connected layer of the multi-convolutional neural network (containing the spatial features) and the fully connected layer of the temporal convolutional network (containing the time-series features) are extracted, and the spatial and temporal features are fused by feature splicing, defined by the formula:
FC = [FC1, FC2]
where FC is the new fully connected layer, FC1 is the fully connected layer of the multi-convolutional neural network and FC2 is the fully connected layer of the temporal convolutional network; the new fully connected layer is used as the input of the Softmax classifier to realize the classification.
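The step-8 fusion can be sketched as follows: the two fully connected outputs are spliced, FC = [FC1, FC2], and fed to a Softmax classifier. The feature dimensions, class count and random weights are illustrative:

```python
import numpy as np

def softmax(z):
    """Numerically stable Softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_and_classify(fc1, fc2, w, b):
    """Step 8: splice the two fully connected outputs, FC = [FC1, FC2],
    then feed the fused vector to a Softmax classifier."""
    fc = np.concatenate([fc1, fc2])     # feature splicing
    return softmax(w @ fc + b)

rng = np.random.default_rng(0)
fc1 = rng.standard_normal(8)            # spatial features (multi-conv CNN)
fc2 = rng.standard_normal(8)            # time-series features (temporal CNN)
w = rng.standard_normal((4, 16))        # 4 illustrative motor-imagery classes
b = np.zeros(4)
p = fuse_and_classify(fc1, fc2, w, b)
```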
The invention has the following advantages and beneficial effects:
Based on the fast Fourier transform combined with the scalp electrode position data, the method effectively maps the spatio-temporal characteristics of the EEG data into a 2D feature map, overcoming the limitation that EEG acquisition equipment visualizes only time-series channel data. Further, a convolutional neural network is combined with a temporal convolutional neural network to fully mine the spatio-temporal features of the EEG data: the convolutional neural network extracts the spatial features of the EEG, and the temporal convolutional neural network extracts its temporal features. Finally, feature splicing effectively fuses the spatial and temporal features of the EEG, remedying the defect of traditional EEG feature classification, which often discards the spatial features, and improving classification performance.
Drawings
FIG. 1 is a flow chart of a motor imagery electroencephalogram feature extraction and classification method based on space-time feature fusion parallel convolution neural network according to an embodiment of the present invention.
Fig. 2 is a diagram of a parallel convolutional neural network structure.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
As shown in FIG. 1, the motor imagery EEG feature extraction and classification method based on spatio-temporal feature fusion provided by this embodiment comprises the following steps:
Step 1: preprocess the raw data. Raw EEG channel data obtained from experiments generally contain noise such as electromyographic and electrooculographic artifacts and are not suitable for direct network training. Before feature extraction, BCI researchers therefore apply a series of data processing steps to improve the signal-to-noise ratio, such as high-pass filtering, normalization and artifact removal. In this patent the following processing is used. Mean removal: to prevent large difference values from influencing the experiment, the mean is subtracted from the amplitudes so that the EEG signal has zero mean. Normalization: normalizing the data effectively reduces the computational magnitude of the network and speeds up its iteration; the original data are linearly transformed so that the result maps into [0, 1].
Step 2: motor imagery EEG has a strong time-series character, with the cranial nerves under the scalp producing a signal response over a period of time. Mainstream non-invasive EEG acquisition equipment collects signals with copper-sheet or gel sensors. Because of hardware limitations and the latency of the human brain's response, a certain lag arises during data collection. The raw time-series channel EEG data are therefore processed with overlap cutting so that consecutive frames of EEG data extracted within a motor imagery period partially overlap; this discards as little useful data as possible, expands the data set, and better matches the actual timing of the brain's signal response. The formula is defined as
x_i = x_(i-1) + f - o*f, i ≠ 0
x_i = 0, i = 0
where x_i is the cutting start point, i is the sample index, f is the sampling frequency, and o*f is the overlap size, o being the cutting weight in the range 0 to 1.
The 14 EEG channels are segmented according to the data matrix [[x_0, x_0+128], [x_1, x_1+128], [x_2, x_2+128], ..., [x_n, x_n+128]], and the data of each time window are arranged so that the data time series is not corrupted.
Step 3: for each EEG channel, after preprocessing, a Fourier transform is applied to each frame and the theta, alpha and beta frequency bands are extracted.
Step 4: from the theta, alpha and beta bands, the sum of squares of the values in each band is computed as:
s = Σ_{i=1}^{n} x_i^2
where x_i is a band value.
Step 5: based on an interpolation algorithm, the 2D channel distribution map is interpolated to generate a 2D EEG feature distribution map, specifically: a 2D channel distribution map is generated from the acquired channel position data, and the sums of squares of the theta, alpha and beta values obtained in the previous step are used as the R, G and B channel values of the image. The 2D channel distribution map is then interpolated with an interpolation algorithm to generate the 2D EEG feature distribution map.
Step 6: the traditional neural network is improved by stacking several consecutive convolutions, and network training is performed with the 2D feature distribution maps generated in step 5, specifically: the input layer takes a 28×28 2D EEG feature map. It is followed by convolution module 1, built from two consecutively stacked convolutional layers, where convolutional layer 1 uses edge filling and convolutional layer 2 uses edge reduction. Convolution module 1 is followed by a max pooling layer. Convolution module 2 likewise stacks two convolutional layers, with convolutional layer 3 using edge filling and convolutional layer 4 using edge reduction, followed by a max pooling layer; finally a fully connected layer is stacked.
The convolutional layer performs feature extraction: the input is convolved with M convolution kernels and mapped through a nonlinear function to obtain N feature maps. The convolutional layer is computed as:
x_j^l = f( Σ_{i ∈ M_j} x_i^(l-1) * w_ij^l + b_j^l )    (1)
where f is the activation function, M_j is the index set of the feature maps in layer l-1 connected to feature map j, w is the convolution kernel and b is the bias term.
The pooling layer performs feature dimensionality reduction and is computed as:
x_j^l = down(x_j^(l-1), N_l)    (2)
where down() is the sampling function, N_l is the window size required by the l-th subsampling layer, and x_j^l is the j-th output feature of layer l.
The network parameter weights {w, b} are initialized and forward-propagation training is carried out according to (1) and (2). Back propagation adjusts the network parameters {w, b} based on the mean squared error. When the error meets the accuracy requirement, the weights and biases are stored and network training ends; otherwise the weights and biases are iteratively adjusted until the error accuracy requirement is met.
Step 7: the 2D feature maps of step 5 are simultaneously trained in parallel based on a temporal convolutional neural network, specifically as follows: a temporal convolution layer is stacked after the input layer, and a fully connected layer is stacked after the temporal convolution layer.
The original input is 28 × 28. The input sequence is processed by a one-dimensional convolution module to obtain a T × M feature sequence, where T is the length of the time series and M is the number of one-dimensional convolution kernels. The one-dimensional convolution applies dilated convolution to the input sequence, and its effective convolution kernel size is computed as:
f_k_d = (d-1)*(f_k - 1) + f_k

where f_k denotes the convolution kernel size, f_k_d denotes the kernel size after adding the dilated convolution, d is the dilation rate, and k is the number of convolution kernels;
the output of the spreading convolution is acted on by the activation function renl for each element. The calculation formula of the ReUL function is as follows:
f(x)=max(0,x)
where f (x) is the output and x is the input.
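The two formulas above can be checked directly; a minimal sketch:

```python
def dilated_kernel_size(f_k, d):
    """Effective kernel size after dilation: f_k_d = (d-1)*(f_k-1) + f_k."""
    return (d - 1) * (f_k - 1) + f_k

def relu(x):
    """ReLU activation: f(x) = max(0, x)."""
    return max(0.0, x)

print(dilated_kernel_size(3, 1))  # 3: dilation rate 1 leaves the kernel as-is
print(dilated_kernel_size(3, 4))  # 9: a 3-tap kernel spans 9 samples at d = 4
print(relu(-2.5), relu(2.5))      # 0.0 2.5
```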
A residual module is set up: based on the residual, an identity-mapping skip connection is added across the dilated convolutions, changing the transformation to be learned from H(x) to F(x) = H(x) - x.
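An illustrative residual block in NumPy (the branch function is a stand-in for the dilated-convolution stack): since the branch only has to learn F(x) = H(x) - x, an identity mapping H(x) = x corresponds to the easy solution F = 0:

```python
import numpy as np

def residual_block(x, branch):
    """Output F(x) + x = H(x); the identity is carried by the skip connection."""
    return branch(x) + x

x = np.array([1.0, -2.0, 3.0])
h = residual_block(x, lambda v: np.zeros_like(v))  # F = 0 -> H(x) = x
print(h)  # [ 1. -2.  3.]
```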
Step 8: the spatial features and the time-sequence features are fused and classified based on Softmax, specifically as follows: the fully connected layer of the multiple convolutional neural network containing the spatial features and the fully connected layer of the temporal convolutional network containing the time-sequence features are extracted, and the two fully connected layers of the parallel networks are spliced by feature concatenation, defining the formula:

FC = [FC1, FC2]
the FC is a new full-link layer, the FC1 is a multiple convolution neural network full-link layer, and the FC2 is a time convolution convolutional neural network full-link layer. And taking the new full connection layer as the input of Softmax, performing feature fusion on the full connection layer containing the space-time features, and testing the classification performance.
The above examples are to be construed as merely illustrative and not limiting of the remainder of the disclosure. After reading this description, a person skilled in the art may make various changes or modifications to the invention, and such equivalent changes and modifications likewise fall within the scope of the invention defined by the claims.
Claims (7)
1. A parallel convolution network motor imagery electroencephalogram classification method based on spatiotemporal feature fusion is characterized by comprising the following steps:
step 1: acquiring original EEG channel data, and processing the original EEG channel data by adopting the steps including normalization and mean value removal;
step 2: segmenting the original EEG channel data preprocessed in the step 1 based on an overlapping cutting mode;
and step 3: performing wavelet transformation on each EEG channel obtained in the step 2 to obtain three frequency bands of Theta wave, alpha wave and beta wave;
and 4, step 4: solving the sum of squares of values of each frequency band of the Theta wave, the alpha wave and the beta wave obtained in the step 3;
and 5: interpolating the 2D channel distribution map based on an interpolation algorithm by using the sum of squares of each frequency band value obtained in the step 4 to generate a 2D electroencephalogram feature distribution map;
step 6: performing network training on the 2D feature distribution map generated in the step 5 by adopting a multiple convolutional neural network;
and 7: simultaneously performing parallel training on the 2D characteristic graphs in the step 5 based on a time convolution neural network;
and 8: fusing and classifying the spatial features and the time sequence features based on Softmax;
the step 8 of fusing and classifying the spatial features and the time sequence features based on Softmax specifically includes:
extracting a full-connection layer containing a spatial feature multiple convolution neural network and a full-connection layer containing a time sequence feature time convolution network, fusing the spatial and temporal features based on a feature splicing mode, and defining a formula:
the FC is a new full-connection layer, the FC1 is a multiple convolution neural network full-connection layer, the FC2 is a time convolution convolutional neural network full-connection layer, and the new full-connection layer is used as the input of a classifier Softmax to realize classification.
2. The electroencephalogram classification method based on spatiotemporal feature fusion and parallel convolutional network motor imagery according to claim 1, wherein step 1 processes the raw EEG channel data with steps including normalization and de-meaning, and specifically comprises:

de-meaning: the mean value is subtracted from the amplitude in the data, so that the mean of the electroencephalogram signal is 0;

normalization: the original data is linearly transformed so that the result is mapped into [0, 1].
3. The electroencephalogram classification method based on spatiotemporal feature fusion and parallel convolutional network motor imagery according to claim 2, wherein step 2 segments the raw EEG channel data preprocessed in step 1 based on an overlap-cutting manner, and specifically comprises:

processing the raw time-series channel EEG data based on overlap cutting, so that each frame of EEG data extracted during the motor imagery period has a partial overlap, defining the formula

x_i = x_{i-1} + f - o*f,  i ≠ 0
x_i = 0,  i = 0

where x_i is the cutting start point, i is the sample index, f is the frequency size, and o*f is the overlap size, o being the cutting weight ranging from 0 to 1;

according to the data matrix [[x_0, x_0+128], [x_1, x_1+128], [x_2, x_2+128], ..., [x_n, x_n+128]], the 14 EEG channels are segmented, and the data of each time window are arranged to ensure that the time series of the data is not corrupted.
4. The electroencephalogram classification method based on spatiotemporal feature fusion and parallel convolutional network motor imagery according to claim 3, wherein step 3 performs wavelet transformation on each EEG channel obtained in step 2 to obtain the three frequency bands of Theta wave, alpha wave and beta wave, and specifically comprises:

for each EEG channel, after preprocessing, performing a Fourier transform on the data of each frame; let x ∈ C^N be an EEG signal of length N, then the fast Fourier transform is:

X(k) = Σ_{n=0}^{N-1} x(n) · W_N^{nk},  k = 0, 1, ..., N-1

where k = 0, 1, ..., N-1 indexes the different frequencies and W_N = e^{-j(2π/N)};

the inverse fast Fourier transform is:

x(n) = (1/N) · Σ_{k=0}^{N-1} X(k) · W_N^{-nk}

the real-valued discrete Fourier transform of length N is obtained from a complex-valued fast Fourier transform of length N/2, with x ∈ R^N, giving the real-valued fast Fourier transform;

after the fast Fourier transform, the Theta, alpha and beta frequency bands are obtained, and the data matrix x_n containing each frequency band is extracted.
5. The electroencephalogram classification method based on spatiotemporal feature fusion and parallel convolutional network motor imagery according to claim 4, wherein in step 5 a 2D channel distribution map is generated according to the acquired electroencephalogram channel position data, and the sum of squares of each frequency band's values is computed with the calculation formula:

E = Σ_{i=1}^{n} x_i^2

where x_i is a frequency band value and i ranges from 1 to n;

the sums of squares of the Theta, alpha and beta values obtained in the previous step are taken as the three RGB (red, green, blue) channel values of the image, and the 2D channel distribution map is interpolated based on an interpolation algorithm to generate the 2D electroencephalogram feature distribution map.
6. The electroencephalogram classification method based on spatiotemporal feature fusion and parallel convolutional network motor imagery according to claim 5, wherein in step 6 the specific network structure of the multiple convolutional neural network is as follows: the input layer is a 28 × 28 2D electroencephalogram feature map; the input layer is followed by convolution module 1, composed of two convolutional layers stacked in series, where convolutional layer 1 uses the edge-filling mode and convolutional layer 2 uses the edge-reduction mode; convolution module 1 is followed by a max-pooling layer; convolution module 2 is likewise formed by stacking two convolutional layers, where convolutional layer 3 uses the edge-filling mode and convolutional layer 4 uses the edge-reduction mode; convolution module 2 is followed by a max-pooling layer, and finally a fully connected layer is stacked;

the multiple-convolution network parameters are initialized and forward-propagation training is carried out; the network parameters are adjusted by back propagation based on the mean square error; when the error meets the accuracy requirement, the weights and biases are stored and network training ends; otherwise the weights and biases continue to be adjusted iteratively until the error accuracy requirement is met.
7. The electroencephalogram classification method based on spatiotemporal feature fusion and parallel convolutional network motor imagery according to claim 6, wherein in step 7 the EEG data time-sequence features are extracted based on a temporal convolutional neural network, the specific network structure being: an input layer, followed by a stacked temporal convolution layer, after which a fully connected layer is stacked;

the original input is 28 × 28; the input sequence is processed by a one-dimensional convolution module to obtain a T × M feature sequence, where T is the length of the time series and M is the number of one-dimensional convolution kernels; the one-dimensional convolution applies dilated convolution to the input sequence, and its effective convolution kernel size is computed as:

f_k_d = (d-1)*(f_k - 1) + f_k

where f_k denotes the convolution kernel size, f_k_d denotes the kernel size after adding the dilated convolution, d is the dilation rate, and k is the number of convolution kernels;

the output of the dilated convolution is passed element-wise through the ReLU activation function, whose calculation formula is:

f(x) = max(0, x)

where f(x) is the output and x is the input;

a residual module is set up, an identity-mapping skip connection is added across the dilated convolutions based on the residual, and the input is subtracted from the original learning transformation H(x), i.e. F(x) = H(x) - x.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911241265.XA CN111012336B (en) | 2019-12-06 | 2019-12-06 | Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111012336A CN111012336A (en) | 2020-04-17 |
CN111012336B true CN111012336B (en) | 2022-08-23 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||