CN111012336A - Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion

Info

Publication number
CN111012336A
CN111012336A (application CN201911241265.XA)
Authority
CN
China
Prior art keywords
convolution
electroencephalogram
network
data
eeg
Prior art date
Legal status
Granted
Application number
CN201911241265.XA
Other languages
Chinese (zh)
Other versions
CN111012336B (en)
Inventor
唐贤伦
孔德松
邹密
刘行谋
马伟昌
李伟
王婷
彭德光
李锐
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN201911241265.XA
Publication of CN111012336A
Application granted
Publication of CN111012336B
Legal status: Active
Anticipated expiration

Classifications

    • A61B5/369 Electroencephalography [EEG]
    • A61B5/7203 Signal processing specially adapted for physiological signals, for noise prevention, reduction or removal
    • A61B5/7235 Details of waveform analysis
    • A61B5/7253 Details of waveform analysis characterised by using transforms
    • A61B5/7257 Details of waveform analysis using Fourier transforms
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, involving training the classification device
    • G06F18/214 Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06F18/253 Fusion techniques of extracted features
    • G06N3/045 Combinations of networks
    • G06T3/4007 Interpolation-based scaling, e.g. bilinear interpolation
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention claims protection for a parallel convolutional neural network method for motor imagery electroencephalogram recognition with fused spatio-temporal features. Taking the motor imagery electroencephalogram signal as the research object, a new deep network model, the parallel convolutional neural network, is proposed to extract its spatio-temporal features. Unlike traditional electroencephalogram classification algorithms, which usually discard the spatial feature information of the EEG, 2D EEG feature maps are generated by extracting the theta (4-8 Hz), alpha (8-12 Hz) and beta (12-36 Hz) waves with the fast Fourier transform. A multiple convolutional neural network is trained on the EEG feature maps to extract spatial features, while a temporal convolutional neural network is trained in parallel to extract time-series features. Finally, the spatial and time-series features are fused and classified with Softmax. Experimental results show that the parallel convolutional neural network achieves good recognition accuracy and outperforms other state-of-the-art classification algorithms.

Description

Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion
Technical Field
The invention belongs to the field of motor imagery electroencephalogram (EEG) classification, and particularly relates to a parallel convolutional neural network motor imagery EEG recognition method based on spatio-temporal feature fusion.
Background
The electroencephalogram (EEG) is a comprehensive reflection of the physiological activity of brain cells measured at the scalp and contains a large amount of physiological and disease information. A brain-computer interface (BCI) that communicates through EEG signals can replace the transmission pathways of cranial nerves and muscle tissue as a signal channel, thereby realizing interaction between the brain and bionic machinery. As an extension of human-computer interaction, BCI has received a great deal of attention from researchers and scientists. Motor imagery EEG recognition is a key node through which a BCI system interacts with the outside world. Motor imagery is subjective imagination performed by the human brain, such as imagining a left-hand grasp, a right-hand grasp, or leg flexion and extension. By analyzing the motor imagery EEG signal, the motor intention of the human brain can be identified and output to the bionic system of the BCI, thereby realizing brain-computer control. Research on motor imagery EEG signal processing therefore advances the exploration of cerebral-nerve cognition, brain disease rehabilitation and cortical signal analysis. These potential applications have pushed EEG research into a stage of rapid development, making it one of the most attractive disciplines.
A BCI system has two important parts: feature extraction and feature classification. Common feature extraction methods include the fast Fourier transform (FFT), the common spatial pattern (CSP) and the wavelet transform (WT); they not only require a great deal of manual data processing but are also sensitive to noise and prone to feature confusion. Common feature classification methods include artificial neural networks and support vector machines. Owing to the complex generation mechanism of the EEG, these feature classification methods suffer from shallow iteration depth and insufficient feature extraction.
In recent years, deep learning has achieved great success in research fields such as image recognition, natural language processing, power load prediction and pattern recognition. Thanks to its powerful capability for processing nonlinear, high-dimensional data, it has also been applied to EEG data analysis.
The EEG signal contains spatial information, represented by the electrode positions, as well as intrinsic temporal information. In the past, however, EEG acquisition equipment only visualized time-series channel data, so most researchers focused on how to extract EEG features in the time domain. A new network model is therefore needed to extract and fuse the temporal and spatial features of motor imagery EEG and thereby improve its classification performance. The invention provides a parallel deep convolutional neural network that makes full use of spatio-temporal information to enhance EEG feature extraction. Based on the fast Fourier transform, a 2D EEG feature map is generated to effectively encode the spatio-temporal feature information of the EEG. The convolutional neural network has sparse connections and shared convolution-kernel parameters, which reduces the storage requirement of the model and effectively extracts the spatial features of the map. The temporal convolutional neural network is built on dilated convolutions and matches the time-series character of the EEG. The method combines the advantages of both: a parallel convolutional network is constructed, the EEG spatial features are extracted by the convolutional branch, the EEG time-series features are extracted by the temporal convolutional branch, and finally the EEG spatio-temporal features are fused by feature splicing.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art by providing a parallel convolutional network motor imagery electroencephalogram recognition method with fused spatio-temporal features. The technical scheme of the invention is as follows:
a parallel convolution network motor imagery electroencephalogram identification method based on spatiotemporal feature fusion comprises the following steps:
step 1: acquiring original EEG (electroencephalogram) channel data, and processing the original EEG channel data by adopting the steps including normalization and mean value removal;
step 2: segmenting the original EEG channel data preprocessed in the step 1 based on an overlapping cutting mode;
and step 3: performing wavelet transformation on each EEG channel obtained in the step 2 to obtain three frequency bands of Theta wave, alpha wave and beta wave;
and 4, step 4: solving the sum of squares of values of each frequency band of the Theta wave, the alpha wave and the beta wave obtained in the step 3;
and 5: interpolating the 2D channel distribution map based on an interpolation algorithm by using the sum of squares of each frequency band value obtained in the step 4 to generate a 2D electroencephalogram feature distribution map;
step 6: performing network training on the 2D feature distribution diagram generated in the step 5 by adopting a multiple convolutional neural network;
and 7: and simultaneously carrying out parallel training on the 2D feature maps in the step 5 based on the time convolution neural network.
And 8: and fusing and classifying the spatial features and the time sequence features based on Softmax.
Further, the step 1 of processing the raw EEG channel data with steps including normalization and mean removal specifically includes:
Mean removal: the mean of the data is subtracted from each amplitude value so that the EEG signal has a mean of 0;
Normalization: the original data are linearly transformed so that the result maps into [0, 1].
Further, the step 2 of segmenting the raw EEG channel data preprocessed in step 1 in an overlap-cutting manner specifically includes:
the raw time-series channel EEG data are processed by overlap cutting so that the EEG data frames extracted within a motor imagery period partially overlap, defined by the formula
$$x_i = x_{i-1} + f - o \cdot f, \quad i \neq 0; \qquad x_0 = 0$$
where $x_i$ is the cutting start point of the $i$-th sample, $f$ is the frequency size, and $o \cdot f$ is the overlap size, $o$ being the cutting weight in the range 0 to 1;
according to the data matrix $[[x_0, x_0+128],\,[x_1, x_1+128],\,[x_2, x_2+128],\,\ldots,\,[x_n, x_n+128]]$, the 14 EEG channels are segmented, and the data in each time window are arranged so that the time series of the data is not corrupted.
Further, the step 3 of performing a fast Fourier transform on each EEG channel obtained in step 2 to obtain the three frequency bands of theta, alpha and beta waves specifically includes:
For each EEG channel, after preprocessing, a Fourier transform is applied to each frame of data. Let $x \in \mathbb{C}^N$ be an EEG signal of length $N$; the fast Fourier transform is:
$$X(n) = \sum_{k=0}^{N-1} x(k)\, W_N^{nk}, \quad n = 0, 1, \ldots, N-1$$
where $n$ indexes the different frequencies and $W_N = e^{-j(2\pi/N)}$.
The inverse fast Fourier transform is:
$$x(k) = \frac{1}{N} \sum_{n=0}^{N-1} X(n)\, W_N^{-nk}$$
The real-valued discrete Fourier transform of length $N$ is obtained from a complex-valued fast Fourier transform of length $N/2$. Let $x \in \mathbb{R}^N$; splitting $x$ into its even- and odd-indexed samples with length-$N/2$ transforms $E(n)$ and $O(n)$, the real-valued fast Fourier transform is:
$$X(n) = E(n) + W_N^{n}\, O(n), \quad n = 0, 1, \ldots, N/2 - 1$$
After the fast Fourier transform, the data matrix $x_n$ containing each frequency band is extracted according to the bands to which the theta, alpha and beta waves belong.
Further, in the step 5, a 2D channel distribution map is generated from the acquired EEG channel position data, and the sum of squares of the values of each frequency band is computed as:
$$\sum_{i=1}^{n} x_i^2$$
where $x_i$ is a frequency band value and $i$ ranges from 1 to $n$.
The sums of squares of the theta, alpha and beta values obtained in the previous steps are used as the R, G and B channel values of the image. The 2D channel distribution map is interpolated with an interpolation algorithm to generate the 2D EEG feature distribution map.
Further, in step 6, the specific structure of the multiple convolutional neural network is as follows: the input layer is a 28 × 28 2D EEG feature map; the input layer is followed by convolution module 1, composed of two consecutively stacked convolutional layers, where convolutional layer 1 uses edge filling and convolutional layer 2 uses edge reduction; convolution module 1 is followed by a max-pooling layer; convolution module 2 is likewise composed of two consecutively stacked convolutional layers, where convolutional layer 3 uses edge filling and convolutional layer 4 uses edge reduction; convolution module 2 is followed by a max-pooling layer, and finally a fully connected layer is stacked;
The parameters of the multiple convolutional network are initialized and forward-propagation training is carried out; the network parameters are adjusted by back propagation based on the mean square error; when the error meets the accuracy requirement, the weights and biases are stored and network training ends; otherwise the weights and biases are iteratively adjusted until the error accuracy requirement is met.
Further, in step 7, the EEG time-series features are extracted with a temporal convolutional neural network, whose specific structure is as follows: an input layer, followed by a stacked temporal convolutional layer, followed by a stacked fully connected layer;
The original input is 28 × 28; the input sequence is passed through a one-dimensional convolution module to obtain a T × M feature sequence, where T is the length of the time series and M is the number of one-dimensional convolution kernels; the one-dimensional convolution is applied to the input sequence as a dilated convolution, and the effective convolution kernel size is:
$$f_{k_d} = (d - 1)(f_k - 1) + f_k$$
where $f_k$ is the original convolution kernel size, $f_{k_d}$ is the kernel size after adding the dilated convolution, and $d$ is the dilation rate; for example, a kernel of size $f_k = 3$ with dilation rate $d = 2$ has an effective size of 5;
The output of the dilated convolution is passed element-wise through the ReLU activation function, computed as:
$$f(x) = \max(0, x)$$
where $f(x)$ is the output and $x$ is the input;
A residual module is set up: an identity skip connection is added to the dilated convolutions based on the residual, so that instead of the original transformation $H(x)$ the network learns the input subtracted from it, i.e. $F(x) = H(x) - x$.
Further, the step 8 of fusing and classifying the spatial and time-series features with Softmax specifically includes:
extracting the fully connected layer of the multiple convolutional neural network, which contains the spatial features, and the fully connected layer of the temporal convolutional network, which contains the time-series features, and fusing the spatial and temporal features by feature splicing, defined by the formula:
$$FC = [FC_1, FC_2]$$
where $FC$ is the new fully connected layer, $FC_1$ is the fully connected layer of the multiple convolutional neural network, and $FC_2$ is the fully connected layer of the temporal convolutional network; the new fully connected layer is used as the input of the classifier Softmax to realize classification.
The invention has the following advantages and beneficial effects:
The method is based on the fast Fourier transform and, combined with the scalp electrode position data, effectively maps the spatio-temporal characteristics of the EEG data into the 2D feature map, alleviating the limitation that EEG acquisition equipment only visualizes time-series channel data. Further, a convolutional neural network is combined with a temporal convolutional neural network to fully mine the spatio-temporal characteristics of the EEG data: the convolutional neural network extracts the spatial features of the EEG, and the temporal convolutional neural network extracts its temporal features. Finally, by feature splicing, the spatial and temporal features of the EEG are effectively fused together, remedying the defect of traditional EEG feature classification, which often discards the spatial features, and improving classification performance.
Drawings
FIG. 1 is a flow chart of the motor imagery EEG feature extraction and classification method based on a spatio-temporal feature fusion parallel convolutional neural network, according to an embodiment of the present invention.
Fig. 2 is a diagram of a parallel convolutional neural network structure.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and in detail with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
as shown in the figure, the method for extracting and classifying the motor imagery electroencephalogram features based on the space-time feature fusion provided by the embodiment comprises the following steps:
step 1: the raw data is preprocessed. In general, raw EEG channel data obtained from experiments includes noises such as myoelectricity and electrooculogram, and is not suitable for direct network training. Therefore, before feature extraction, BCI researchers perform a series of data processing procedures to improve the signal-to-noise ratio, such as high-pass filtering, normalization, and shadow removal. In this patent we take the following data processing method: and (6) removing the average value. To prevent the effect of large differences on the experiment, the amplitude was subtracted from the mean in the data so that the mean of the brain electrical signals was 0; and (6) normalizing. Normalizing the data can effectively reduce the computational magnitude of the network, quicken the iteration of the network, and perform linear transformation on the original data to enable the result to be mapped between [0,1 ].
Step 2: motor imagery EEG has a strong time-series character, with the peripheral cranial nerves of the scalp producing a signal response over a period of time. Mainstream non-invasive EEG acquisition equipment collects signals with copper-sheet or gel sensors. Owing to the hardware and to the dynamics of the human brain response, a certain lag arises during data collection. The raw time-series channel EEG data are therefore processed by overlap cutting, so that the EEG data frames extracted within a motor imagery period partially overlap; this discards as little useful data as possible, expands the data set, and better matches the actual time course of the brain's signal response. The formula is defined as
$$x_i = x_{i-1} + f - o \cdot f, \quad i \neq 0; \qquad x_0 = 0$$
where $x_i$ is the cutting start point of the $i$-th sample, $f$ is the frequency size, and $o \cdot f$ is the overlap size, $o$ being the cutting weight in the range 0 to 1.
According to the data matrix $[[x_0, x_0+128],\,[x_1, x_1+128],\,[x_2, x_2+128],\,\ldots,\,[x_n, x_n+128]]$, the 14 EEG channels are segmented, and the data in each time window are arranged so that the time series of the data is not corrupted.
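A minimal sketch of the overlap cutting (our illustration; the 128-sample frame follows the data matrix above, while the cutting weight o = 0.5 is an assumed value):

```python
import numpy as np

def overlap_cut(eeg, f=128, o=0.5):
    """Segment (channels, samples) EEG into overlapping frames of f samples.

    Implements x_i = x_{i-1} + f - o*f with x_0 = 0.
    """
    step = int(f - o * f)                     # advance between frame start points
    starts = range(0, eeg.shape[1] - f + 1, step)
    # Samples inside each window keep their order, so the time series is intact.
    return np.stack([eeg[:, s:s + f] for s in starts])

frames = overlap_cut(np.random.randn(14, 1280))   # 14 channels, as in the text
print(frames.shape)                               # (19, 14, 128)
```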
Step 3: for each EEG channel, after preprocessing, a Fourier transform is applied to each frame of data, and the three frequency bands of theta, alpha and beta waves are extracted.
Step 4: from the three frequency bands of theta, alpha and beta waves, the sum of squares of the values of each band is computed as:
$$\sum_{i=1}^{n} x_i^2$$
where $x_i$ is a frequency band value.
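A minimal sketch of steps 3 and 4 together (our illustration; the 128 Hz sampling rate is an assumption, and the band edges follow the abstract's theta 4-8 Hz, alpha 8-12 Hz and beta 12-36 Hz):

```python
import numpy as np

BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 36)}

def band_power(frame, fs=128):
    """Per-channel sum of squares of the FFT values inside each band.

    frame: (channels, samples) array; fs is an assumed sampling rate.
    Returns {band name: (channels,) array of sums of squares}.
    """
    spec = np.fft.rfft(frame, axis=-1)                    # real-valued FFT per channel
    freqs = np.fft.rfftfreq(frame.shape[-1], d=1.0 / fs)  # bin frequencies in Hz
    return {name: np.sum(np.abs(spec[:, (freqs >= lo) & (freqs < hi)]) ** 2, axis=-1)
            for name, (lo, hi) in BANDS.items()}

powers = band_power(np.random.randn(14, 128))
print({k: v.shape for k, v in powers.items()})            # each band: (14,)
```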
Step 5: based on an interpolation algorithm, the 2D channel distribution map is interpolated to generate a 2D EEG feature distribution map. Specifically, a 2D channel distribution map is generated from the acquired channel position data, and the sums of squares of the theta, alpha and beta values obtained in the previous step are used as the R, G and B channel values of the image. The 2D channel distribution map is then interpolated with an interpolation algorithm to generate the 2D EEG feature distribution map.
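A sketch of step 5 (our illustration; cubic interpolation via scipy.interpolate.griddata is an assumption, since the patent only requires "an interpolation algorithm", and the electrode coordinates below are random stand-ins for the real 2D-projected positions):

```python
import numpy as np
from scipy.interpolate import griddata

def feature_map(chan_xy, theta, alpha, beta, size=28):
    """Interpolate per-channel band powers onto a size x size map whose three
    planes serve as the image's R, G and B channels.

    chan_xy: (channels, 2) electrode positions in [0, 1]^2 (assumed given);
    theta/alpha/beta: (channels,) sums of squares from step 4.
    """
    gx, gy = np.mgrid[0:1:size * 1j, 0:1:size * 1j]      # regular target grid
    img = np.zeros((size, size, 3))
    for c, vals in enumerate((theta, alpha, beta)):       # one band per plane
        img[:, :, c] = griddata(chan_xy, vals, (gx, gy),
                                method="cubic", fill_value=0.0)
    return img

xy = np.random.rand(14, 2)                                # stand-in electrode layout
t, a, b = (np.random.rand(14) for _ in range(3))
print(feature_map(xy, t, a, b).shape)                     # (28, 28, 3)
```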
Step 6: the traditional neural network is improved by stacking several consecutive convolutions, and network training is carried out with the 2D feature distribution map generated in step 5. Specifically: the input layer is a 28 × 28 2D EEG feature map. The input layer is followed by convolution module 1, composed of two consecutively stacked convolutional layers, where convolutional layer 1 uses edge filling and convolutional layer 2 uses edge reduction. Convolution module 1 is followed by a max-pooling layer. Convolution module 2 is likewise composed of two consecutively stacked convolutional layers, where convolutional layer 3 uses edge filling and convolutional layer 4 uses edge reduction; convolution module 2 is followed by a max-pooling layer, and finally a fully connected layer is stacked.
The convolutional layer performs feature extraction: the input is convolved with M convolution kernels and mapped through a nonlinear function to obtain N feature maps. The convolutional layer is computed as:
$$x_j^l = f\Big(\sum_{i \in M_j} x_i^{l-1} * w_{ij}^l + b_j^l\Big) \qquad (1)$$
where $f$ is the activation function, $M_j$ is the index set of the input feature maps for feature map $j$ in layer $l$, $w$ is the convolution kernel term, and $b$ is the bias term.
The pooling layer performs feature dimensionality reduction and is computed as:
$$x_j^l = f\big(\mathrm{down}(x_j^{l-1}, N^l) + b_j^l\big) \qquad (2)$$
where $\mathrm{down}(\cdot)$ is the sampling function, $N^l$ is the window size required by the $l$-th sub-sampling layer, and $x_j^l$ is the $j$-th output feature of layer $l$.
The network parameter weights $\{w, b\}$ are initialized and forward-propagation training is carried out according to (1) and (2). The network parameters $\{w, b\}$ are adjusted by back propagation based on the mean square error. When the error meets the accuracy requirement, the weights and biases are stored and network training ends; otherwise the weights and biases are iteratively adjusted until the error accuracy requirement is met.
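A minimal Keras sketch of this branch (our illustration, not the patent's exact configuration: the filter counts, 3 × 3 kernels, ReLU activations, 3-channel input and 128-unit fully connected layer are assumptions; the layer order follows the text, with "same" padding standing in for edge filling and "valid" padding for edge reduction):

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_spatial_cnn(num_filters=32):
    """Convolution module 1, max pooling, convolution module 2, max pooling,
    then a fully connected layer (FC1, the spatial feature vector)."""
    return keras.Sequential([
        layers.Input((28, 28, 3)),
        layers.Conv2D(num_filters, 3, padding="same", activation="relu"),      # conv 1: edge filling
        layers.Conv2D(num_filters, 3, padding="valid", activation="relu"),     # conv 2: edge reduction
        layers.MaxPooling2D(2),
        layers.Conv2D(num_filters * 2, 3, padding="same", activation="relu"),  # conv 3: edge filling
        layers.Conv2D(num_filters * 2, 3, padding="valid", activation="relu"), # conv 4: edge reduction
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),                                  # fully connected layer
    ])
```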
Step 7: in parallel, the 2D feature maps of step 5 are trained on a temporal convolutional neural network, specifically: an input layer, followed by a stacked temporal convolutional layer, followed by a stacked fully connected layer.
The original input is 28 × 28; the input sequence is passed through a one-dimensional convolution module to obtain a T × M feature sequence, where T is the length of the time series and M is the number of one-dimensional convolution kernels. The one-dimensional convolution is applied to the input sequence as a dilated convolution, and the effective convolution kernel size is:
$$f_{k_d} = (d - 1)(f_k - 1) + f_k$$
where $f_k$ is the original convolution kernel size, $f_{k_d}$ is the kernel size after adding the dilated convolution, and $d$ is the dilation rate.
The output of the dilated convolution is passed element-wise through the ReLU activation function, computed as:
$$f(x) = \max(0, x)$$
where $f(x)$ is the output and $x$ is the input.
A residual module is set up: an identity skip connection is added to the dilated convolutions based on the residual, so that the learned transformation becomes $F(x) = H(x) - x$ instead of the original $H(x)$.
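A minimal Keras sketch of this temporal branch (our illustration; the filter count, kernel size 3, dilation rate 2 and 128-unit fully connected layer are assumptions, and causal padding is one common way to realize the dilated time-series convolution):

```python
from tensorflow import keras
from tensorflow.keras import layers

def tcn_block(x, filters=32, kernel=3, d=2):
    """Dilated 1D convolution with a residual (identity) connection."""
    # Dilated causal convolution: effective kernel size (d-1)*(kernel-1)+kernel.
    h = layers.Conv1D(filters, kernel, dilation_rate=d,
                      padding="causal", activation="relu")(x)
    if x.shape[-1] != filters:
        # 1x1 convolution so the identity shortcut matches the channel width.
        x = layers.Conv1D(filters, 1, padding="same")(x)
    return layers.Add()([x, h])   # residual: output = x + F(x), with F(x) = H(x) - x

inp = layers.Input((28, 28))                       # 28 time steps x 28 features
feat = layers.Flatten()(tcn_block(inp))
out = layers.Dense(128, activation="relu")(feat)   # FC2, the temporal feature vector
temporal_net = keras.Model(inp, out)
```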
Step 8: the spatial and time-series features are fused and classified with Softmax, specifically: the fully connected layer of the multiple convolutional neural network, which contains the spatial features, and the fully connected layer of the temporal convolutional network, which contains the time-series features, are extracted, and the two fully connected layers of the parallel network are spliced by feature concatenation, defined by the formula:
$$FC = [FC_1, FC_2]$$
where $FC$ is the new fully connected layer, $FC_1$ is the fully connected layer of the multiple convolutional neural network, and $FC_2$ is the fully connected layer of the temporal convolutional network. The new fully connected layer, fusing the spatio-temporal features, is used as the input of Softmax, and the classification performance is tested.
The above examples are to be construed as merely illustrative and not limiting of the remainder of the disclosure. After reading the description of the invention, those skilled in the art can make various changes or modifications to the invention, and these equivalent changes and modifications likewise fall within the scope of the invention defined by the claims.

Claims (8)

1. A parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion, characterized by comprising the following steps:
Step 1: acquiring raw EEG channel data and preprocessing it, including normalization and mean removal;
Step 2: segmenting the raw EEG channel data preprocessed in step 1 in an overlap-cutting manner;
Step 3: performing a fast Fourier transform on each EEG channel obtained in step 2 to obtain the three frequency bands of theta, alpha and beta waves;
Step 4: computing the sum of squares of the values of each of the theta, alpha and beta bands obtained in step 3;
Step 5: interpolating a 2D channel distribution map with an interpolation algorithm, using the sums of squares obtained in step 4, to generate a 2D EEG feature distribution map;
Step 6: training a multiple convolutional neural network on the 2D feature distribution map generated in step 5;
Step 7: in parallel, training a temporal convolutional neural network on the 2D feature maps of step 5;
Step 8: fusing and classifying the spatial and time-series features with Softmax.
2. The parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion according to claim 1, wherein the step 1 of processing the raw EEG channel data with steps including normalization and mean removal specifically comprises:
mean removal: the mean of the data is subtracted from each amplitude value so that the EEG signal has a mean of 0;
normalization: the original data are linearly transformed so that the result maps into [0, 1].
3. The parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion according to claim 2, wherein the step 2 of segmenting the raw EEG channel data preprocessed in step 1 in an overlap-cutting manner specifically comprises:
processing the raw time-series channel EEG data by overlap cutting so that the EEG data frames extracted within a motor imagery period partially overlap, defined by the formula
$$x_i = x_{i-1} + f - o \cdot f, \quad i \neq 0; \qquad x_0 = 0$$
where $x_i$ is the cutting start point of the $i$-th sample, $f$ is the frequency size, and $o \cdot f$ is the overlap size, $o$ being the cutting weight in the range 0 to 1;
segmenting the 14 EEG channels according to the data matrix $[[x_0, x_0+128],\,[x_1, x_1+128],\,[x_2, x_2+128],\,\ldots,\,[x_n, x_n+128]]$ and arranging the data in each time window so that the time series of the data is not corrupted.
4. The parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion according to claim 3, wherein the step 3 of performing a fast Fourier transform on each EEG channel obtained in step 2 to obtain the three frequency bands of theta, alpha and beta waves specifically comprises:
for each EEG channel, after preprocessing, applying a Fourier transform to each frame of data; letting $x \in \mathbb{C}^N$ be an EEG signal of length $N$, the fast Fourier transform is:
$$X(n) = \sum_{k=0}^{N-1} x(k)\, W_N^{nk}, \quad n = 0, 1, \ldots, N-1$$
where $n$ indexes the different frequencies and $W_N = e^{-j(2\pi/N)}$;
the inverse fast Fourier transform is:
$$x(k) = \frac{1}{N} \sum_{n=0}^{N-1} X(n)\, W_N^{-nk}$$
the real-valued discrete Fourier transform of length $N$ is obtained from a complex-valued fast Fourier transform of length $N/2$; letting $x \in \mathbb{R}^N$ and splitting $x$ into its even- and odd-indexed samples with length-$N/2$ transforms $E(n)$ and $O(n)$, the real-valued fast Fourier transform is:
$$X(n) = E(n) + W_N^{n}\, O(n), \quad n = 0, 1, \ldots, N/2 - 1$$
after the fast Fourier transform, the data matrix $x_n$ containing each frequency band is extracted according to the bands to which the theta, alpha and beta waves belong.
5. The parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion according to claim 4, wherein in the step 5 a 2D channel distribution map is generated from the acquired EEG channel position data and the sum of squares of the values of each frequency band is computed as:
$$\sum_{i=1}^{n} x_i^2$$
where $x_i$ is a frequency band value and $i$ ranges from 1 to $n$;
the sums of squares of the theta, alpha and beta values obtained in the previous steps are used as the R, G and B channel values of the image, and the 2D channel distribution map is interpolated with an interpolation algorithm to generate the 2D EEG feature distribution map.
6. The parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion according to claim 5, wherein in the step 6 the specific structure of the multiple convolutional neural network is as follows: the input layer is a 28 × 28 2D EEG feature map; the input layer is followed by convolution module 1, composed of two consecutively stacked convolutional layers, where convolutional layer 1 uses edge filling and convolutional layer 2 uses edge reduction; convolution module 1 is followed by a max-pooling layer; convolution module 2 is likewise composed of two consecutively stacked convolutional layers, where convolutional layer 3 uses edge filling and convolutional layer 4 uses edge reduction; convolution module 2 is followed by a max-pooling layer, and finally a fully connected layer is stacked;
the parameters of the multiple convolutional network are initialized and forward-propagation training is carried out; the network parameters are adjusted by back propagation based on the mean square error; when the error meets the accuracy requirement, the weights and biases are stored and network training ends; otherwise the weights and biases are iteratively adjusted until the error accuracy requirement is met.
7. The parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion according to claim 6, wherein in the step 7 the EEG time-series features are extracted with a temporal convolutional neural network whose specific structure is as follows: an input layer, followed by a stacked temporal convolutional layer, followed by a stacked fully connected layer;
the original input is 28 × 28; the input sequence is passed through a one-dimensional convolution module to obtain a T × M feature sequence, where T is the length of the time series and M is the number of one-dimensional convolution kernels; the one-dimensional convolution is applied to the input sequence as a dilated convolution, and the effective convolution kernel size is:
$$f_{k_d} = (d - 1)(f_k - 1) + f_k$$
where $f_k$ is the original convolution kernel size, $f_{k_d}$ is the kernel size after adding the dilated convolution, and $d$ is the dilation rate;
the output of the dilated convolution is passed element-wise through the ReLU activation function, computed as:
$$f(x) = \max(0, x)$$
where $f(x)$ is the output and $x$ is the input;
a residual module is set up: an identity skip connection is added to the dilated convolutions based on the residual, so that instead of the original transformation $H(x)$ the network learns the input subtracted from it, i.e. $F(x) = H(x) - x$.
8. The parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion according to claim 7, wherein the step 8 of fusing and classifying the spatial and time-series features with Softmax specifically comprises:
extracting the fully connected layer of the multiple convolutional neural network, which contains the spatial features, and the fully connected layer of the temporal convolutional network, which contains the time-series features, and fusing the spatial and temporal features by feature splicing, defined by the formula:
$$FC = [FC_1, FC_2]$$
where $FC$ is the new fully connected layer, $FC_1$ is the fully connected layer of the multiple convolutional neural network, and $FC_2$ is the fully connected layer of the temporal convolutional network; the new fully connected layer is used as the input of the classifier Softmax to realize classification.
CN201911241265.XA 2019-12-06 2019-12-06 Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion Active CN111012336B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911241265.XA CN111012336B (en) 2019-12-06 2019-12-06 Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911241265.XA CN111012336B (en) 2019-12-06 2019-12-06 Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion

Publications (2)

Publication Number Publication Date
CN111012336A true CN111012336A (en) 2020-04-17
CN111012336B CN111012336B (en) 2022-08-23

Family

ID=70204518

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911241265.XA Active CN111012336B (en) 2019-12-06 2019-12-06 Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion

Country Status (1)

Country Link
CN (1) CN111012336B (en)



Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101243974A (en) * 2008-03-28 2008-08-20 天津和德脑象图技术开发研究有限公司 Method and apparatus for generating brain phase image detection and analysis with electroencephalogram
US8885887B1 (en) * 2012-01-23 2014-11-11 Hrl Laboratories, Llc System for object detection and recognition in videos using stabilization
US9107595B1 (en) * 2014-09-29 2015-08-18 The United States Of America As Represented By The Secretary Of The Army Node excitation driving function measures for cerebral cortex network analysis of electroencephalograms
CN107844755A (en) * 2017-10-23 2018-03-27 重庆邮电大学 A kind of combination DAE and CNN EEG feature extraction and sorting technique
CN108042132A (en) * 2017-12-27 2018-05-18 南京邮电大学 Brain electrical feature extracting method based on DWT and EMD fusions CSP
CN107961007A (en) * 2018-01-05 2018-04-27 重庆邮电大学 A kind of electroencephalogramrecognition recognition method of combination convolutional neural networks and long memory network in short-term
CN110069958A (en) * 2018-01-22 2019-07-30 北京航空航天大学 A kind of EEG signals method for quickly identifying of dense depth convolutional neural networks
KR20190130808A (en) * 2018-05-15 2019-11-25 연세대학교 산학협력단 Emotion Classification Device and Method using Convergence of Features of EEG and Face
US20180357542A1 (en) * 2018-06-08 2018-12-13 University Of Electronic Science And Technology Of China 1D-CNN-Based Distributed Optical Fiber Sensing Signal Feature Learning and Classification Method
CN109190479A (en) * 2018-08-04 2019-01-11 台州学院 A kind of video sequence expression recognition method based on interacting depth study
CN109805898A (en) * 2019-03-22 2019-05-28 中国科学院重庆绿色智能技术研究院 Critical illness Mortality Prediction method based on attention mechanism timing convolutional network algorithm
CN110163180A (en) * 2019-05-29 2019-08-23 长春思帕德科技有限公司 Mental imagery eeg data classification method and system
CN110232341A (en) * 2019-05-30 2019-09-13 重庆邮电大学 Based on convolution-stacking noise reduction codes network semi-supervised learning image-recognizing method
CN110222643A (en) * 2019-06-06 2019-09-10 西安交通大学 A kind of Steady State Visual Evoked Potential Modulation recognition method based on convolutional neural networks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
TANG, XIANLUN: "Hidden-layer visible deep stacking network optimized by PSO for motor imagery EEG recognition", Neurocomputing *
ZHANG, JIUWEN: "A new approach for classification of epilepsy EEG signals based on Temporal Convolutional Neural Networks", 2018 11th International Symposium on Computational Intelligence and Design (ISCID), Vol. 2 *
BI, XIAOJUN: "Robust EEG feature learning based on the improved deep learning model C-NTM", Journal of Harbin Engineering University *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111461463B (en) * 2020-04-30 2023-11-24 南京工程学院 Short-term load prediction method, system and equipment based on TCN-BP
CN111461463A (en) * 2020-04-30 2020-07-28 南京工程学院 Short-term load prediction method, system and equipment based on TCN-BP
CN111523520A (en) * 2020-06-11 2020-08-11 齐鲁工业大学 Method for analyzing electroencephalogram signals of brain patients with motor imagery stroke by using cycleGAN
CN111803059A (en) * 2020-06-30 2020-10-23 武汉中旗生物医疗电子有限公司 Electrocardiosignal classification method and device based on time domain convolution network
CN111882036A (en) * 2020-07-22 2020-11-03 广州大学 Convolutional neural network training method, electroencephalogram signal identification method, device and medium
CN111882036B (en) * 2020-07-22 2023-10-31 广州大学 Convolutional neural network training method, electroencephalogram signal identification method, device and medium
CN112120694B (en) * 2020-08-19 2021-07-13 中国地质大学(武汉) Motor imagery electroencephalogram signal classification method based on neural network
CN112120694A (en) * 2020-08-19 2020-12-25 中国地质大学(武汉) Motor imagery electroencephalogram signal classification method based on neural network
CN112244878A (en) * 2020-08-31 2021-01-22 北京工业大学 Method for identifying key frequency band image sequence by using parallel multi-module CNN and LSTM
CN112244878B (en) * 2020-08-31 2023-08-04 北京工业大学 Method for identifying key frequency band image sequence by using parallel multi-module CNN and LSTM
CN112057047A (en) * 2020-09-11 2020-12-11 首都师范大学 Device for realizing motor imagery classification and hybrid network system construction method thereof
CN112070067A (en) * 2020-10-12 2020-12-11 乐普(北京)医疗器械股份有限公司 Scatter diagram classification method and device for photoplethysmograph signals
CN112070067B (en) * 2020-10-12 2023-11-21 乐普(北京)医疗器械股份有限公司 Scatter diagram classification method and device for photoplethysmograph signals
CN112381008A (en) * 2020-11-17 2021-02-19 天津大学 Electroencephalogram emotion recognition method based on parallel sequence channel mapping network
CN112381008B (en) * 2020-11-17 2022-04-29 天津大学 Electroencephalogram emotion recognition method based on parallel sequence channel mapping network
CN113011239A (en) * 2020-12-02 2021-06-22 杭州电子科技大学 Optimal narrow-band feature fusion-based motor imagery classification method
CN113011239B (en) * 2020-12-02 2024-02-09 杭州电子科技大学 Motor imagery classification method based on optimal narrow-band feature fusion
CN112507881A (en) * 2020-12-09 2021-03-16 山西三友和智慧信息技术股份有限公司 sEMG signal classification method and system based on time convolution neural network
CN112784892A (en) * 2021-01-14 2021-05-11 重庆兆琨智医科技有限公司 Electroencephalogram movement intention identification method and system
CN113693613A (en) * 2021-02-26 2021-11-26 腾讯科技(深圳)有限公司 Electroencephalogram signal classification method and device, computer equipment and storage medium
CN113057654B (en) * 2021-03-10 2022-05-20 重庆邮电大学 Memory load detection and extraction system and method based on frequency coupling neural network model
CN113057654A (en) * 2021-03-10 2021-07-02 重庆邮电大学 Memory load detection and extraction system and method based on frequency coupling neural network model
CN113057652A (en) * 2021-03-17 2021-07-02 西安电子科技大学 Brain load detection method based on electroencephalogram and deep learning
CN113143295A (en) * 2021-04-23 2021-07-23 河北师范大学 Equipment control method and terminal based on motor imagery electroencephalogram signals
CN113229828A (en) * 2021-04-26 2021-08-10 山东师范大学 Motor imagery electroencephalogram signal classification method and system
CN113128459A (en) * 2021-05-06 2021-07-16 昆明理工大学 Feature fusion method based on multi-level electroencephalogram signal expression
CN113261980A (en) * 2021-05-14 2021-08-17 清华大学 Large-scale visual classification method and device based on electroencephalogram combined feature learning
WO2023044612A1 (en) * 2021-09-22 2023-03-30 深圳先进技术研究院 Image classification method and apparatus
CN114745299A (en) * 2022-03-16 2022-07-12 南京工程学院 Non-invasive load monitoring method based on sequence delay reconstruction CSP convolutional neural network
CN114745299B (en) * 2022-03-16 2023-06-13 南京工程学院 Non-invasive load monitoring method based on sequence delay reconstruction CSP convolutional neural network

Also Published As

Publication number Publication date
CN111012336B (en) 2022-08-23

Similar Documents

Publication Publication Date Title
CN111012336B (en) Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion
Guo et al. A review of wavelet analysis and its applications: Challenges and opportunities
Xia et al. A novel improved deep convolutional neural network model for medical image fusion
CN110399857B (en) Electroencephalogram emotion recognition method based on graph convolution neural network
CN107844755B (en) Electroencephalogram characteristic extraction and classification method combining DAE and CNN
CN109711383B (en) Convolutional neural network motor imagery electroencephalogram signal identification method based on time-frequency domain
CN109784242A (en) EEG Noise Cancellation based on one-dimensional residual error convolutional neural networks
CN113065526B (en) Electroencephalogram signal classification method based on improved depth residual error grouping convolution network
CN109598222B (en) EEMD data enhancement-based wavelet neural network motor imagery electroencephalogram classification method
CN111967506A (en) Electroencephalogram signal classification method for optimizing BP neural network by artificial bee colony
Madhavi et al. Cardiac arrhythmia detection using dual-tree wavelet transform and convolutional neural network
Jinliang et al. EEG emotion recognition based on granger causality and capsnet neural network
CN113569997A (en) Emotion classification method and system based on graph convolution neural network
CN116340824A (en) Electromyographic signal action recognition method based on convolutional neural network
CN114781441B (en) EEG motor imagery classification method and multi-space convolution neural network model
CN113158964A (en) Sleep staging method based on residual learning and multi-granularity feature fusion
CN113052099B (en) SSVEP classification method based on convolutional neural network
CN113128384A (en) Brain-computer interface software key technical method of stroke rehabilitation system based on deep learning
Li et al. A novel motor imagery EEG recognition method based on deep learning
CN116919422A (en) Multi-feature emotion electroencephalogram recognition model establishment method and device based on graph convolution
CN113476056A (en) Motor imagery electroencephalogram signal classification method based on frequency domain graph convolution neural network
CN116421200A (en) Brain electricity emotion analysis method of multi-task mixed model based on parallel training
CN115795346A (en) Classification and identification method of human electroencephalogram signals
CN115221969A (en) Motor imagery electroencephalogram signal identification method based on EMD data enhancement and parallel SCN
CN115813409A (en) Ultra-low-delay moving image electroencephalogram decoding method

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant