CN114533083A - Motor imagery state identification method based on multi-fusion convolutional neural network - Google Patents
Motor imagery state identification method based on multi-fusion convolutional neural network
- Publication number
- CN114533083A (application CN202210079960.6A)
- Authority
- CN
- China
- Prior art keywords
- electroencephalogram
- time
- matrix
- domain
- frequency band
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- A—HUMAN NECESSITIES; A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE; A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/369—Electroencephalography [EEG]
- A61B5/377—Electroencephalography [EEG] using evoked responses
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7203—Signal processing for noise prevention, reduction or removal
- A61B5/7225—Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
Abstract
The invention relates to a motor imagery state identification method based on a multi-fusion convolutional neural network, characterized by comprising the following steps: acquiring user electroencephalogram data and performing data preprocessing; extracting an electroencephalogram time-frequency matrix and a specific-frequency-band electroencephalogram time-domain matrix from the preprocessed electroencephalogram data, and determining the electroencephalogram time-domain energy matrix of the specific frequency band from that time-domain matrix; and inputting the electroencephalogram time-frequency matrix and the specific-frequency-band electroencephalogram time-domain energy matrix into a trained convolutional-neural-network-based motor imagery state identification model to identify the motor imagery state. The input of the fully-connected layer in the motor imagery state recognition model consists of time-frequency-domain electroencephalogram features and specific-frequency-band electroencephalogram time-domain energy features, wherein the former are extracted from the electroencephalogram time-frequency matrix by a convolutional layer and a pooling layer, and the latter are extracted from the specific-frequency-band electroencephalogram time-domain energy matrix by the common spatial pattern (CSP) method.
Description
Technical Field
The invention relates to a motor imagery state identification method based on a multi-fusion convolutional neural network. The method is suitable for the field of brain-computer interaction.
Background
Motor imagery brain-computer interaction is a technical scheme for helping patients with limb movement disorders carry out rehabilitation training. Its main rehabilitation principle is to use brain-computer interface equipment to capture the electroencephalogram features formed by motor imagery, mainly the variation of the Mu rhythm (8-13 Hz) and beta rhythm (18-24 Hz) of the motor cortex. For example, when a person performs motor imagery of the left or right limb, these two rhythms increase on the same side as the imagined limb while decreasing on the opposite side, whereas when no imagery is performed the rhythms on both sides show no obvious change. By capturing a patient's electroencephalogram rhythm features during motor imagery, motion-perception feedback can be formed that stimulates damaged motor neurons to build new neural circuits, thereby improving the efficiency of motor-function rehabilitation training.
Detection of the motor imagery state has always been a key part of such rehabilitation schemes. To improve the accuracy of motor-state identification, researchers have proposed different approaches, including power spectrum analysis, the common spatial pattern (CSP) method and the sample entropy method, but each has its problems.
Power spectrum analysis computes the power spectrum of the Mu and beta rhythms over the sensorimotor cortex and compares it against a threshold to decide whether motor imagery has taken place. The method is simple to implement, but it requires a specific, pre-determined threshold and hence a large amount of prior data; it is not flexible and handles ambiguous data poorly. The common spatial pattern method is widely used for binary motor imagery classification: by diagonalizing covariance matrices it finds a set of optimal spatial filters whose projections maximize the variance difference between the two signal classes, yielding highly discriminative feature vectors. Its accuracy improves as the number of leads increases, but it is susceptible to noise interference. The sample entropy method has a stable algorithm and a small computational load, but it is only really suited to small-sample data.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in order to solve the existing problems, a motor imagery state identification method based on a multi-fusion convolutional neural network is provided.
The technical scheme adopted by the invention is as follows: a motor imagery state identification method based on a multi-fusion convolutional neural network is characterized in that:
acquiring user electroencephalogram data and performing data preprocessing;
extracting an electroencephalogram time-frequency matrix and an electroencephalogram time-domain matrix of a specific frequency band from electroencephalogram data subjected to data preprocessing, and determining an electroencephalogram time-domain energy matrix of the specific frequency band based on the electroencephalogram time-domain matrix of the specific frequency band;
inputting the electroencephalogram time-frequency matrix and the electroencephalogram time-domain energy matrix under a specific frequency band into a trained motor imagery state identification model based on a convolutional neural network, and identifying a motor imagery state;
the input of the full-connection layer in the motor imagery state recognition model consists of time-frequency domain electroencephalogram characteristics and electroencephalogram time-domain energy characteristics under a specific frequency band, wherein the time-frequency domain electroencephalogram characteristics are extracted by the electroencephalogram time-frequency matrix through a convolutional layer and a pooling layer, and the electroencephalogram time-domain energy characteristics under the specific frequency band are extracted by the electroencephalogram time-domain energy matrix under the specific frequency band through a co-space mode.
The data preprocessing comprises the following steps:
and performing low-pass filtering on the electroencephalogram data at 0Hz-30Hz to remove interference, and then performing time domain data enhancement.
The method for extracting the electroencephalogram time domain matrix of the specific frequency band from the electroencephalogram data subjected to data preprocessing and determining the electroencephalogram time domain energy matrix under the specific frequency band based on the electroencephalogram time domain matrix of the specific frequency band comprises the following steps:
obtaining the filtered data of the preprocessed electroencephalogram data in the Mu (8-13 Hz) and beta (18-24 Hz) bands through band-pass filters, to obtain 2 electroencephalogram time-domain matrices;
obtaining electroencephalogram time domain energy corresponding to the 2 electroencephalogram time domain matrixes to obtain 2 electroencephalogram time domain energy matrixes, and then combining the 2 electroencephalogram time domain energy matrixes according to a row matrix to obtain an electroencephalogram time domain energy matrix;
the row number in the electroencephalogram time-domain energy matrix represents the channels at different frequencies, the column number represents the time point, and each value in the matrix represents the instantaneous energy of single-channel data in a certain frequency band at a certain time.
The electroencephalogram time-domain energy corresponding to the 2 electroencephalogram time-domain matrices is obtained with the following formula:

Eij = Xij²  (1)

where Xij, the value in row i and column j of the electroencephalogram time-domain matrix, represents the single-channel electroencephalogram datum in a certain frequency band at a certain time.
The extraction of the electroencephalogram time-frequency matrix from the electroencephalogram data subjected to data preprocessing comprises the following steps:
and solving a time-frequency matrix of each channel, wherein a row number in each matrix represents a frequency point, a column number represents a time point, the time-frequency matrix represents energy values under a single channel, each frequency and each time, and the time-frequency matrix of each channel forms an n-dimensional matrix.
A motor imagery state recognition device based on multi-fusion convolutional neural network is characterized in that:
the data acquisition and preprocessing module is used for acquiring user electroencephalogram data and carrying out data preprocessing;
the parameter extraction module is used for extracting an electroencephalogram time-frequency matrix and an electroencephalogram time-domain matrix of a specific frequency band from the electroencephalogram data subjected to data preprocessing, and determining an electroencephalogram time-domain energy matrix of the specific frequency band based on the electroencephalogram time-domain matrix of the specific frequency band;
the model identification module is used for inputting the electroencephalogram time-frequency matrix and the electroencephalogram time-domain energy matrix under the specific frequency band into a trained motor imagery state identification model based on the convolutional neural network to identify a motor imagery state;
the input of the full-connection layer in the motor imagery state recognition model consists of time-frequency domain electroencephalogram characteristics and electroencephalogram time-domain energy characteristics under a specific frequency band, wherein the time-frequency domain electroencephalogram characteristics are extracted by the electroencephalogram time-frequency matrix through a convolutional layer and a pooling layer, and the electroencephalogram time-domain energy characteristics under the specific frequency band are extracted by the electroencephalogram time-domain energy matrix under the specific frequency band through a co-space mode.
A storage medium having stored thereon a computer program executable by a processor, characterized in that: the computer program when executed implements the steps of the multi-fusion convolutional neural network-based motor imagery state recognition method.
A motor imagery state recognition device, comprising:
the electroencephalogram acquisition device is used for acquiring electroencephalogram data of a user;
a data processing device having a memory and a processor, the memory having stored thereon a computer program executable by the processor, the computer program when executed implementing the steps of the multi-fusion convolutional neural network-based motor imagery state recognition method.
The invention has the beneficial effects that: the motor imagery state recognition model is used for motor imagery state recognition, and the input of its fully-connected layer is composed of the time-frequency-domain electroencephalogram features extracted by the convolutional layers together with the specific-frequency-band time-domain energy features output by the common spatial pattern method. The former provides the change of energy over time in every frequency band during the motor imagery state; the latter chiefly provides the recognized electroencephalogram energy features in the specific frequency bands. Training the classification layer on both parts of this information, first, makes the referenced frequency-domain feature information comprehensive and, second, highlights the features that deserve emphasis, so that noise from other frequency bands does not affect the classification result; a classification layer trained on the combined features therefore helps improve classification accuracy.
Drawings
Fig. 1 is a schematic structural diagram of a motor imagery state recognition model in an embodiment.
FIG. 2 is a schematic diagram of model training data acquisition in an embodiment.
FIG. 3 is a flowchart of model training in the embodiment.
Detailed Description
The embodiment is a motor imagery state identification method based on a multi-fusion convolutional neural network, which specifically comprises the following steps:
and S1, acquiring user electroencephalogram data and preprocessing the data.
Collect electroencephalogram data of n leads and m1 points; after collection an electroencephalogram matrix of shape n × m1 is obtained, with a sampling rate of 250 Hz.
Perform 0-30 Hz low-pass filtering on the raw electroencephalogram data to remove interference such as power-frequency noise and high-frequency myoelectricity, then perform time-domain data enhancement: slide a time window of length m2 at step size step, select suitable windows for superposition averaging, and store the result as n × m2 filtered data.
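This preprocessing step can be sketched in Python with SciPy. Only the 250 Hz sampling rate and the 0-30 Hz cut-off come from the text; the filter order (4) and the window/step values (500 samples, step 125) are illustrative assumptions standing in for m2 and step:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate (Hz), as stated in the text

def lowpass_30hz(eeg):
    """Zero-phase 0-30 Hz low-pass filter; eeg is (n_channels, n_samples)."""
    b, a = butter(4, 30.0, btype="low", fs=FS)
    return filtfilt(b, a, eeg, axis=-1)

def window_average(eeg, win=500, step=125):
    """Time-domain enhancement: slide a window of length win with the given
    step and average the overlapping segments (win/step are assumed values)."""
    segs = [eeg[:, s:s + win] for s in range(0, eeg.shape[1] - win + 1, step)]
    return np.mean(segs, axis=0)  # n_channels x win

eeg = np.random.default_rng(1).standard_normal((8, 2000))  # 8 leads, raw data
filt = window_average(lowpass_30hz(eeg))                   # n x m2 filtered data
```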
S2, extracting an electroencephalogram time-frequency matrix and an electroencephalogram time-domain matrix of a specific frequency band from the electroencephalogram data subjected to data preprocessing, and determining an electroencephalogram time-domain energy matrix of the specific frequency band based on the electroencephalogram time-domain matrix of the specific frequency band.
1. Electroencephalogram time-domain energy matrix in specific frequency bands
a. Design two band-pass filters to obtain the filtered data in the specific frequency bands Mu (8-13 Hz) and beta (18-24 Hz), finally obtaining 2 electroencephalogram time-domain matrices whose data format is still n × m2.
b. Calculate the time-domain electroencephalogram energy from the electroencephalogram time-domain matrices using formula (1), Eij = Xij², obtaining 2 electroencephalogram time-domain energy matrices of n × m2;
then combining the 2 matrixes according to rows to obtain a 2n × m2 electroencephalogram time domain energy matrix; the row number represents the channel at different frequencies, and the column number represents the time point, so the value in the matrix represents the instantaneous energy of a single channel at a single time in each frequency band.
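Steps a and b can be sketched as follows, assuming formula (1) is the element-wise squared amplitude Eij = Xij²; the Butterworth filter order is an arbitrary choice:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate (Hz)

def bandpass(eeg, lo, hi):
    """Zero-phase band-pass filter; eeg is (n_channels, n_samples)."""
    b, a = butter(4, [lo, hi], btype="band", fs=FS)
    return filtfilt(b, a, eeg, axis=-1)

def energy_matrix(eeg):
    """Instantaneous time-domain energy, assumed to be E_ij = X_ij**2."""
    return eeg ** 2

eeg = np.random.default_rng(2).standard_normal((8, 500))  # n x m2 filtered data
mu = energy_matrix(bandpass(eeg, 8, 13))    # Mu rhythm (8-13 Hz)
beta = energy_matrix(bandpass(eeg, 18, 24))  # beta rhythm (18-24 Hz)
E = np.vstack([mu, beta])                    # combined by rows: 2n x m2
```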
2. Electroencephalogram time-frequency matrix
In order to observe the energy distribution of each electroencephalogram channel at each frequency more precisely, this embodiment computes a (0-30 Hz) time-frequency matrix for each of the n electroencephalogram channels; the format of each matrix is as follows:
in this example, the frequency interval is 2Hz, i.e. the number of lines is (0-2Hz, 2-4Hz, … …, 28-30Hz), and thus there are 15 lines.
In the above matrix, the row number represents a frequency point and the column number a time point, so the matrix holds the energy values of a single channel at each frequency and each time; the per-channel time-frequency matrices are combined to form an n-dimensional matrix.
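One way to build such a 15-row per-channel matrix is a short-time Fourier transform whose frequency resolution equals the 2 Hz bin width; the window length nperseg=125 is an assumption chosen so that 250 Hz / 125 = 2 Hz, not a value from the text:

```python
import numpy as np
from scipy.signal import stft

FS = 250  # sampling rate (Hz)

def timefreq_15bins(chan):
    """Per-channel time-frequency matrix: 15 rows of 2 Hz bins (0-30 Hz),
    columns are STFT time frames (window length is an assumed choice)."""
    f, t, Z = stft(chan, fs=FS, nperseg=125)
    power = np.abs(Z) ** 2
    rows = [power[(f >= lo) & (f < lo + 2)].sum(axis=0)
            for lo in range(0, 30, 2)]
    return np.stack(rows)  # 15 x n_frames

eeg = np.random.default_rng(3).standard_normal((8, 1000))
tf = np.stack([timefreq_15bins(c) for c in eeg])  # n x 15 x n_frames
```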
And S3, inputting the electroencephalogram time-frequency matrix and the electroencephalogram time-domain energy matrix under the specific frequency band into the trained motor imagery state recognition model based on the convolutional neural network, and recognizing the motor imagery state.
As shown in fig. 1, the motor imagery state recognition model in this embodiment comprises a convolutional neural network, a fully-connected layer and a Softmax classifier. The input to the convolutional neural network is the electroencephalogram time-frequency matrix (i.e. the per-channel time-frequency matrices combined into an n-dimensional matrix), and it outputs time-frequency-domain electroencephalogram features to the fully-connected layer; the specific-frequency-band time-domain energy matrix passes through the common spatial pattern method, which extracts the specific-frequency-band time-domain energy features and feeds them to the fully-connected layer; the fully-connected layer integrates the two sets of features and outputs them to the Softmax classifier, which performs the classification.
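The feature fusion at the fully-connected layer can be illustrated with a toy NumPy forward pass. The weights here are untrained random values, and the kernel size, the CSP feature length of 4 and the three output classes (left/right/rest) are illustrative assumptions:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution (stride 1, no padding)."""
    n, m = x.shape
    kn, km = k.shape
    return np.array([[np.sum(x[i:i + kn, j:j + km] * k)
                      for j in range(m - km + 1)]
                     for i in range(n - kn + 1)])

def max_pool(x, p=2):
    """p x p max pooling with stride p."""
    n, m = x.shape
    return np.array([[x[i:i + p, j:j + p].max()
                      for j in range(0, m - p + 1, p)]
                     for i in range(0, n - p + 1, p)])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
tf_matrix = rng.standard_normal((15, 100))  # 15 frequency bins x 100 time points
csp_feat = rng.standard_normal(4)           # hypothetical CSP feature vector

# CNN branch: convolution + ReLU + pooling, then flatten
h = max_pool(np.maximum(conv2d(tf_matrix, rng.standard_normal((3, 3))), 0))

# Fully-connected input = CNN features concatenated with CSP features
fused = np.concatenate([h.ravel(), csp_feat])
W = rng.standard_normal((3, fused.size)) * 0.01
probs = softmax(W @ fused)  # class probabilities for left / right / rest
```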
The convolutional neural network in this example is constructed as follows. The aim is to compress the large input matrix layer by layer into 1 small matrix that accurately describes it, and to train a fully-connected network for classification:
a. building convolutional layers
If the input layer data format is n × n2 × m2 and the single-layer convolution kernel format is n3 × m3, with weight w1, bias b1 and kernel (activation) function f, the output is calculated according to formula (3):

output1 = f(w1·x1 + w1·x2 + ...... + w1·xn + b1) (3)
The convolution kernel slides with step length s1, producing 1 result via formula (3) per slide; these results form a matrix of format n4 × m4 as the convolutional layer output. The relation between the output format, the input data format, the kernel format, the step length and the padding parameter p1 is given by formula (4):

n4 × m4 = (((n2 + 2p1 - n3)/s1) + 1) × (((m2 + 2p1 - m3)/s1) + 1) (4)
b. Building pooling layers
The function of the pooling layer is to down-sample the convolutional layer output according to formula (5), obtaining a lower-dimensional sample. With a pooling window of format n5 × m5, each output value is the maximum over the n5 × m5 elements in the window:

output2 = max(x1, x2, ..., x(n5·m5)) (5)
The pooling layer slides over its input in the same way as the convolutional layer; with sliding step s2 and padding parameter p2, its output dimension n6 × m6 is calculated by formula (6):

n6 × m6 = (((n4 + 2p2 - n5)/s2) + 1) × (((m4 + 2p2 - m5)/s2) + 1) (6)
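Formulas (4) and (6) reduce to one helper applied per axis, written here in the conventional form with padding added to the input, (n + 2p − k)/s + 1; the example sizes (a 15 × 100 input, 3 × 3 kernel, 2 × 2 pooling) are illustrative:

```python
def out_dim(n_in, k, s, p):
    """Output size along one axis for a convolution or pooling window of
    size k, stride s and padding p: (n_in + 2*p - k) // s + 1."""
    return (n_in + 2 * p - k) // s + 1

# 15 x 100 time-frequency input, 3x3 kernel, stride 1, no padding -> 13 x 98
conv_rows = out_dim(15, 3, 1, 0)
conv_cols = out_dim(100, 3, 1, 0)

# 2x2 max pooling with stride 2 on the convolution output -> 6 x 49
pool_rows = out_dim(conv_rows, 2, 2, 0)
pool_cols = out_dim(conv_cols, 2, 2, 0)
```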
The fully-connected layer is trained on the feature vector output by the convolutional and pooling layers combined with the feature vector output by the common spatial pattern method; the model is then saved and used for classification.
The training of the motor imagery state recognition model in the embodiment comprises the following steps:
A. Wear the 8-lead helmet and connect the acquisition box to collect data; with the ear-lobe electrodes A1 and A2 as reference, select the three leads C3, C4 and Cz from the collected data and write them to the data set.
B. When training starts the screen goes black for 10 s, during which the subject rests with eyes closed; then left- and right-hand imagery begins as indicated in fig. 2. A dot first appears on screen, signalling the subject to get ready; this lasts 1 s. Then a left arrow, a right arrow or a black screen appears for 6 s: for a left arrow the subject focuses on imagining left-hand movement, for a right arrow on imagining right-hand movement, and for a black screen no imagery is performed.
C. A total of 2000 trials were collected (acquisition may run over many consecutive days). Data preprocessing included outlier removal, 0.5-30 Hz filtering, data normalization, and data segmentation and labelling (left-hand imagery labelled 0, right-hand imagery 1, no imagery 2); the trials were then split into 80% training set and 20% test set.
D. Two kinds of features are extracted from each single-trial datum: 1. the time-frequency matrix is computed via FFT; 2. the trial is filtered to the Mu rhythm (8-13 Hz) and beta rhythm (18-24 Hz) and the time-domain energy matrix is computed with formula (1).
E. Feed the time-frequency matrix computed in step D into the convolutional layers of the model, and extract the feature vector of the computed time-domain energy matrix with the common spatial pattern algorithm.
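The common spatial pattern step can be sketched as the standard two-class CSP via a generalized eigendecomposition with log-variance features; the trial counts, the 8 channels and the choice of 2 filter pairs are illustrative assumptions (the patent's third "rest" class would need a multi-class CSP extension):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Two-class CSP: solve the generalized eigenproblem
    Ca w = lambda (Ca + Cb) w and keep the filters at both extremes."""
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)          # ascending eigenvalues
    pick = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]
    return vecs[:, pick].T                   # 2*n_pairs x channels

def csp_features(W, x):
    """Log-variance of the spatially filtered trial."""
    z = W @ x
    v = z.var(axis=1)
    return np.log(v / v.sum())

rng = np.random.default_rng(4)
A = [rng.standard_normal((8, 500)) for _ in range(10)]  # class-A trials
B = [rng.standard_normal((8, 500)) for _ in range(10)]  # class-B trials
W = csp_filters(A, B)
feat = csp_features(W, A[0])  # feature vector for the fully-connected layer
```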
F. Pass the two sets of features computed in step E into the fully-connected layer and train; adjust the neuron weights of the fully-connected and convolutional layers with the back-propagation algorithm, compute the output error, and save the model when the error falls below a threshold. The learning rate can be tuned in steps of 0.0001 during training; the model performing best on the test set is selected. The training flow is shown in fig. 3.
G. After the model is saved, data of each trial can be collected online, and preprocessing, feature extraction and model invocation are performed according to the above flow.
The embodiment also provides a motor imagery state recognition device based on the multi-fusion convolutional neural network, which comprises a data acquisition and preprocessing module, a parameter extraction module and a model recognition module.
In the embodiment, the data acquisition and preprocessing module is used for acquiring user electroencephalogram data and carrying out data preprocessing; the parameter extraction module is used for extracting an electroencephalogram time-frequency matrix and an electroencephalogram time-domain matrix of a specific frequency band from electroencephalogram data subjected to data preprocessing, and determining an electroencephalogram time-domain energy matrix of the specific frequency band based on the electroencephalogram time-domain matrix of the specific frequency band; the model identification module is used for inputting the electroencephalogram time-frequency matrix and the electroencephalogram time-domain energy matrix under the specific frequency band into the trained motor imagery state identification model based on the convolutional neural network, and identifying the motor imagery state.
In this embodiment, the input of the fully-connected layer in the motor imagery state recognition model consists of time-frequency-domain electroencephalogram features and specific-frequency-band electroencephalogram time-domain energy features, wherein the former are extracted from the electroencephalogram time-frequency matrix by a convolutional layer and a pooling layer, and the latter are extracted from the specific-frequency-band electroencephalogram time-domain energy matrix by the common spatial pattern method.
The present embodiment also provides a storage medium having stored thereon a computer program executable by a processor, the computer program when executed implementing the steps of the multi-fusion convolutional neural network-based motor imagery state recognition method.
The embodiment also provides motor imagery state recognition equipment which comprises an electroencephalogram acquisition device and a data processing device, wherein the electroencephalogram acquisition device is used for acquiring electroencephalogram data of a user; the data processing device is provided with a memory and a processor, wherein the memory is stored with a computer program which can be executed by the processor, and the computer program is executed to realize the steps of the motor imagery state identification method based on the multi-fusion convolutional neural network.
Claims (8)
1. A motor imagery state identification method based on a multi-fusion convolutional neural network is characterized in that:
acquiring user electroencephalogram data and performing data preprocessing;
extracting an electroencephalogram time-frequency matrix and an electroencephalogram time-domain matrix of a specific frequency band from electroencephalogram data subjected to data preprocessing, and determining an electroencephalogram time-domain energy matrix of the specific frequency band based on the electroencephalogram time-domain matrix of the specific frequency band;
inputting the electroencephalogram time-frequency matrix and the electroencephalogram time-domain energy matrix under a specific frequency band into a trained motor imagery state identification model based on a convolutional neural network, and identifying a motor imagery state;
the input of the fully-connected layer in the motor imagery state recognition model consists of time-frequency-domain electroencephalogram features and specific-frequency-band electroencephalogram time-domain energy features, wherein the time-frequency-domain features are extracted from the electroencephalogram time-frequency matrix by a convolutional layer and a pooling layer, and the specific-frequency-band time-domain energy features are extracted from the specific-frequency-band electroencephalogram time-domain energy matrix by the common spatial pattern method.
2. The motor imagery state identification method based on a multi-fusion convolutional neural network according to claim 1, wherein the data preprocessing comprises:
performing 0 Hz-30 Hz low-pass filtering on the electroencephalogram data to remove interference, and then performing time-domain data enhancement.
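The preprocessing step above can be sketched as follows. The claim specifies only the 0-30 Hz pass band, so the Butterworth design, filter order, 250 Hz sampling rate, and zero-phase `filtfilt` call are illustrative assumptions; the time-domain data enhancement step is not shown.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_30hz(eeg, fs=250, order=4):
    """Zero-phase low-pass filter at 30 Hz; eeg: (n_channels, n_samples)."""
    b, a = butter(order, 30 / (fs / 2), btype="low")
    return filtfilt(b, a, eeg, axis=-1)

# Hypothetical recording: 8 channels, 4 s at 250 Hz
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 1000))
clean = lowpass_30hz(eeg)
```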
3. The motor imagery state identification method based on a multi-fusion convolutional neural network according to claim 1, wherein the extracting a specific-frequency-band electroencephalogram time-domain matrix from the electroencephalogram data subjected to data preprocessing, and determining a specific-frequency-band electroencephalogram time-domain energy matrix based on the specific-frequency-band electroencephalogram time-domain matrix, comprises:
filtering the preprocessed electroencephalogram data with band-pass filters in the mu (8 Hz-13 Hz) and beta (18 Hz-24 Hz) bands to obtain 2 electroencephalogram time-domain matrices;
computing the electroencephalogram time-domain energy corresponding to the 2 electroencephalogram time-domain matrices to obtain 2 electroencephalogram time-domain energy matrices, and then stacking the 2 electroencephalogram time-domain energy matrices by rows to obtain the specific-frequency-band electroencephalogram time-domain energy matrix;
wherein the row index of the electroencephalogram time-domain energy matrix indicates channels in different frequency bands, the column index indicates time points, and each value in the matrix represents the instantaneous energy of single-channel data in a certain frequency band at a certain time.
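The band-pass-and-stack construction above can be sketched as follows. The patent's energy formula is not reproduced in this text, so squared instantaneous amplitude is used here as a common stand-in; the Butterworth design, filter order, and 250 Hz sampling rate are likewise assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, lo, hi, fs=250, order=4):
    """Zero-phase band-pass filter; eeg: (n_channels, n_samples)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def band_energy_matrix(eeg, fs=250):
    """Row-stack per-band energy: (n_channels, T) -> (2 * n_channels, T).
    Rows = channels per band (mu then beta), columns = time points."""
    mu = bandpass(eeg, 8, 13, fs)     # mu rhythm, 8-13 Hz
    beta = bandpass(eeg, 18, 24, fs)  # beta rhythm, 18-24 Hz
    # Assumed energy definition: squared amplitude per sample
    return np.vstack([mu ** 2, beta ** 2])

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 1000))  # hypothetical 8-channel recording
E = band_energy_matrix(eeg)
```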
4. The motor imagery state identification method based on a multi-fusion convolutional neural network according to claim 3, wherein the electroencephalogram time-domain energy corresponding to the 2 electroencephalogram time-domain matrices is calculated by the following formula:
in the formula, X_ij, the value in the i-th row and j-th column of the electroencephalogram time-domain matrix, represents the electroencephalogram data of a single channel in a certain frequency band at a certain time.
5. The motor imagery state identification method based on a multi-fusion convolutional neural network according to claim 1, wherein the extracting an electroencephalogram time-frequency matrix from the electroencephalogram data subjected to data preprocessing comprises:
computing the time-frequency matrix of each channel, wherein the row index in each matrix indicates frequency points and the column index indicates time points; each value in a time-frequency matrix represents the energy of a single channel at a given frequency and time, and the time-frequency matrices of all channels form an n-dimensional matrix.
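The claim does not name the time-frequency transform, so the sketch below uses STFT power as one common choice; the window length, sampling rate, and input shapes are illustrative assumptions. Per-channel matrices (rows = frequency, columns = time) are stacked into one array, matching the "time-frequency matrix of each channel forms an n-dimensional matrix" description.

```python
import numpy as np
from scipy.signal import stft

def time_frequency_stack(eeg, fs=250, nperseg=128):
    """Per-channel STFT power, stacked to (n_channels, n_freqs, n_times)."""
    mats = []
    for ch in eeg:
        f, t, Z = stft(ch, fs=fs, nperseg=nperseg)
        mats.append(np.abs(Z) ** 2)  # rows = frequency points, cols = time points
    return f, t, np.stack(mats)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 1000))  # hypothetical 8-channel recording
f, t, tf = time_frequency_stack(eeg)
```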
6. A motor imagery state recognition device based on a multi-fusion convolutional neural network, characterized by comprising:
a data acquisition and preprocessing module, configured to acquire user electroencephalogram data and perform data preprocessing;
a parameter extraction module, configured to extract an electroencephalogram time-frequency matrix and a specific-frequency-band electroencephalogram time-domain matrix from the electroencephalogram data subjected to data preprocessing, and to determine a specific-frequency-band electroencephalogram time-domain energy matrix based on the specific-frequency-band electroencephalogram time-domain matrix;
a model identification module, configured to input the electroencephalogram time-frequency matrix and the specific-frequency-band electroencephalogram time-domain energy matrix into a trained motor imagery state identification model based on a convolutional neural network to identify a motor imagery state;
wherein the input of the fully connected layer in the motor imagery state identification model consists of time-frequency-domain electroencephalogram features and specific-frequency-band electroencephalogram time-domain energy features; the time-frequency-domain electroencephalogram features are extracted from the electroencephalogram time-frequency matrix through convolutional and pooling layers, and the specific-frequency-band electroencephalogram time-domain energy features are extracted from the specific-frequency-band electroencephalogram time-domain energy matrix through a common spatial pattern (CSP).
7. A storage medium on which a computer program executable by a processor is stored, characterized in that: when executed, the computer program implements the steps of the motor imagery state identification method based on the multi-fusion convolutional neural network according to any one of claims 1 to 5.
8. A motor imagery state recognition device, comprising:
the electroencephalogram acquisition device is used for acquiring electroencephalogram data of a user;
a data processing device having a memory and a processor, the memory having stored thereon a computer program executable by the processor, the computer program when executed implementing the steps of the method for identifying motor imagery state based on a multi-fusion convolutional neural network of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210079960.6A CN114533083B (en) | 2022-01-24 | 2022-01-24 | Motor imagery state identification method based on multi-fusion convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114533083A true CN114533083A (en) | 2022-05-27 |
CN114533083B CN114533083B (en) | 2023-12-01 |
Family
ID=81672572
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210079960.6A Active CN114533083B (en) | 2022-01-24 | 2022-01-24 | Motor imagery state identification method based on multi-fusion convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114533083B (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9116835B1 (en) * | 2014-09-29 | 2015-08-25 | The United States Of America As Represented By The Secretary Of The Army | Method and apparatus for estimating cerebral cortical source activations from electroencephalograms |
CN105809124A (en) * | 2016-03-06 | 2016-07-27 | 北京工业大学 | DWT- and Parametric t-SNE-based characteristic extracting method of motor imagery EEG(Electroencephalogram) signals |
CN106502410A (en) * | 2016-10-27 | 2017-03-15 | 天津大学 | Improve the transcranial electrical stimulation device of Mental imagery ability and method in brain-computer interface |
US20180089531A1 (en) * | 2015-06-03 | 2018-03-29 | Innereye Ltd. | Image classification by brain computer interface |
CN110163180A (en) * | 2019-05-29 | 2019-08-23 | 长春思帕德科技有限公司 | Mental imagery eeg data classification method and system |
CN110765920A (en) * | 2019-10-18 | 2020-02-07 | 西安电子科技大学 | Motor imagery classification method based on convolutional neural network |
CN111110230A (en) * | 2020-01-09 | 2020-05-08 | 燕山大学 | Motor imagery electroencephalogram feature enhancement method and system |
CN111950455A (en) * | 2020-08-12 | 2020-11-17 | 重庆邮电大学 | Motion imagery electroencephalogram characteristic identification method based on LFFCNN-GRU algorithm model |
CN112528834A (en) * | 2020-12-08 | 2021-03-19 | 杭州电子科技大学 | Sub-band target alignment common space mode electroencephalogram signal cross-subject classification method |
CN112741637A (en) * | 2020-12-23 | 2021-05-04 | 杭州国辰迈联机器人科技有限公司 | P300 electroencephalogram signal extraction method, cognitive rehabilitation training method and system |
CN113011239A (en) * | 2020-12-02 | 2021-06-22 | 杭州电子科技大学 | Optimal narrow-band feature fusion-based motor imagery classification method |
CN113558644A (en) * | 2021-07-20 | 2021-10-29 | 陕西科技大学 | Emotion classification method, medium and equipment for 3D matrix and multidimensional convolution network |
CN113576495A (en) * | 2021-07-19 | 2021-11-02 | 浙江迈联医疗科技有限公司 | Motor imagery evaluation method combined with EEG data quality |
CN113780134A (en) * | 2021-08-31 | 2021-12-10 | 昆明理工大学 | Motor imagery electroencephalogram decoding method based on ShuffleNet V2 network |
US20220054071A1 (en) * | 2019-09-06 | 2022-02-24 | Tencent Technology (Shenzhen) Company Limited | Motor imagery electroencephalogram signal processing method, device, and storage medium |
Non-Patent Citations (2)
Title |
---|
MIAO M, et al.: "Spatial-frequency feature learning and classification of motor imagery EEG based on deep convolution neural network", Computational and Mathematical Methods in Medicine * |
LU ZHENYU, et al.: "Research on classification of motor imagery EEG signals based on multi-feature fusion", Modern Computer * |
Also Published As
Publication number | Publication date |
---|---|
CN114533083B (en) | 2023-12-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111012336B (en) | Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion | |
CN112120694B (en) | Motor imagery electroencephalogram signal classification method based on neural network | |
CN110353702A (en) | A kind of emotion identification method and system based on shallow-layer convolutional neural networks | |
CN113158793B (en) | Multi-class motor imagery electroencephalogram signal identification method based on multi-feature fusion | |
CN113065526B (en) | Electroencephalogram signal classification method based on improved depth residual error grouping convolution network | |
CN104771163A (en) | Electroencephalogram feature extraction method based on CSP and R-CSP algorithms | |
CN112633195B (en) | Myocardial infarction recognition and classification method based on frequency domain features and deep learning | |
CN113128552B (en) | Electroencephalogram emotion recognition method based on depth separable causal graph convolution network | |
CN112488002B (en) | Emotion recognition method and system based on N170 | |
CN112515685A (en) | Multi-channel electroencephalogram signal channel selection method based on time-frequency co-fusion | |
CN112541415B (en) | Brain muscle function network motion fatigue detection method based on symbol transfer entropy and graph theory | |
CN115795346A (en) | Classification and identification method of human electroencephalogram signals | |
CN115238796A (en) | Motor imagery electroencephalogram signal classification method based on parallel DAMSCN-LSTM | |
Geng et al. | A fusion algorithm for EEG signal processing based on motor imagery brain-computer interface | |
CN113128384B (en) | Brain-computer interface software key technical method of cerebral apoplexy rehabilitation system based on deep learning | |
CN116919422A (en) | Multi-feature emotion electroencephalogram recognition model establishment method and device based on graph convolution | |
CN113128353A (en) | Emotion sensing method and system for natural human-computer interaction | |
CN110321856B (en) | Time-frequency multi-scale divergence CSP brain-computer interface method and device | |
CN116236209A (en) | Method for recognizing motor imagery electroencephalogram characteristics of dynamics change under single-side upper limb motion state | |
CN116421200A (en) | Brain electricity emotion analysis method of multi-task mixed model based on parallel training | |
CN114533083A (en) | Motor imagery state identification method based on multi-fusion convolutional neural network | |
CN113662561B (en) | Electroencephalogram feature extraction method and device of subband cascade co-space mode | |
CN113017648B (en) | Electroencephalogram signal identification method and system | |
CN114847933A (en) | Myoelectric signal gesture recognition method and system based on full convolution residual error network | |
CN111990992A (en) | Electroencephalogram-based autonomous movement intention identification method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||