CN109726751A - Method for recognizing EEG imaging images based on a deep convolutional neural network - Google Patents


Info

Publication number
CN109726751A
CN109726751A (application CN201811574691.0A)
Authority
CN
China
Prior art keywords
convolution
eeg
image
serial number
lead
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811574691.0A
Other languages
Chinese (zh)
Other versions
CN109726751B (en)
Inventor
李明爱
韩健夫
杨金福
孙炎珺
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201811574691.0A priority Critical patent/CN109726751B/en
Publication of CN109726751A publication Critical patent/CN109726751A/en
Application granted granted Critical
Publication of CN109726751B publication Critical patent/CN109726751B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a method for recognizing EEG imaging images based on a deep convolutional neural network (DCNN). The collected motor imagery EEG (MI-EEG) signals are first preprocessed to remove the baseline. Each lead signal is divided into a number of time windows, and a fast Fourier transform (FFT) is applied to each window of the MI-EEG signal; the band-limited spectra are then transformed back by the inverse fast Fourier transform, and the corresponding time-domain power values are computed. The power values obtained for all windows are averaged to give the time-domain power feature. The three extracted band-power features are interpolated into a data matrix, producing a pseudo-RGB image of the MI-EEG signal. The DCNN model performs dimensionality reduction with a convolutional layer, rather than a max-pooling layer, after each convolution section. The trained DCNN model is evaluated on a test set to complete the classification test. The advantage of the MI-EEG image in feature representation, combined with the strong fitting capability of the 30-layer DCNN, is of great significance for improving the feature representation of MI-EEG signals and the classification accuracy.

Description

Method for recognizing EEG imaging images based on a deep convolutional neural network
Technical field
The invention belongs to the field of feature extraction and classification of motor imagery EEG signals (MI-EEG) based on deep convolutional neural networks (Deep Convolutional Neural Network, DCNN). It specifically relates to generating pseudo-RGB three-channel feature maps of MI-EEG through feature extraction based on the fast Fourier transform (Fast Fourier Transform, FFT) and a 2D lead-coordinate interpolation imaging method, and performing feature dimensionality reduction and classification with a DCNN.
Background art
A deep convolutional neural network (DCNN) is a feedforward neural network that combines local receptive fields, convolution-kernel weight sharing, nonlinear neuron activation, and dimensionality reduction by convolution, and is widely used in image recognition. Such networks have great advantages in extracting features from multi-dimensional data, exhibit good depth characteristics, and possess strong model fitting and generalization abilities. A DCNN with a multi-layer convolutional structure performs convolution operations on the input image, which reduces the image dimensionality and compresses the pixels, and finally outputs a probability value for each image class.
Because each convolutional layer of a DCNN contains several neurons, features of multiple dimensions of the input data can be extracted simultaneously. "Activating" the outputs through nonlinear activation functions increases the nonlinear combinations of the primitive features, so that the main features embodying between-class differences are fully expressed, while secondary features contributing little to those differences are suppressed. In addition, the DCNN model structure is highly adaptable, with strong fitting performance and strong generalization to different data, giving it a unique advantage in processing MI-EEG signals with multi-dimensional features.
The original MI-EEG signal is a discrete time-domain sequence; it must be converted into a planar image or data matrix before it can be recognized by a DCNN. Imaging methods for MI-EEG signals fall broadly into two classes. The first divides the signal of every lead into several time windows, extracts frequency-domain features within each window, arranges all frequency-domain features sequentially along the x-axis, and stacks the feature information of the leads along the y-axis. The second uses the common spatial pattern (Common Spatial Pattern, CSP) method to extract features from the MI-EEG signal over several frequency bands, projecting the signal in the direction that maximizes the between-class difference. Concretely, the signal is decomposed into several sub-bands, CSP filters extract the signal features, the features of each sub-band filter are arranged along the x-axis, and the features of different sub-band filters are arranged along the y-axis, forming a data matrix. Related studies show that DCNN-based recognition of MI-EEG images can achieve relatively high accuracy, but the following problems remain:
(1) When frequency-domain feature extraction is chosen for EEG imaging, the modulus (or squared modulus) of the FFT spectrum at each frequency point of each lead is used as the signal feature; this cannot sufficiently express the numerical differences of the signal power features;
(2) EEG imaging uses either the frequency-domain or the spatial features of MI-EEG, but the time-frequency features are not effectively represented. Moreover, stacking several leads or multiple sub-band features in arbitrary order discards the electrode-location information contained in the original BCI acquisition system, all of which adversely affects recognition;
(3) When the number of layers of the convolutional neural network and the number of neurons per layer are too small, the network's fitting ability is poor and its generalization is weak, which hinders deep multi-dimensional extraction of signal features. Furthermore, performing dimensionality reduction with a max-pooling operation after convolution discards 75% of the pixels, so too much information is lost when processing high-order feature maps, degrading the classification results.
Summary of the invention
In view of the above shortcomings, the present invention improves the existing EEG imaging method and DCNN structure, and proposes an MI-EEG recognition method based on MI-EEG feature imaging and a DCNN. Specifically:
(1) After transforming the signal into the frequency domain, the frequency subsequences of the rhythms related to motor imagery are extracted and transformed back into the time domain, and the time-domain power values are computed as features, markedly increasing the numerical differences between signals of different classes.
(2) Using the signal power values obtained above as features, combined with the lead coordinate information of the BCI acquisition system, the features are interpolated into an N×N-pixel image. While retaining the time-frequency features of the signal, the spatial information between leads is fully utilized, improving the feature representation of the original signal in the imaging operation.
(3) A deep convolutional neural network with six convolution sections and 30 layers in total is used. Increasing the number of layers improves model performance, enhancing the fitting ability and the generalization to data. Meanwhile, the pooling layers of the convolutional neural network are removed and replaced with stride-2 convolutional structures, which reduces information loss while increasing the stability of network training and improves recognition accuracy.
Therefore, the technical solution adopted by the present invention is a method for recognizing EEG imaging images based on a deep convolutional neural network. The motor imagery EEG signals are first preprocessed, using resting-state data to remove the baseline. Each lead of every trial is divided into multiple windows; after fast Fourier transforms restricted to 8-13 Hz, 13-21 Hz and 21-30 Hz, the time-domain power values are extracted by inverse Fourier transform as time-frequency features of the signal. The extracted features are interpolated, lead by lead, into a pixel grid according to the coordinate map of the BCI acquisition system, yielding a motor imagery signal image. The image is fed into a deep convolutional neural network optimized for EEG signals for supervised training; the back-propagation algorithm adjusts the weight parameters of the network neurons so that the network fits the input data distribution, and after training the network outputs the image class probabilities.
Based on the above analysis, the specific implementation steps of the invention are as follows:
S1 MI-EEG signal preprocessing;
S1.1 Let x_{m,i} ∈ R^{N_s} be the EEG signal of lead m acquired during the motor imagery of the i-th trial, where m ∈ {1, 2, 3, ..., N_c} indexes the leads of the motor imagery EEG acquisition task, N_c is the number of leads, i ∈ {1, 2, 3, ..., N_m}, N_m is the number of acquisition trials, and N_s is the number of samples per trial. The EEG data of the i-th acquisition trial is then X_i = [x_{1,i}; x_{2,i}; ...; x_{N_c,i}].
S1.2 For each lead x_{m,i} of every trial, the resting data before motor imagery is used as the baseline reference line, and baseline removal is performed to obtain the baseline-corrected signal x'_{m,i}.
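The baseline-removal step S1.2 can be sketched as follows. This is a minimal numpy sketch on synthetic data; the patent only states that resting-state data serve as the baseline reference, so subtracting each lead's mean over the resting interval is an assumption about the exact operation:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 160                                      # sampling rate of the BCI2000 recordings
n_leads, n_rest, n_task = 64, fs * 1, fs * 4  # 1 s resting phase, 4 s motor imagery phase

# One synthetic trial, leads x samples (800 samples = 5 s at 160 Hz)
trial = rng.standard_normal((n_leads, n_rest + n_task))

# Baseline removal: subtract each lead's mean over the 0-1 s resting interval
baseline = trial[:, :n_rest].mean(axis=1, keepdims=True)
mi_eeg = trial[:, n_rest:] - baseline         # X'_{m,i}: 64 x 640 baseline-corrected signal
```

This yields the 64×640 baseline-corrected trial used in the embodiment (S1.2 there).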
S2 MI-EEG signal feature extraction method based on the fast Fourier transform;
S2.1 Each lead signal x'_{m,i} of a baseline-corrected motor imagery trial, expressed by sampling instants, is divided sequentially into N_D windows, N_D ∈ N+. Each window sequence is denoted X^D_{m,i,j} and contains the j-th consecutive segment of sampling instants of the trial.
Wherein, m is the lead index, i the trial index, and j the window index.
S2.2 Each window sequence X^D_{m,i,j} is zero-padded so that its length reaches N_FFT = 2^K, K a positive integer, to improve the frequency resolution. The N_D zero-padded window sequences are denoted X^W_{m,i,j}.
S2.3 A fast Fourier transform (Fast Fourier Transform, FFT) is applied to X^W_{m,i,j} of each lead, yielding a frequency-domain sequence X^F_{m,i,j} of length N_FFT.
S2.4 According to neurophysiological theory, the frequency range 8-30 Hz, which corresponds to the α and β responses closely related to motor imagery, is selected and divided into the three bands 8~13 Hz, 13~21 Hz and 21~30 Hz. The subsequence of X^F_{m,i,j} in each band is denoted X^F_{m,i,j,f}, where f ∈ {1, 2, 3} is the band index and N_{F,f} is the length of each band sequence, computed as follows:
N_{F,f} = (F_{H,f} − F_{L,f}) · N_FFT / f_s
Wherein, F_{H,f} is the highest frequency of the band, F_{L,f} the lowest frequency of the band, and f_s the sampling frequency of the original EEG signal.
S2.5 An inverse fast Fourier transform (Inverse Fast Fourier Transform, IFFT) is applied to each X^F_{m,i,j,f}.
Wherein, m is the lead index, i the trial index, j the window index, and f the band index.
This yields three time-domain sequences X^I_{m,i,j,f}, f ∈ {1, 2, 3}.
S2.6 Let T_f denote the sequence instants of each band. The power value X^P_{m,i,j,f} is computed independently for each band as the mean of the squared samples:
X^P_{m,i,j,f} = (1/N_{F,f}) Σ_{t∈T_f} |X^I_{m,i,j,f}(t)|²
Wherein, m is the lead index, i the trial index, j the window index, and f the band index; m ∈ {1, 2, 3, ..., N_c}, i ∈ {1, 2, 3, ..., N_m}, j ∈ {1, 2, 3, ..., N_D}, f ∈ {1, 2, 3}.
S2.7 The N_D power values produced by the N_D windows of each time-domain sequence are averaged, finally giving three feature values per lead per motor imagery trial, X^F_{m,i,f} ∈ R^1, m ∈ {1, 2, 3, ..., N_c}, i ∈ {1, 2, 3, ..., N_m}, f ∈ {1, 2, 3}. The EEG feature values of the i-th acquisition trial are then X^F_i ∈ R^{N_c×3}.
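Steps S2.2-S2.6 can be sketched for a single window of a single lead. This is a hedged numpy sketch: zeroing the out-of-band bins before the inverse FFT is a simplification of the patent's extraction of band subsequences, and N_FFT = 1024 and the band edges follow the embodiment:

```python
import numpy as np

fs, n_fft = 160, 1024
bands = {"alpha": (8, 13), "beta_low": (13, 21), "beta_high": (21, 30)}

rng = np.random.default_rng(1)
window = rng.standard_normal(80)       # one 0.5 s window of one lead (80 samples)

padded = np.zeros(n_fft)
padded[:window.size] = window          # S2.2: zero-pad to N_FFT = 2^10

spectrum = np.fft.fft(padded)          # S2.3: FFT
freqs = np.fft.fftfreq(n_fft, d=1.0 / fs)

powers = {}
for name, (lo, hi) in bands.items():
    # S2.4: keep only this band's bins; S2.5: IFFT back to the time domain
    mask = (np.abs(freqs) >= lo) & (np.abs(freqs) < hi)
    band_time = np.fft.ifft(np.where(mask, spectrum, 0.0))
    # S2.6: mean of the squared samples as the band's time-domain power
    powers[name] = float(np.mean(np.abs(band_time) ** 2))
```

Averaging these per-window powers over the N_D windows (S2.7) then gives the three per-lead feature values.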
S3 MI-EEG signal feature imaging method
S3.1 The 2D coordinate points of the N_c leads are extracted from the coordinate information provided by the BCI acquisition system coordinate map. The acquired coordinate information of the N_c leads is denoted M ∈ R^{2×N_c}.
S3.2 The four points (x_max, y_max), (x_max, y_min), (x_min, y_max), (x_min, y_min), formed from the maxima and minima of M on the x- and y-coordinate axes, are taken as the boundary, and a grid system of 64*64 pixel resolution is established, denoted G ∈ R^{64*64}.
S3.3 The features X^F_{m,i,f} are interpolation-mapped into the grid system G ∈ R^{64*64} according to the coordinate information M, forming a pseudo-RGB three-channel image G_f ∈ R^{64*64}, f ∈ {1, 2, 3}, that contains both the feature information and the lead coordinate information.
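The interpolation imaging of S3.1-S3.3 can be sketched as below. The electrode coordinates and feature values are random placeholders, and nearest-lead assignment stands in for the patent's unspecified interpolation scheme (a real implementation might use, e.g., cubic scattered-data interpolation instead):

```python
import numpy as np

rng = np.random.default_rng(2)
n_leads, res = 64, 64
coords = rng.uniform(-1.0, 1.0, size=(n_leads, 2))  # placeholder 2D lead coordinates (M)
feats = rng.uniform(size=(n_leads, 3))              # one power feature per lead per band

# S3.2: bounding box of the lead coordinates defines the 64x64 grid
xmin, ymin = coords.min(axis=0)
xmax, ymax = coords.max(axis=0)
gx, gy = np.meshgrid(np.linspace(xmin, xmax, res), np.linspace(ymin, ymax, res))

# S3.3: fill each grid cell of each band channel from the nearest lead's feature
image = np.zeros((res, res, 3))
for b in range(3):
    for r in range(res):
        for c in range(res):
            d = (coords[:, 0] - gx[r, c]) ** 2 + (coords[:, 1] - gy[r, c]) ** 2
            image[r, c, b] = feats[np.argmin(d), b]
```

The result is the pseudo-RGB three-channel image G_f fed to the DCNN.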
S4 Image feature extraction and classification method based on deep learning
S4.1 An image feature extraction and classification framework is constructed with a DCNN and supervised learning (Supervised Learning). The image input layer receives the MI-EEG feature map G_f ∈ R^{64*64}, f ∈ {1, 2, 3}; every trial contributes one pseudo-RGB three-channel image.
S4.2 The input data G_f undergoes feature extraction through the convolutional layers. The network uses six convolution sections in total; every section comprises several convolutional layers, and each layer contains v neurons. The correspondence of the data input layer can be expressed as:
y^{e,l}_v = f(W^{e,l}_v ∗ x + b^{e,l}_v)
Wherein, C^{e,l}_v denotes the v-th neuron of the l-th layer of the e-th convolution section; e ∈ {1, 2, 3, ..., 6}, l ∈ {1, 2, 3, ..., N_e}, N_e is the number of convolutional layers in section e, and N_{e,l} is the number of neurons in layer l of section e. As this is the input layer, e = 1 and l = 1. G_f is the input signal, W^{e,l}_v the weights connecting the input signal G_f to neuron C^{e,l}_v, N_w the convolution-kernel width, S_w = 1 the moving step of the kernel over the input image, b^{e,l}_v the internal state (bias) of the neuron, and y^{e,l}_v the output of the neuron. f(a) denotes the activation applied after the kernel computation, calculated with the rectified linear unit (Rectified Linear Unit, ReLU):
f(a) = ReLU(a) = max(0, a)
The moving step of the feature-extraction convolution kernels is 1, so convolution does not change the pixel resolution of the image. The image after one convolution operation is denoted G^{e,l} ∈ R^{N^{e,l}×N^{e,l}}, where N^{e,l} is the image resolution after layer l of section e; here x denotes the output value of the previous network layer. Feature extraction uses six convolution sections in total, and the S_w = 1 convolutions within every section leave the image resolution unchanged.
S4.3 The last convolutional layer of every convolution section is denoted C^{e,l*}_v, with weight parameters W^{e,l*}_v, where N_w = 2 and S_w = 2. After convolution by C^{e,l*}_v, the length and width of the image become 1/2 of their original values. Here l* denotes the index of the last convolutional layer of each section.
The convolutional layer structure is shown in Table 4.1:
Table 4.1 Convolutional layer model architecture
S4.4 The feature map after the first five convolution sections is denoted G^{5,l*}. The sixth section's S_w = 2 convolution kernels are applied to G^{5,l*}, finally yielding N_o feature maps denoted G^{6,l*}_o, where N_o equals the number of dataset classes.
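The difference between the stride-2 convolution of S4.3/S4.4 and 2×2 max pooling can be illustrated with a small numpy sketch. The averaging kernel below is a placeholder for a learned kernel: the strided convolution weighs all four pixels of each 2×2 patch, whereas max pooling keeps one of the four and discards the other 75%:

```python
import numpy as np

def conv2d_stride(img, kernel, stride):
    """Valid 2D cross-correlation with a given stride (single channel)."""
    kh, kw = kernel.shape
    oh = (img.shape[0] - kh) // stride + 1
    ow = (img.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = img[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.arange(64 * 64, dtype=float).reshape(64, 64)

# Stride-2, 2x2 kernel (N_w = 2, S_w = 2): halves the resolution, every pixel contributes
k = np.full((2, 2), 0.25)                       # placeholder weights
down = conv2d_stride(img, k, stride=2)          # 32 x 32

# 2x2 max pooling with stride 2: also 32 x 32, but keeps only 1 of every 4 pixels
pooled = img.reshape(32, 2, 32, 2).max(axis=(1, 3))
```

With disjoint 2×2 patches, the averaging kernel preserves exactly one quarter of the total pixel sum, while max pooling retains only the patch maxima.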
S4.5 Each G^{6,l*}_o is processed with an average pooling layer (Average Pooling, AP), finally giving N_o outputs Gap_o(x) ∈ R^{1×1}, o ∈ {1, 2, ..., N_o}, for every class of data, where Gap_o(x) is the mean of all pixels of the o-th feature map.
S4.6 The output values Gap_o(x) ∈ R^{1×1}, o ∈ {1, 2, ..., N_o} of the convolutional neural network are converted into normalized probability values P_o(x), o ∈ {1, 2, ..., N_o} by the normalized exponential (Softmax) function, computed as follows:
P_o(x) = exp(Gap_o(x)) / Σ_{k=1}^{N_o} exp(Gap_k(x))
S4.7 After the class probability distribution P_o(x), o ∈ {1, 2, ..., N_o} of the input MI-EEG feature map is obtained, supervised learning is performed using the cross entropy (Cross Entropy) as the loss function, computed as follows:
Loss_CF(x) = − Σ_{o=1}^{N_o} p_o(G_f) · log P_o(x)
Wherein, o is the class index, p(G_f) is the class-membership probability distribution of the input image provided by the prior label information, and P_o(x) is obtained from the DCNN output probability distribution.
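S4.6-S4.7 reduce to the following computation, shown here as a minimal numpy sketch for the two-class case of the embodiment (the logit values are illustrative):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())            # shift by the max for numerical stability
    return e / e.sum()

# Gap_o(x): one averaged-pooled value per class (e.g. hands vs. feet)
logits = np.array([2.0, -1.0])
p = softmax(logits)                    # P_o(x): normalized class probabilities

# Cross entropy against the one-hot prior label p(G_f) of the true class
label = np.array([1.0, 0.0])
loss = -np.sum(label * np.log(p))      # Loss_CF(x)
```

With a one-hot label the sum collapses to the negative log-probability of the true class, the quantity the batch training of S4.9 drives toward zero.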
S4.8 By the method of supervised learning, the gradients of the network weight parameters descend in the direction that minimizes the loss function, until training is complete:
arg min [Loss_CF(x)],
Wherein, e ∈ {1, 2, 3, ..., 6}, l ∈ {1, 2, 3, ..., N_e}.
S4.9 Batch (Batching) gradient descent is used: Loss_CF(x) is summed over the images of each batch (Batch Size), one gradient step is taken with respect to the network parameters, the partial derivatives are obtained by the chain rule, and the convolution-kernel weights are updated:
W^{e,l}_v ← W^{e,l}_v − η · ∂Loss_CF(x)/∂W^{e,l}_v
Wherein, e is the convolution-section index, l the convolutional-layer index, v the neuron index, and η the learning rate, which controls the speed of each gradient update. Through batch training and gradient updates the network learns to fit the probability distribution of the MI-EEG feature maps, and can thus output the class probability distribution of a given MI-EEG feature map on its own.
S4.10 During testing, for the MI-EEG feature map Gtest_{i,w} ∈ R^{64×64}, i ∈ {1, 2, 3, ..., N_m}, w ∈ {1, 2, 3}, generated by the i-th trial, the network provides the corresponding class probability distribution Ptest_{i,o}(x), i ∈ {1, 2, 3, ..., N_m}, o ∈ {1, 2, ..., N_o}. The class of maximum probability is taken as the classification result of the feature map, denoted Labeltest_i(x), i ∈ {1, 2, 3, ..., N_m}; Label_i(x), i ∈ {1, 2, 3, ..., N_m} are the true sample labels. The classification accuracy Accuracy(x) is used as the evaluation index, computed as follows:
Accuracy(x) = (1/N_m) Σ_{i=1}^{N_m} 1[Labeltest_i(x) = Label_i(x)]
Wherein, i is the trial index.
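The evaluation rule of S4.10 amounts to the following (the probabilities and labels here are illustrative, not taken from the patent's results):

```python
import numpy as np

# Ptest_{i,o}(x): one probability distribution per test trial (two classes)
probs = np.array([[0.9, 0.1],
                  [0.3, 0.7],
                  [0.6, 0.4],
                  [0.2, 0.8]])
pred = probs.argmax(axis=1)            # Labeltest_i(x): class of maximum probability
true = np.array([0, 1, 1, 1])          # Label_i(x): ground-truth labels

accuracy = float((pred == true).mean())  # Accuracy(x)
```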
Compared with the prior art, the invention has the following advantages:
(1) The invention extracts time-domain power features by inverse transformation from the frequency domain, which helps to reveal differences in signal energy features and improves the recognition rate. It overcomes the disadvantage of traditional pattern recognition methods, in which using the frequency power spectrum as the MI-EEG feature in the feature extraction phase yields feature values with unobvious numerical differences and low classification accuracy.
(2) During EEG imaging, the invention makes comprehensive use of the time-frequency-space characteristics of the MI-EEG signal and performs imaging by introducing the original coordinate information of the BCI acquisition system, so that the signal features are expressed more richly. Moreover, interpolation imaging allows the image resolution to be matched to the structural parameters of the convolutional neural network; the overall structure is highly adaptable and fits the recognition method well.
(3) The invention improves the convolutional neural network: after replacing the max-pooling layers with strided convolution kernels, the stability of the training process is increased, feature loss is reduced, and the feature information of the original image is exploited to the greatest extent. This brings a considerable performance improvement in feature extraction and model generalization, and helps raise the recognition rate in the field of motor imagery EEG signal recognition.
The invention suits multi-lead, compound motor imagery BCI systems and offers broad application prospects for BCI technology. The average classification rate was obtained with ten runs of ten-fold cross-validation, demonstrating the correctness and validity of the method.
Description of the drawings
Fig. 1: 64-lead distribution map of the BCI2000 system.
Fig. 2: Flow chart of the method.
Specific embodiment
The experiments of the invention were carried out in the Tensorflow framework under the Ubuntu (64-bit) operating system; convolutional network training was completed on an NVIDIA GTX1080Ti graphics card.
The MI-EEG dataset used by the invention comes from the public database of the BCI2000 acquisition system, acquired by its developers using a 64-electrode cap of the international-standard 10-10 lead BCI2000 system. The sampling frequency is 160 Hz. The scalp electrode distribution is shown in Fig. 1.
Each trial lasts 5 s. 0~1 s is the resting phase: a cross cursor appears on the screen, and a very brief alarm sounds at t = 0 s. 1 s~4 s is the motor imagery phase: a cue appears above or below the screen; if the cursor is above, the subject imagines moving both hands, and if below, the subject imagines moving both feet. The BCI2000 motor imagery dataset contains motor imagery EEG from 109 subjects, 45 trials per subject over the two motor imagery tasks (roughly equal task counts), with 800 samples per trial. After screening the data and removing 4 subjects with inconsistent record lengths, 4702 trials from 105 subjects were obtained, of which 2351 are imagined-hands trials and 2351 are imagined-feet trials.
Based on the above motor imagery EEG dataset and the algorithm flow shown in Fig. 2, the specific implementation steps of the invention are as follows:
S1 MI-EEG Signal Pretreatment
S1.1 According to the task class labels (imagined hands o = 0, imagined feet o = 1), the single trials X_{m,i} ∈ R^{64×800} of every task type are extracted for the 105 subjects, where m ∈ {1, 2, 3, ..., 64}, i ∈ {1, 2, 3, ..., 4702}, giving 4702 MI-EEG trials in total.
S1.2 For each trial, the 0~1 s resting-phase EEG segment (sequence length 160) is intercepted and used to baseline-correct the MI-EEG, yielding the preprocessed motor imagery EEG signal X'_{m,i} ∈ R^{64×640}.
S2 MI-EEG signal feature extraction based on the fast Fourier transform
S2.1 The 640 samples of each trial are divided in order into 8 windows, each X^D_{m,i,j} ∈ R^{64×80}.
S2.2 Each window is zero-padded to length 1024. The padded window is X^W_{m,i,j} ∈ R^{64×1024}.
S2.3 A fast Fourier transform (FFT) of X^W_{m,i,j} ∈ R^{64×1024} gives the frequency-domain sequence X^F_{m,i,j} ∈ R^{64×1024} of length 1024.
S2.4 The samples of X^F_{m,i,j} ∈ R^{64×1024} on the 8~13 Hz, 13~21 Hz and 21~30 Hz bands are taken out separately, giving X^F_{m,i,j,1} ∈ R^{64×38}, X^F_{m,i,j,2} ∈ R^{64×52}, X^F_{m,i,j,3} ∈ R^{64×57}.
S2.5 The inverse fast Fourier transform (IFFT) is applied to X^F_{m,i,j,1} ∈ R^{64×38}, X^F_{m,i,j,2} ∈ R^{64×52} and X^F_{m,i,j,3} ∈ R^{64×57}, giving X^I_{m,i,j,1} ∈ R^{64×38}, X^I_{m,i,j,2} ∈ R^{64×52} and X^I_{m,i,j,3} ∈ R^{64×57}.
S2.6 Every sample of each time-domain sequence is squared and the results are averaged, giving the average power value X^P_{m,i,j,f} ∈ R^{64}, f ∈ {1, 2, 3}.
S2.7 The average power values of the 8 windows of each sequence are averaged, giving the per-trial features X^F_{m,i,f} ∈ R^{64}, f ∈ {1, 2, 3}.
S3 MI-EEG signal feature imaging
S3.1 The 2D coordinate information M ∈ R^{2×64} is obtained from the BCI2000 system lead distribution map.
S3.2 The four points formed from the maxima and minima on the x- and y-axes are taken as the boundary, and a grid system of 64×64 pixel resolution is established, G ∈ R^{64*64}.
S3.3 The features are interpolation-mapped into the grid G ∈ R^{64×64} according to M ∈ R^{2×64}, giving the pseudo-RGB three-channel image G_f ∈ R^{64*64}, f ∈ {1, 2, 3}.
S4 Image feature extraction and classification based on deep learning
S4.1 The input image of the convolutional neural network is the pseudo-RGB three-channel MI-EEG feature map G_f ∈ R^{64*64}, f ∈ {1, 2, 3}.
S4.2 Starting from the input layer, the image passes through the feature extraction and dimensionality reduction of the six convolution sections, the resolution being halved by the strided convolution at the end of each section.
S4.3 The two resulting feature maps G^{6,l*}_1 and G^{6,l*}_2 are processed with average pooling, giving two outputs Gap_1(x) and Gap_2(x).
S4.4 The outputs Gap_1(x) and Gap_2(x) are passed through the Softmax function, giving two normalized probability values P_1(x) and P_2(x).
S4.5 The cross entropy Loss_CF is computed from P_1(x), P_2(x) and the image class prior information p(G_f).
S4.6 For training and testing, the training set and validation set are split 9:1: the training group contains 4232 trials (the 2 trials that do not divide evenly are placed in the training group), and the test group contains 470 trials. For DCNN training, the batch size is set to 256 and the learning rate to η = 0.00001; the Adam optimizer controls the batch gradient descent, so that the gradients move in the direction that decreases the cross entropy Loss_CF, and the neural network parameters are updated by back-propagation. After step = 30000 training iterations, Loss_CF stabilizes at about 0.35 and training is complete.
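One parameter update of the batch training described in S4.6 can be sketched as a single Adam step. This is a textbook Adam update in numpy with the learning rate from the embodiment; the scalar weight and gradient values are illustrative:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-5, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: biased moment estimates, bias correction, parameter step."""
    m = b1 * m + (1 - b1) * grad           # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)              # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.array([0.5])                        # illustrative convolution weight
m = np.zeros_like(w)
v = np.zeros_like(w)
grad = np.array([2.0])                     # gradient of the summed batch Loss_CF w.r.t. w
w, m, v = adam_step(w, grad, m, v, t=1)    # one of the step = 30000 iterations
```

On the first step the bias-corrected update magnitude is approximately the learning rate, regardless of the gradient's scale, which is part of why Adam stabilizes the batch training.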
S4.7 Model performance is assessed on the test set of 470 trials; the average accuracy over ten tests is 95.5%.

Claims (5)

1. A method for recognizing EEG imaging images based on a deep convolutional neural network, characterized in that: the motor imagery EEG signals are first preprocessed, using resting-state data to remove the baseline; each lead of every trial is divided into multiple windows; after fast Fourier transforms restricted to 8-13 Hz, 13-21 Hz and 21-30 Hz, the time-domain power values are extracted by inverse Fourier transform as time-frequency features of the signal; the extracted features are interpolated, lead by lead, into a pixel grid according to the coordinate map of the BCI acquisition system, yielding a motor imagery signal image; the image is fed into a deep convolutional neural network optimized for EEG signals for supervised training, the back-propagation algorithm adjusts the weight parameters of the network neurons so that the network fits the input data distribution, and after training the network outputs the image class probabilities.
2. The method for recognizing EEG imaging images based on a deep convolutional neural network according to claim 1, characterized in that the specific implementation steps are as follows:
S1 MI-EEG signal preprocessing;
S1.1 let x_{m,i} ∈ R^{N_s} be the EEG signal of lead m acquired during the motor imagery of the i-th trial, where m ∈ {1, 2, 3, ..., N_c} indexes the leads of the motor imagery EEG acquisition task, N_c is the number of leads, i ∈ {1, 2, 3, ..., N_m}, N_m is the number of acquisition trials, and N_s is the number of samples per trial; the EEG data of the i-th acquisition trial is then X_i = [x_{1,i}; x_{2,i}; ...; x_{N_c,i}];
S1.2 for each lead x_{m,i} of every trial, the resting data before motor imagery is used as the baseline reference line, and baseline removal is performed to obtain the baseline-corrected signal x'_{m,i}.
3. The method for recognizing EEG imaging images based on a deep convolutional neural network according to claim 1, characterized in that S2, the MI-EEG signal feature extraction method based on the fast Fourier transform, comprises:
S2.1 each lead signal x'_{m,i} of a baseline-corrected motor imagery trial, expressed by sampling instants, is divided sequentially into N_D windows, N_D ∈ N+; each window sequence is denoted X^D_{m,i,j} and contains the j-th consecutive segment of sampling instants of the trial;
wherein m is the lead index, i the trial index, and j the window index;
S2.2 each window sequence X^D_{m,i,j} is zero-padded so that its length reaches N_FFT = 2^K, K a positive integer, to improve the frequency resolution; the N_D zero-padded window sequences are denoted X^W_{m,i,j};
S2.3 a fast Fourier transform of X^W_{m,i,j} of each lead yields a frequency-domain sequence X^F_{m,i,j} of length N_FFT;
S2.4 according to neurophysiological theory, the frequency range 8-30 Hz, which corresponds to the α and β responses closely related to motor imagery, is selected and divided into the three bands 8~13 Hz, 13~21 Hz and 21~30 Hz; the subsequence of X^F_{m,i,j} in each band is denoted X^F_{m,i,j,f}, where f ∈ {1, 2, 3} is the band index and N_{F,f} is the length of each band sequence, computed as N_{F,f} = (F_{H,f} − F_{L,f}) · N_FFT / f_s;
wherein F_{H,f} is the highest frequency of the band, F_{L,f} the lowest frequency of the band, and f_s the sampling frequency of the original EEG signal;
S2.5 an inverse fast Fourier transform is applied to each X^F_{m,i,j,f};
wherein m is the lead index, i the trial index, j the window index, and f the band index;
this yields three time-domain sequences X^I_{m,i,j,f}, f ∈ {1, 2, 3};
S2.6 with T_f denoting the sequence instants of each band, the power value X^P_{m,i,j,f} of each band is computed independently as the mean of the squared samples of X^I_{m,i,j,f};
wherein m is the lead index, i the trial index, j the window index, and f the band index; m ∈ {1, 2, 3, ..., N_c}, i ∈ {1, 2, 3, ..., N_m}, j ∈ {1, 2, 3, ..., N_D}, f ∈ {1, 2, 3};
S2.7 the N_D power values produced by the N_D windows of each time-domain sequence are averaged, finally giving three feature values per lead per motor imagery trial, X^F_{m,i,f} ∈ R^1, m ∈ {1, 2, 3, ..., N_c}, i ∈ {1, 2, 3, ..., N_m}, f ∈ {1, 2, 3}; the EEG feature values of the i-th acquisition trial are then X^F_i ∈ R^{N_c×3}.
4. The method for recognizing EEG imaging maps based on a deep convolutional neural network according to claim 1, characterized in that the MI-EEG signal feature imaging method of S3 comprises:
S3.1 Extract the 2D coordinate points of the N_c leads according to the coordinate information provided by the BCI acquisition system's coordinate map; the acquired coordinate information of the N_c leads is denoted as a 2D coordinate set;
S3.2 Taking the four points formed by the maximum and minimum values on the x and y coordinate axes, (x_max, y_max), (x_max, y_min), (x_min, y_max), and (x_min, y_min), as the boundary, establish a grid system of 64×64 pixel resolution, denoted G ∈ R^(64×64);
S3.3 Map the feature values by interpolation, according to the lead coordinate information, onto the G ∈ R^(64×64) grid system, forming three pseudo-RGB three-channel images G_f ∈ R^(64×64), f ∈ {1, 2, 3}, containing both the feature information and the lead coordinate information.
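A minimal sketch of S3.1–S3.3: inverse-distance weighting stands in here for the patent's unspecified interpolation scheme, and the four lead coordinates are invented for illustration:

```python
import numpy as np

def feature_image(coords, values, res=64):
    """Interpolate per-lead feature values onto a res x res grid G bounded
    by the coordinate extremes (x_min..x_max, y_min..y_max)."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    xs = np.linspace(coords[:, 0].min(), coords[:, 0].max(), res)
    ys = np.linspace(coords[:, 1].min(), coords[:, 1].max(), res)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
    d = np.linalg.norm(grid[:, None, :] - coords[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** 2          # inverse-distance weights
    img = (w @ values) / w.sum(axis=1)
    return img.reshape(res, res)                # one channel of G_f
```

Stacking the three band images produced this way gives the pseudo-RGB input G_f, f ∈ {1, 2, 3}.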
5. The method for recognizing EEG imaging maps based on a deep convolutional neural network according to claim 1, characterized in that the deep-learning-based image feature extraction and classification method of S4 comprises:
S4.1 Construct the image feature extraction and classification framework using a DCNN and supervised learning (Supervised Learning); the image input layer receives the MI-EEG feature map G_f ∈ R^(64×64), f ∈ {1, 2, 3}; each trial contributes one pseudo-RGB three-channel image;
S4.2 The input data G_f undergoes feature extraction through the convolutional layers; the network uses 6 convolution stages in total, each stage comprising several convolutional layers, and each convolutional layer comprising v neurons; the correspondence at the data input layer can be expressed by the following formula:
wherein C^(e,l)_v denotes the v-th neuron of the l-th layer in the e-th stage; e ∈ {1, 2, 3, ..., 6}, l ∈ {1, 2, 3, ..., N_e}, where N_e denotes the number of convolutional layers in stage e and N^(e,l)_v denotes the number of neurons in layer l of stage e; at the input layer, e = 1 and l = 1; G_f is the input signal, W^(e,l)_v denotes the weights connecting the input signal G_f to neuron C^(e,l)_v, N_w is the convolution kernel width, S_w = 1 is the moving stride of the kernel over the input image, a denotes the internal state, i.e. the bias, of the neuron, and the neuron's output is its activation; f(a) denotes the activation applied after the kernel computation, using the Rectified Linear Unit (ReLU), computed as follows:
f(a) = ReLU(a) = max(0, a)
The convolution kernels used for feature extraction have a moving stride of 1 and do not change the pixel resolution of the image after convolution; the image produced by one convolution operation is denoted accordingly, where N^(e,l)_v is the number of neurons in layer l of stage e and the associated resolution is the image resolution after layer l of stage e; here x denotes the output value of the previous network layer; feature extraction uses 6 convolution stages in total, and within each stage the S_w = 1 convolutions do not change the image resolution;
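A stride-1 convolution with "same" zero padding preserves resolution, as the claim states; a single-channel NumPy sketch (the padding choice is an assumption, since the claim does not specify it):

```python
import numpy as np

def relu(a):
    """f(a) = max(0, a)."""
    return np.maximum(0.0, a)

def conv2d_same(x, w, b, stride=1):
    """Single-channel 2-D convolution with zero padding chosen so that a
    stride-1 kernel preserves the image resolution (the S_w = 1 stages)."""
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out_h = (x.shape[0] + 2 * ph - kh) // stride + 1
    out_w = (x.shape[1] + 2 * pw - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = xp[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * w) + b
    return relu(out)
```

With stride 1 an 8×8 input stays 8×8; each interior output pixel is the weighted sum of a 3×3 neighborhood passed through ReLU.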
S4.3 The last convolutional layer of each stage is expressed as C^(e,l*), with weight parameters W^(e,l*), wherein N_w = 2 and S_w = 2; after this convolution, the length and width of the image become 1/2 of their original values; here l* denotes the index of the last convolutional layer of each stage;
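The N_w = 2, S_w = 2 kernel that ends each stage halves the spatial resolution, replacing a max-pooling layer; a minimal sketch (the averaging kernel in the test is illustrative — in the network these weights would be learned):

```python
import numpy as np

def strided_conv_halve(x, w):
    """2x2 kernel, stride 2, no padding: each output pixel summarizes one
    non-overlapping 2x2 patch, so height and width become 1/2."""
    h, wd = x.shape[0] // 2, x.shape[1] // 2
    out = np.zeros((h, wd))
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(x[2 * i:2 * i + 2, 2 * j:2 * j + 2] * w)
    return out
```

Unlike max pooling, the 2×2 summary here is a learned linear combination of the patch, which is the design choice the abstract highlights.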
S4.4 The feature maps obtained after the first five convolution stages are then processed by the S_w = 2 convolution kernels of the 6th stage, finally obtaining N_o feature maps, where N_o equals the number of classes in the dataset;
S4.5 Process the N_o feature maps with an average pooling (Average Pooling, AP) layer, obtaining one output per class, Gap_o(x) ∈ R^(1×1), o ∈ {1, 2, ..., N_o}, computed as follows:

Gap_o(x) = (1 / (H·W)) · Σ_(h=1..H) Σ_(w=1..W) x_o(h, w)

where H×W is the spatial resolution of the o-th feature map;
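Global average pooling collapses each of the N_o feature maps to a single scalar; a one-line sketch (the feature-map sizes in the test are assumed):

```python
import numpy as np

def global_average_pool(feature_maps):
    """feature_maps: array of shape (N_o, H, W); returns Gap_o(x), one
    scalar (the spatial mean) per class feature map."""
    return np.asarray(feature_maps, dtype=float).mean(axis=(1, 2))
```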
S4.6 The output values Gap_o(x) ∈ R^(1×1), o ∈ {1, 2, ..., N_o} of the convolutional neural network are converted into normalized probability values P_o(x), o ∈ {1, 2, ..., N_o} by the normalized exponential function (Softmax function), computed as follows:

P_o(x) = exp(Gap_o(x)) / Σ_(o'=1..N_o) exp(Gap_(o')(x))
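The softmax step in numerically stable form (a standard sketch, not patent-specific code; the max shift cancels in the ratio):

```python
import numpy as np

def softmax(gap):
    """P_o(x) = exp(Gap_o) / sum over o' of exp(Gap_o'), computed after
    subtracting the maximum for numerical stability."""
    z = np.asarray(gap, dtype=float) - np.max(gap)
    e = np.exp(z)
    return e / e.sum()
```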
S4.7 After obtaining the probability distribution P_o(x), o ∈ {1, 2, ..., N_o} over the classes of the input MI-EEG feature map, supervised learning is performed with cross entropy (Cross Entropy) as the loss function, computed as follows:

Loss_CF(x) = − Σ_(o=1..N_o) p_o(G_fw) · log P_o(x)
wherein o is the class index, p(G_fw) is the class probability distribution of the input image, given by the prior label information, and P_o(x) is obtained from the DCNN output probability distribution;
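Cross entropy against a one-hot prior label distribution, as in S4.7 (a sketch; the small epsilon guarding log(0) is an implementation choice, not part of the claim):

```python
import numpy as np

def cross_entropy(p_true, p_pred, eps=1e-12):
    """Loss_CF = -sum_o p(o) * log P_o(x)."""
    p_true = np.asarray(p_true, dtype=float)
    p_pred = np.asarray(p_pred, dtype=float)
    return float(-np.sum(p_true * np.log(p_pred + eps)))
```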
S4.8 Through supervised learning, the network weight parameters descend along the gradient direction that minimizes the loss function until training is complete, computed as follows:

W* = arg min over W^(e,l)_v of [Loss_CF(x)],

wherein e ∈ {1, 2, 3, ..., 6}, l ∈ {1, 2, 3, ..., N_e};
S4.9 Using the batch (Batching) gradient descent mode, the Loss_CF(x) values of the images in each batch (Batch Size) are summed, one gradient is then computed with respect to the network parameters, the partial derivatives are obtained by the chain rule, and the convolution kernel weight parameters are updated by gradient descent, computed as follows:

W^(e,l)_v ← W^(e,l)_v − η · ∂Loss_CF(x) / ∂W^(e,l)_v
wherein e is the convolution stage index, l is the convolutional layer index, v is the neuron index, and η is the learning rate, which controls the magnitude of one gradient update; through batched training and gradient updates, the network fits the probability distribution of the MI-EEG feature maps and can thus output the class probability distribution of a given MI-EEG feature map on its own;
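One batched update of S4.9, sketched as softmax regression on a toy two-class problem (the linear layer stands in for the full DCNN; the data and learning rate are invented for illustration):

```python
import numpy as np

def batch_step(w, x, y_onehot, eta=0.5):
    """Sum the cross-entropy over the batch and take one gradient step;
    the chain rule for softmax + cross entropy yields the error term
    (probs - labels), so d(sum Loss)/dW = X^T (probs - labels)."""
    logits = x @ w
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    grad = x.T @ (probs - y_onehot)
    return w - eta * grad, probs

x = np.eye(2)                 # two toy samples, one per class
y = np.eye(2)                 # one-hot labels
w = np.zeros((2, 2))
for _ in range(50):           # repeated batch updates fit the distribution
    w, probs = batch_step(w, x, y)
```

After repeated batch updates the predicted distribution concentrates on the correct class for each sample.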
S4.10 During testing, for the MI-EEG feature map Gftest_(i,w) ∈ R^(64×64), i ∈ {1, 2, 3, ..., N_m}, w ∈ {1, 2, 3} generated by the i-th trial, the network outputs its corresponding class probability distribution Ptest_(i,o)(x), i ∈ {1, 2, 3, ..., N_m}, o ∈ {1, 2, ..., N_o}; the class with the maximum probability is taken as the classification result of the feature map, denoted Labeltest_i(x), i ∈ {1, 2, 3, ..., N_m}; Label_i(x), i ∈ {1, 2, 3, ..., N_m} are the true sample labels; the classification accuracy Accuracy(x) is used as the evaluation index, computed as follows:

Accuracy(x) = (1 / N_m) · Σ_(i=1..N_m) I(Labeltest_i(x) = Label_i(x))
wherein i is the trial index.
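The evaluation index of S4.10 as a short function (the label values in the test are illustrative):

```python
import numpy as np

def accuracy(pred_labels, true_labels):
    """Accuracy(x): fraction of the N_m test trials whose predicted class
    Labeltest_i equals the true label Label_i."""
    pred = np.asarray(pred_labels)
    true = np.asarray(true_labels)
    return float(np.mean(pred == true))
```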
CN201811574691.0A 2018-12-21 2018-12-21 Method for recognizing electroencephalogram based on deep convolutional neural network Active CN109726751B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811574691.0A CN109726751B (en) 2018-12-21 2018-12-21 Method for recognizing electroencephalogram based on deep convolutional neural network

Publications (2)

Publication Number Publication Date
CN109726751A true CN109726751A (en) 2019-05-07
CN109726751B CN109726751B (en) 2020-11-27

Family

ID=66297806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811574691.0A Active CN109726751B (en) 2018-12-21 2018-12-21 Method for recognizing electroencephalogram based on deep convolutional neural network

Country Status (1)

Country Link
CN (1) CN109726751B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339455A (en) * 2008-08-07 2009-01-07 北京师范大学 Brain machine interface system based on human face recognition specific wave N170 component
CN102715903A (en) * 2012-07-09 2012-10-10 天津市人民医院 Method for extracting electroencephalogram characteristic based on quantitative electroencephalogram
CN104700119A (en) * 2015-03-24 2015-06-10 北京机械设备研究所 Brain electrical signal independent component extraction method based on convolution blind source separation
CN107844755A (en) * 2017-10-23 2018-03-27 重庆邮电大学 A kind of combination DAE and CNN EEG feature extraction and sorting technique
CN107958213A (en) * 2017-11-20 2018-04-24 北京工业大学 A kind of cospace pattern based on the medical treatment of brain-computer interface recovering aid and deep learning method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HUAIYU XU, et al.: "Feature Extraction and Classification of EEG for Imaging", Proceedings of 2009 2nd IEEE International Conference on Computer Science and Information Technology, Vol. 4 *
DAI Ruomeng: "Motor imagery EEG classification based on deep learning", China Master's Theses Full-text Database, Medicine & Health Sciences *
LI Ming'ai, et al.: "Research on EEG feature extraction and classification for imagined left/right hand movement", Chinese Journal of Biomedical Engineering *
HU Kai: "Preliminary research on an online brain-computer interface based on motor imagery", China Master's Theses Full-text Database, Information Science & Technology *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110141229A (en) * 2019-06-04 2019-08-20 吉林大学 A kind of portable brain electric imaging device and brain Electrical imaging optimization method
CN110141229B (en) * 2019-06-04 2023-05-09 吉林大学 Portable electroencephalogram imaging equipment and electroencephalogram imaging optimization method
CN110495881A (en) * 2019-08-28 2019-11-26 南方科技大学 A kind of prediction technique of the direction of motion, device, equipment and storage medium
WO2021164349A1 (en) * 2020-02-21 2021-08-26 乐普(北京)医疗器械股份有限公司 Blood pressure prediction method and apparatus based on photoplethysmography signal
CN111709267A (en) * 2020-03-27 2020-09-25 吉林大学 Electroencephalogram signal emotion recognition method of deep convolutional neural network
CN111709267B (en) * 2020-03-27 2022-03-29 吉林大学 Electroencephalogram signal emotion recognition method of deep convolutional neural network
CN111582041A (en) * 2020-04-14 2020-08-25 北京工业大学 Electroencephalogram identification method based on CWT and MLMSFFCNN
CN111582041B (en) * 2020-04-14 2023-06-09 北京工业大学 Brain electricity identification method based on CWT and MLMSFFCNN
CN111528836A (en) * 2020-05-06 2020-08-14 北京工业大学 Brain function network feature extraction method based on dynamic directional transfer function
CN111528836B (en) * 2020-05-06 2023-04-28 北京工业大学 Brain function network feature extraction method based on dynamic directional transfer function
CN112244878A (en) * 2020-08-31 2021-01-22 北京工业大学 Method for identifying key frequency band image sequence by using parallel multi-module CNN and LSTM
CN112244878B (en) * 2020-08-31 2023-08-04 北京工业大学 Method for identifying key frequency band image sequence by using parallel multi-module CNN and LSTM
CN112057047A (en) * 2020-09-11 2020-12-11 首都师范大学 Device for realizing motor imagery classification and hybrid network system construction method thereof
CN112370017A (en) * 2020-11-09 2021-02-19 腾讯科技(深圳)有限公司 Training method and device of electroencephalogram classification model and electronic equipment
CN112712124A (en) * 2020-12-31 2021-04-27 山东奥邦交通设施工程有限公司 Multi-module cooperative object recognition system and method based on deep learning
CN112932504A (en) * 2021-01-16 2021-06-11 北京工业大学 Dipole imaging and identifying method
CN112932503A (en) * 2021-01-16 2021-06-11 北京工业大学 Motor imagery task decoding method based on 4D data expression and 3DCNN
CN112932504B (en) * 2021-01-16 2022-08-02 北京工业大学 Dipole imaging and identifying method
CN112884062A (en) * 2021-03-11 2021-06-01 四川省博瑞恩科技有限公司 Motor imagery classification method and system based on CNN classification model and generation countermeasure network
CN112884062B (en) * 2021-03-11 2024-02-13 四川省博瑞恩科技有限公司 Motor imagery classification method and system based on CNN classification model and generated countermeasure network
CN113221968A (en) * 2021-04-23 2021-08-06 北京科技大学 Method and device for diagnosing running state of rubber belt conveyor
CN115018734A (en) * 2022-07-15 2022-09-06 北京百度网讯科技有限公司 Video restoration method and training method and device of video restoration model
CN115018734B (en) * 2022-07-15 2023-10-13 北京百度网讯科技有限公司 Video restoration method and training method and device of video restoration model
CN115359497A (en) * 2022-10-14 2022-11-18 景臣科技(南通)有限公司 Call center monitoring alarm method and system

Also Published As

Publication number Publication date
CN109726751B (en) 2020-11-27

Similar Documents

Publication Publication Date Title
CN109726751A (en) Method based on depth convolutional neural networks identification brain Electrical imaging figure
CN108491077B (en) Surface electromyographic signal gesture recognition method based on multi-stream divide-and-conquer convolutional neural network
CN106874956B (en) The construction method of image classification convolutional neural networks structure
CN106782602B (en) Speech emotion recognition method based on deep neural network
CN110069958B (en) Electroencephalogram signal rapid identification method of dense deep convolutional neural network
CN105426842B (en) Multiclass hand motion recognition method based on support vector machines and surface electromyogram signal
CN110163180A (en) Mental imagery eeg data classification method and system
CN106682616A (en) Newborn-painful-expression recognition method based on dual-channel-characteristic deep learning
CN106909784A (en) Epileptic electroencephalogram (eeg) recognition methods based on two-dimentional time-frequency image depth convolutional neural networks
CN108537271A (en) A method of resisting sample is attacked based on convolution denoising self-editing ink recorder defence
CN109299751B (en) EMD data enhancement-based SSVEP electroencephalogram classification method of convolutional neural model
CN108648191A (en) Pest image-recognizing method based on Bayes's width residual error neural network
CN108960289B (en) Medical image classification device and method
CN110353675A (en) The EEG signals emotion identification method and device generated based on picture
CN109190643A (en) Based on the recognition methods of convolutional neural networks Chinese medicine and electronic equipment
CN109598222B (en) EEMD data enhancement-based wavelet neural network motor imagery electroencephalogram classification method
CN110399846A (en) A kind of gesture identification method based on multichannel electromyography signal correlation
CN108363969B (en) Newborn pain assessment method based on mobile terminal
CN108363979A (en) Neonatal pain expression recognition method based on binary channels Three dimensional convolution neural network
CN110399821A (en) Customer satisfaction acquisition methods based on facial expression recognition
CN112022153B (en) Electroencephalogram signal detection method based on convolutional neural network
CN110309811A (en) A kind of hyperspectral image classification method based on capsule network
CN104298974A (en) Human body behavior recognition method based on depth video sequence
CN108364662A (en) Based on the pairs of speech-emotion recognition method and system for differentiating task
CN104573699B (en) Trypetid recognition methods based on middle equifield intensity magnetic resonance anatomy imaging

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant