CN111582041B - Brain electricity identification method based on CWT and MLMSFFCNN - Google Patents

Brain electricity identification method based on CWT and MLMSFFCNN Download PDF

Info

Publication number
CN111582041B
CN111582041B · Application CN202010291359.4A
Authority
CN
China
Prior art keywords
convolution
matrix
frequency
lead
mlmsffcnn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010291359.4A
Other languages
Chinese (zh)
Other versions
CN111582041A (en)
Inventor
李明爱
韩健夫
杨金福
孙炎珺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN202010291359.4A priority Critical patent/CN111582041B/en
Publication of CN111582041A publication Critical patent/CN111582041A/en
Application granted granted Critical
Publication of CN111582041B publication Critical patent/CN111582041B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses an electroencephalogram identification method based on CWT and MLMSFFCNN. CWT is applied to the motor imagery EEG signal of each lead to obtain a per-lead time-frequency matrix; the 8-30 Hz band of this matrix is then intercepted and divided equally into three sub-matrices along the frequency axis; each sub-matrix is summed by column to give three sub-sequences, and each sub-sequence is divided into three windows along the time axis; an MI-EEG composite feature matrix is constructed by combining the lead coordinate information of the BCI acquisition system; MLMSFFCNN realizes feature fusion and multi-resolution computation by concatenating the output features of every convolution stage and through the multi-branch structure within each stage; MLMSFFCNN is trained with supervision on the MI-EEG composite feature matrices, and ten-fold cross-validation yields the final classification result. Through the feature fusion and multi-resolution computing capability of MLMSFFCNN, the method can fully extract the time-, frequency- and spatial-domain feature information of the signals, which is of great significance for improving the multi-domain feature expression and classification accuracy of MI-EEG signals.

Description

Brain electricity identification method based on CWT and MLMSFFCNN
Technical Field
The invention relates to a motor imagery electroencephalogram (Motor Imagery Electroencephalogram, MI-EEG) identification method based on the continuous wavelet transform (Continuous Wavelet Transform, CWT) and a multi-level multi-scale feature fusion convolutional neural network (Multi-level Multi-scale Feature Fusion Convolutional Neural Network, MLMSFFCNN). Specifically, time-frequency features of the MI-EEG signals are extracted with the CWT, a composite feature matrix is generated by combining lead coordinate projection, and time-frequency-space multi-domain feature fusion and classification are realized with MLMSFFCNN.
Background
With the development of deep learning, identifying MI-EEG signals with deep learning techniques has attracted attention for its great potential. The bionic receptive-field mechanism of the convolutional neural network (Convolutional Neural Network, CNN) can fully extract local spatial variation in MI-EEG signals and offers unique advantages for processing multi-lead MI-EEG. In existing CNN-based methods, the network input is either the time-frequency matrix generated by time-frequency transformation of each lead signal, or a data matrix formed by stacking the per-lead time-frequency matrices in a manually specified order. Although these methods effectively extract the time-frequency information of MI-EEG signals, their feature matrix generation does not preserve the relative positions of the leads well and does not match the spatial distribution characteristics of MI-EEG signals. Meanwhile, the CNNs they use contain convolution kernels of only one scale, so the network cannot perform multi-resolution computation on the MI-EEG feature matrix, easily losing local spatial-domain correlation information. In addition, a traditional sequential CNN considers only the output of the last convolution layer when producing the classification result; the convolution features generated in intermediate stages are not fully utilized, and the capability to jointly reason over global and abstract spatial features is lacking. The recognition capability of CNNs for MI-EEG is therefore limited.
To address these problems, the invention realizes extraction and fusion of the time-, frequency- and spatial-domain features of MI-EEG signals based on CWT and MLMSFFCNN, thereby improving the identification accuracy of MI-EEG signals.
Disclosure of Invention
Aiming at the defects of existing methods, the invention provides an MI-EEG signal identification method based on the continuous wavelet transform and a multi-level multi-scale feature fusion convolutional neural network.
The method specifically relates to the following steps:
(1) The time-frequency matrix of each lead's EEG signal is calculated using the continuous wavelet transform; the 8-30 Hz band most relevant to motor imagery is extracted and divided equally into 9 sub-regions (3×3). The mean of the squared wavelet coefficients over all points in each sub-region is calculated as that sub-region's time-frequency feature value, so each lead's EEG yields a 3×3 time-frequency feature matrix in which the time-frequency information of each lead is preserved in order.
(2) The 3×3 time-frequency feature matrices of all leads are interpolated onto the corresponding lead coordinates of the two-dimensional acquisition system, so that each motor imagery experiment yields an N×N composite feature matrix. The time-frequency features of each lead's EEG are thus accurately placed at the leads' spatial positions, finally generating a composite feature matrix carrying time-domain, frequency-domain and spatial-domain information.
(3) MLMSFFCNN is used for multi-resolution feature fusion and classification. First, a multi-stage convolutional neural network structure is designed, and the computation results output by all convolution stages are concatenated to fuse global and abstract features. Second, each convolution stage is given a multi-branch structure on top of the sequential CNN, and three groups of multi-scale convolution kernels of 1×1, 2×2 and 3×3 extract features from the composite feature matrix simultaneously, increasing the model's multi-resolution analysis capability over the composite feature matrix in local space. MLMSFFCNN thereby helps improve the model's classification performance.
In summary, the technical route of the invention is as follows: first, CWT is performed on each lead's raw MI-EEG signal, and the feature matrix of the 8-30 Hz band is extracted. The matrix is divided equally into 3×3 sub-regions, and the mean of the squared wavelet coefficients in each sub-region is calculated as the sub-region feature, so 3×3 feature values are obtained for each lead of MI-EEG data. The 3×3 feature matrix of each lead is then interpolated, by cubic surface interpolation, onto the corresponding lead position in the plane coordinates of the acquisition system, giving a 64×64 composite feature matrix containing time-frequency features and spatial information. Finally, the multi-level multi-scale feature fusion convolutional neural network realizes the extraction of multi-resolution features from the composite feature matrix and the organic fusion of global and abstract features, effectively improving the identification of MI-EEG.
1 MI-EEG composite feature matrix computation based on continuous wavelet transform
1.1 Assume $x^{r}_{m,i}\in\mathbb{R}^{1\times N_s}$ is the electroencephalogram signal acquired by the $m$-th lead during motor imagery in the $i$-th experiment, where $m\in\{1,2,\dots,N_c\}$ is the lead label of the motor imagery EEG acquisition task and $N_c$ is the number of leads; $i\in\{1,2,\dots,N_m\}$, with $N_m$ the number of acquisition experiments; $k\in\{1,2,\dots,N_s\}$, with $N_s$ the number of samples per experiment. The $i$-th acquisition experiment then yields the EEG data $X^{r}_{i}\in\mathbb{R}^{N_c\times N_s}$.
1.2 Perform CWT on $x^{r}_{m,i}$, selecting the cgau8 wavelet basis function. Let the raw-data sampling frequency be $f_s$, the wavelet frequency-axis variable be $j$, and the number of wavelet frequencies be $N_f=65$; the transformed time-frequency matrix is then $X^{w}_{j,k}\in\mathbb{C}^{N_f\times N_s}$.
1.3 According to the MI activation characteristics, intercept the 8-30 Hz portion of $X^{w}_{j,k}$, i.e. the rows corresponding to $j\in\{3,4,\dots,14\}$, denoted $X^{e}_{j,k}\in\mathbb{C}^{12\times N_s}$, with $j\in\{1,2,\dots,12\}$ after re-indexing.
1.4 Divide $X^{e}_{j,k}$ equally into three sub-matrices along the vertical (frequency) axis, $X^{e_p}_{j,k}\in\mathbb{C}^{4\times N_s}$, $p\in\{1,2,3\}$. Sum the 4 values of each column of each sub-matrix, obtaining three sub-sequences $S^{p}_{k}\in\mathbb{C}^{1\times N_s}$. Taking $S^{1}_{k}$ as an example, $S^{1}_{k}=\sum_{j=1}^{4}X^{e_1}_{j,k}$.
1.5 Divide each sub-sequence $S^{p}_{k}$ equally into 3 windows along the time axis $k$, obtaining the windowed sub-sequences $S^{p,q}_{k}$, $q\in\{1,2,3\}$.
1.6 For each windowed sub-sequence, take the squared modulus of each wavelet coefficient, then sum the values within each window $S^{p,q}_{k}$ and average, obtaining the feature $F^{p,q}$. Taking $F^{1,1}$ as an example, $F^{1,1}=\frac{1}{L}\sum_{k=1}^{L}\left|S^{1,1}_{k}\right|^{2}$, where $L$ is the window length.
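Steps 1.2-1.6 above can be sketched in Python with `pywt`; this is a minimal illustration under assumptions — scales 1-65 stand in for the $N_f=65$ frequency rows, since the patent does not state the exact scale set used with cgau8:

```python
import numpy as np
import pywt

def lead_features(x, fs=250.0):
    """3x3 time-frequency features for one lead (steps 1.2-1.6).

    Assumption: scales 1..65 provide the N_f = 65 frequency rows; the
    patent does not specify the exact scale set used with cgau8.
    """
    scales = np.arange(1, 66)
    coefs, freqs = pywt.cwt(x, scales, 'cgau8', sampling_period=1.0 / fs)
    band = coefs[(freqs >= 8.0) & (freqs <= 30.0), :]          # 8-30 Hz band (step 1.3)
    feats = np.empty((3, 3))
    for p, sub in enumerate(np.array_split(band, 3, axis=0)):  # 3 frequency sub-matrices (1.4)
        seq = sub.sum(axis=0)                                  # column sums -> sub-sequence S^p_k
        for q, win in enumerate(np.array_split(seq, 3)):       # 3 time windows (1.5)
            feats[p, q] = np.mean(np.abs(win) ** 2)            # mean squared modulus (1.6)
    return feats
```

Running this for each of the $N_c$ leads of one trial yields the per-lead 3×3 matrices that step 1.9 interpolates into the 64×64 composite feature matrix.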
1.7 Extract the plane coordinate information of the $N_c$ leads from the lead distribution plan of the dataset, denoted $C_{n,m}\in\mathbb{R}^{2\times N_c}$, $n\in\{1,2\}$.
1.8 Expand each coordinate point of $C_{n,m}$, taking its coordinates as the centre, into a 3×3 coordinate matrix; the expanded coordinate set is denoted $C'_{v,m}\in\mathbb{R}^{2\times 9N_c}$.
1.9 Arrange the 9 feature values $F^{p,q}$ of each lead into a 3×3 time-frequency feature matrix, perform cubic surface interpolation according to $C'_{v,m}$, and arrange the result into a 64×64 matrix by lead position, denoted $G\in\mathbb{R}^{64\times 64}$. The total input data is then one matrix $G_i\in\mathbb{R}^{64\times 64}$ per experiment, $i\in\{1,2,\dots,N_m\}$.
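Steps 1.7-1.9 can be sketched with SciPy's `griddata` cubic interpolation; the spacing `d` of the expanded 3×3 lattice is an illustrative assumption, as the source gives no numeric value:

```python
import numpy as np
from scipy.interpolate import griddata

def composite_matrix(lead_xy, lead_feats, size=64, d=0.5):
    """Interpolate per-lead 3x3 features onto a size x size grid (steps 1.7-1.9).

    lead_xy:    (N_c, 2) plane coordinates of the leads (C_{n,m})
    lead_feats: (N_c, 3, 3) per-lead feature matrices (F^{p,q})
    d:          assumed spacing of the expanded 3x3 lattice (C'_{v,m})
    """
    offs = np.array([-d, 0.0, d])
    pts, vals = [], []
    for (cx, cy), f in zip(lead_xy, lead_feats):
        for p in range(3):
            for q in range(3):
                pts.append((cx + offs[q], cy + offs[p]))  # 3x3 lattice around the lead (1.8)
                vals.append(f[p, q])
    pts, vals = np.asarray(pts), np.asarray(vals)
    gx, gy = np.meshgrid(
        np.linspace(lead_xy[:, 0].min() - d, lead_xy[:, 0].max() + d, size),
        np.linspace(lead_xy[:, 1].min() - d, lead_xy[:, 1].max() + d, size))
    return griddata(pts, vals, (gx, gy), method='cubic', fill_value=0.0)  # G (1.9)
```

Grid cells outside the convex hull of the expanded lead lattice are filled with zero here; the patent does not specify how the border of the 64×64 matrix is handled.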
2 recognition of wavelet transform composite feature matrix using MLMSFFCNN
2.1 MLMSFFCNN comprises five convolution stages in total; each stage has the same structure and contains 3 convolution branches. At the end of each branch, a down-sampling convolution layer with stride $S_w=2$ halves the dimension of the CNN feature matrix. Finally, the outputs of the three convolution branches are summed to give the stage output, expressed as $G_u=\sum_{b=1}^{3}G^{b}_{u}$ (branch notation assumed from context), where $u$ is the stage index, $u\in\{1,2,3,4,5\}$. $G_u$ then serves as the input matrix of the $(u+1)$-th stage. $G_u$ is also flattened into a long vector and connected to a fully connected layer of length 128, whose output is denoted $V_u\in\mathbb{R}^{1\times 128}$; $V_u$ is the vector extracted from each convolution stage for feature fusion.
Each convolution stage contains a three-branch structure; each branch contains several convolution layers, and each convolution layer contains several convolution kernels. The convolution kernel sizes used for feature extraction differ between branches.
The input of a convolution kernel undergoes a linear operation followed by a nonlinear output; the input-output relation can be expressed as
$$y=f\Bigl(\sum_{n}w_{n}x_{n}+b\Bigr),$$
where $x$ is the input signal, $w_{n}$ is the weight connecting the input signal $x$ to the neuron, the convolution kernel size is $N_w\times N_w$ with $N_w\in\{1,2,3\}$, $S_w\in\{1,2\}$ is the stride of the kernel over the input matrix, $b$ is the bias of the neuron's internal state, and $y$ is the neuron output. $f(a)$ is a nonlinear activation function, calculated with the rectified linear unit (Rectified Linear Unit, ReLU):
$$f(a)=\mathrm{ReLU}(a)=\max(0,a)$$
the configurations of the convolutions of each stage and the convolutions branches are shown in table 1.
TABLE 1 Convolution structure parameters of each stage
[Table rendered as an image in the source; parameters not recoverable.]
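Because Table 1 survives only as an image, the exact per-stage parameters are unknown; the stage structure of section 2.1 can nevertheless be sketched in Keras, with the filter count an assumption:

```python
import tensorflow as tf
from tensorflow.keras import layers

def mlmsff_stage(x, filters=64):
    """One MLMSFFCNN stage: branches with 1x1, 2x2, 3x3 kernels, each
    ending in a stride-2 down-sampling convolution; branch outputs are
    summed and a length-128 fusion vector V_u is tapped off."""
    branches = []
    for k in (1, 2, 3):                                   # multi-scale kernels N_w
        b = layers.Conv2D(filters, k, padding='same', activation='relu')(x)
        b = layers.Conv2D(filters, k, strides=2, padding='same',
                          activation='relu')(b)           # S_w = 2 halves the dimension
        branches.append(b)
    g = layers.Add()(branches)                            # sum of the three branches -> G_u
    v = layers.Dense(128)(layers.Flatten()(g))            # fusion vector V_u
    return g, v

inp = tf.keras.Input(shape=(64, 64, 1))                   # composite feature matrix G
g, vs = inp, []
for _ in range(5):                                        # five cascaded stages
    g, v = mlmsff_stage(g)
    vs.append(v)
out = layers.Dense(4, activation='softmax')(              # 4 motor imagery classes
    layers.Dense(256, activation='relu')(layers.Concatenate()(vs)))  # V (640) -> V_f (256)
model = tf.keras.Model(inp, out)
```

With stride-2 down-sampling in every stage, the feature maps shrink 64 → 32 → 16 → 8 → 4 → 2, matching the stage output dimensions listed in the detailed description.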
2.2 The fully connected layers $V_u$, $u\in\{1,2,3,4,5\}$, of the five convolution stages are fused for final classification, giving the fused fully connected layer $V\in\mathbb{R}^{1\times 640}$. A fully connected layer of length 256, $V_f\in\mathbb{R}^{1\times 256}$, is then connected, followed by the classification nodes, whose number equals the number of dataset categories $N_o$.
2.3 The output values of the CNN, $O_c\in\mathbb{R}^{1\times 1}$, $c\in\{1,2,\dots,N_o\}$, are passed through the normalized exponential (Softmax) function to obtain the normalized probability values $P_c(x)$, $c\in\{1,2,\dots,N_o\}$:
$$P_c(x)=\frac{e^{O_c}}{\sum_{c'=1}^{N_o}e^{O_{c'}}}$$
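A minimal numerical illustration of the Softmax normalization in step 2.3 (a max-shift is added for numerical stability; it does not change the result):

```python
import numpy as np

def softmax(o):
    """Normalized exponential over the N_o class outputs."""
    e = np.exp(o - o.max())          # shift by the max for numerical stability
    return e / e.sum()

p = softmax(np.array([1.0, 2.0, 3.0, 4.0]))
```

The resulting vector sums to 1, and the largest network output receives the largest class probability.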
2.4 The classification accuracy is obtained by ten-fold cross-validation in the test process.
Compared with the prior art, the invention has the following advantages:
(1) The invention extracts the time-frequency feature matrix by continuous wavelet transform of the signals; the time-frequency conversion is adaptive and better suited to signals containing abrupt changes, such as MI-EEG. Nine feature points are extracted from each lead's continuous wavelet transform matrix, preserving the time-frequency information. The features of all leads are interpolated into a composite feature matrix that places the time-frequency features at the correct lead positions, overcoming the shortcoming of existing time-frequency extraction methods in reflecting spatial-domain information and laying a foundation for the subsequent neural network to extract spatial information.
(2) In the method, MLMSFFCNN adopts a cascade structure in which each convolution stage contains a multi-branch convolution structure; convolution kernels of different scales perform multi-resolution computation and feature extraction, and multi-resolution feature fusion is carried out at the end of each stage. The CNN can therefore fully exploit local information at different resolutions in the coordinate-based composite feature matrix, improving the recognition rate for EEG signals with spatial distribution characteristics such as MI-EEG. In addition, unlike a sequential CNN, which can use only the final computation, MLMSFFCNN concatenates the feature matrices output by every convolution stage into one fused feature vector, so both the global features near the input and the high-dimensional features near the output are used for classification after fusion. The intermediate convolution results are fully utilized, greatly improving MLMSFFCNN's use of spatial-domain information and the model's decoding capability, making it better suited to identifying signals with spatial distribution characteristics such as MI-EEG.
The method is suitable for electroencephalogram identification and classification tasks with spatial distribution characteristics, and offers broad prospects for research in the BCI field.
Drawings
FIG. 1 is a flow chart of composite feature matrix generation.
FIG. 2 is the cascade structure of MLMSFFCNN.
FIG. 3 is a flow chart of composite feature matrix generation.
Detailed Description
The neural network structure and the back-propagation functions are implemented in the TensorFlow framework, and an NVIDIA GTX 1080Ti GPU is used for the inference computation during neural network training.
The public dataset used by the invention is dataset 2a of the 2008 BCI Competition, acquired with the 22 leads of a standard 10-20 system at a sampling frequency of 250 Hz. The experiments comprise 4 classes of motor imagery tasks for 9 subjects: imagining the left hand, right hand, feet and tongue. Each trial lasts 7.5 seconds: a cross prompt is shown for the first 2 seconds, a motor imagery task cue appears during the following 1 second, and the motor imagery period lasts 3 seconds, followed by a 1.5-second rest period. The 4 s of data covering the motor imagery period (seconds 3-7 of each trial) are taken as experimental data.
Each subject provides a training set and a test set of 288 trials each. Since the method uses ten-fold cross-validation for testing, the original training and test sets are merged, giving 576 trials per subject and 5184 trials in total, with the four task classes equally represented at 1296 trials each.
Based on this motor imagery EEG dataset, the composite feature matrix generation algorithm is shown in FIG. 1; the specific implementation steps of the method are as follows:
1 MI-EEG signal preprocessing
1.1 Extract the EEG signal of the motor imagery period, seconds 3-7 of each trial, corresponding to $N_s=1000$. A single trial gives $X^{r}_{i}\in\mathbb{R}^{22\times 1000}$, where $m\in\{1,2,\dots,22\}$ and $i\in\{1,2,\dots,5184\}$.
1.2 For one trial, the signal of each lead yields a time-frequency matrix after the continuous wavelet transform, $X^{w}_{j,k}\in\mathbb{C}^{65\times 1000}$.
1.3 From $X^{w}_{j,k}$, extract the 8-30 Hz data by intercepting rows 3-14 of the time-frequency matrix on the frequency axis, giving $X^{e}_{j,k}\in\mathbb{C}^{12\times 1000}$.
1.4 Trisect $X^{e}_{j,k}$ along the frequency axis into three sub-matrices $X^{e_p}_{j,k}\in\mathbb{C}^{4\times 1000}$, $p\in\{1,2,3\}$, and sum each sub-matrix by column to obtain three one-dimensional sub-sequences $S^{p}_{k}\in\mathbb{C}^{1\times 1000}$.
1.5 Divide the three sub-sequences equally into three windows; in this dataset the window length is 333 (the last sample is discarded), giving $S^{p,q}_{k}\in\mathbb{C}^{1\times 333}$, $q\in\{1,2,3\}$.
1.6 For each window $S^{p,q}_{k}$, calculate the modulus of each complex value, square it, and average over the window: $F^{p,q}=\frac{1}{333}\sum_{k=1}^{333}\left|S^{p,q}_{k}\right|^{2}$. Each sub-sequence thus yields three features, so each lead yields the nine features $F^{p,q}$, $p,q\in\{1,2,3\}$.
1.7 Obtain the coordinates $C_{n,m}\in\mathbb{R}^{2\times 22}$ from the lead coordinate distribution map provided by the BCI Competition IV-2a dataset.
1.8 Expand each point of $C_{n,m}$, taking it as the centre, into a 3×3 rectangular lattice, giving $C'_{v,m}\in\mathbb{R}^{2\times 198}$.
1.9 According to the coordinates $C'_{v,m}\in\mathbb{R}^{2\times 198}$, perform cubic surface interpolation using the 9 feature values of each lead, and set the matrix dimension to 64×64, obtaining $G\in\mathbb{R}^{64\times 64}$.
2 Identify the wavelet-transform composite feature matrix with MLMSFFCNN; the cascade structure is shown in FIG. 2.
2.1 Taking one computation as an example, $G\in\mathbb{R}^{64\times 64}$ is input to MLMSFFCNN, which comprises five cascaded convolution stages, each with the same convolution structure, as shown in FIG. 2.
2.2 The output dimensions of the input matrix after the five convolution stages are, in turn, $G_1\in\mathbb{R}^{32\times 32}$, $G_2\in\mathbb{R}^{16\times 16}$, $G_3\in\mathbb{R}^{8\times 8}$, $G_4\in\mathbb{R}^{4\times 4}$, $G_5\in\mathbb{R}^{2\times 2}$. The fully connected layer of each stage has dimension $V_u\in\mathbb{R}^{1\times 128}$, $u\in\{1,2,3,4,5\}$.
2.3 Concatenate the 5 fully connected layers to obtain $V\in\mathbb{R}^{1\times 640}$, connect a further fully connected layer $V_f\in\mathbb{R}^{1\times 256}$, and then output to a fully connected layer with 4 nodes.
2.4 Apply the Softmax function to the output data of the 4 nodes to obtain the normalized probabilities $P_c(x)$, $c\in\{1,2,3,4\}$.
2.5 In the test procedure, ten-fold cross-validation is used; the ten fold-wise recognition accuracies are averaged to obtain the final classification accuracy.
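The ten-fold procedure of steps 2.4-2.5 can be sketched generically; `train_and_score` is a hypothetical stand-in for training MLMSFFCNN on one fold and scoring the held-out fold:

```python
import numpy as np
from sklearn.model_selection import KFold

def ten_fold_accuracy(X, y, train_and_score):
    """Average recognition accuracy over ten folds (steps 2.4-2.5)."""
    accs = [train_and_score(X[tr], y[tr], X[te], y[te])
            for tr, te in KFold(n_splits=10, shuffle=True, random_state=0).split(X)]
    return float(np.mean(accs))
```

With the merged 576-trial per-subject data described above, each fold would hold out roughly 58 trials; the shuffling seed is an assumption, as the source does not state how folds are drawn.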

Claims (2)

1. The electroencephalogram identification method based on CWT and MLMSFFCNN is characterized in that: firstly, CWT is performed on each lead's raw MI-EEG signal, and the feature matrix of the 8-30 Hz band is extracted; the feature matrix is divided into 3×3 sub-regions, and the mean of the squared wavelet coefficients in each sub-region is calculated as the sub-region feature, so that 3×3 feature values are obtained for each lead's MI-EEG data; further, the 3×3 feature matrix obtained for each lead is interpolated, by cubic surface interpolation, onto the corresponding lead position in the plane coordinates of the acquisition system, obtaining a 64×64 composite feature matrix containing time-frequency features and spatial information; finally, a multi-level multi-scale feature fusion convolutional neural network realizes the extraction of multi-resolution features from the composite feature matrix and the organic fusion of global and abstract features, so that the MI-EEG recognition effect is improved;
identifying the wavelet transformed composite feature matrix using MLMSFFCNN includes the steps of,
S2.1 MLMSFFCNN comprises five convolution stages; each stage has the same structure and contains 3 convolution branches; at the end of each branch, a down-sampling convolution layer with stride $S_w=2$ halves the dimension of the CNN feature matrix; finally, the outputs of the three convolution branches are summed to give the stage output, expressed as $G_u=\sum_{b=1}^{3}G^{b}_{u}$, where $u$ is the stage index, $u\in\{1,2,3,4,5\}$; $G_u$ then serves as the input matrix of the $(u+1)$-th convolution stage; $G_u$ is flattened into a long vector and connected to a fully connected layer of length 128, whose output is denoted $V_u\in\mathbb{R}^{1\times 128}$; $V_u$ is the vector extracted from each convolution stage for feature fusion;
each convolution stage contains a three-branch structure, each branch contains several convolution layers, and each convolution layer contains several convolution kernels; the convolution kernel sizes used by the branches for feature extraction differ;
the input of a convolution kernel undergoes a linear operation followed by a nonlinear output, and the input-output relation can be expressed as
$$y=f\Bigl(\sum_{n}w_{n}x_{n}+b\Bigr),$$
where $x$ is the input signal, $w_{n}$ is the weight connecting the input signal $x$ to the neuron, the convolution kernel size is $N_w\times N_w$ with $N_w\in\{1,2,3\}$, $S_w\in\{1,2\}$ is the stride of the kernel over the input matrix, $b$ is the bias of the neuron's internal state, and $y$ is the neuron output; $f(a)$ is a nonlinear activation function, calculated with the rectified linear unit as follows:
$$f(a)=\mathrm{ReLU}(a)=\max(0,a)$$
the convolution stages and the convolution branches are configured accordingly;
s2.2 full connection layer V convolving five stages u Fusion is carried out for final classification to obtain a fusion full-connection layer V epsilon R 1 ×640 For final classification, u= {1,2,3,4,5}; after that, a full connection layer with length of 256 is connected f ∈R 1×256 Then connecting to classification nodes, wherein the number of the classification nodes is N of the data set categories o
S2.3 output value O of CNN c ∈R 1×1 ,c∈{1,2,…,N o Calculating normalized exponential function to obtain normalized probability value, namely P c (x),c∈{1,2,…,N o The equation is as follows:
Figure FDA0004132590850000021
and S2.4, the classification accuracy is obtained by adopting ten-fold cross validation in the test process.
2. The CWT and MLMSFFCNN based electroencephalogram identification method of claim 1, wherein: MI-EEG composite feature matrix calculation based on continuous wavelet transformation comprises the following steps:
s1.1 hypothesis
Figure FDA0004132590850000022
Electroencephalogram signals during motor imagery acquired for the mth lead of the ith experiment, where m e {1,2,3, …, N c Lead label for acquiring motor imagery electroencephalogram task, N c Representing the number of leads; i.epsilon. {1,2,3, …, N m },N m Representing the collection experiment times; k is {1,2,3, …, N s },N s Representing the number of sampling points of one experiment; then the ith acquisition experiment obtains the electroencephalogram data as +.>
Figure FDA0004132590850000023
S1.2 pair
Figure FDA0004132590850000024
CWT is carried out, and a wavelet basis function is selected to be cgau8; assume that the original data sampling frequency is f s The wavelet transformation frequency axis variable is j, and the wavelet physical frequency is N f =65, the transformed time-frequency matrix is then obtained as
Figure FDA0004132590850000025
S1.3 according to the activation characteristics of MI in the matrix +.>
Figure FDA0004132590850000026
Intercepting 8-30Hz time-frequency matrix, namely, corresponding to j E {3,4, …,14} partial data, which is recorded as +.>
Figure FDA0004132590850000027
Here j e {1,2, …,12};
S1.4X e j,k Equally divided into three sub-matrices along the longitudinal axis,
Figure FDA0004132590850000028
summing up the 4 eigenvalues of a column in each sub-matrix, thus obtaining three sub-sequences, denoted +.>
Figure FDA0004132590850000029
S1.5 Each subsequence
Figure FDA00041325908500000210
Dividing the time axis k into 3 windows equally to obtain subsequences of the three windows, which are recorded as
Figure FDA00041325908500000211
S1.6 for each window subsequence, taking the square after the modulus of each wavelet coefficient is obtained, and then putting each window subsequence
Figure FDA00041325908500000212
Summing the values and averaging to obtain +.>
Figure FDA00041325908500000213
S1.7 extracting N from the lead distribution plan in the dataset c Plane coordinate information of the leads, noted as
Figure FDA00041325908500000214
Figure FDA00041325908500000215
S1.8 with C n,m Each coordinate point is expanded into a 3X 3 coordinate matrix by taking the coordinates in the coordinate system as the center, and after expansionIs marked as the coordinate system of
Figure FDA00041325908500000216
S1.9 will
Figure FDA00041325908500000217
The total 9 eigenvalues are arranged into a 3 x 3 time-frequency eigenvalue matrix according to C' v,m Performing cubic surface interpolation to arrange the cubic surfaces in a 64×64 matrix according to the lead positions, and marking as G E R 64×64 The method comprises the steps of carrying out a first treatment on the surface of the The total input data is
Figure FDA0004132590850000031
Figure FDA0004132590850000032
/>
CN202010291359.4A 2020-04-14 2020-04-14 Brain electricity identification method based on CWT and MLMSFFCNN Active CN111582041B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010291359.4A CN111582041B (en) 2020-04-14 2020-04-14 Brain electricity identification method based on CWT and MLMSFFCNN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010291359.4A CN111582041B (en) 2020-04-14 2020-04-14 Brain electricity identification method based on CWT and MLMSFFCNN

Publications (2)

Publication Number Publication Date
CN111582041A CN111582041A (en) 2020-08-25
CN111582041B true CN111582041B (en) 2023-06-09

Family

ID=72116631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010291359.4A Active CN111582041B (en) 2020-04-14 2020-04-14 Brain electricity identification method based on CWT and MLMSFFCNN

Country Status (1)

Country Link
CN (1) CN111582041B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112244878B * 2020-08-31 2023-08-04 Beijing University of Technology Method for identifying key frequency band image sequence by using parallel multi-module CNN and LSTM
CN112515685B * 2020-11-10 2023-03-24 Shanghai University Multi-channel electroencephalogram signal channel selection method based on time-frequency co-fusion
CN112932505B * 2021-01-16 2022-08-09 Beijing University of Technology Symbol transfer entropy and brain network characteristic calculation method based on time-frequency energy
CN113780134B * 2021-08-31 2023-05-02 Kunming University of Science and Technology Motor imagery EEG decoding method based on ShuffleNetV2 network
CN114818837B * 2022-06-29 2022-10-14 University of Electronic Science and Technology of China Electroencephalogram signal intelligent processing circuit based on multistage neural network and block calculation
CN117290709B * 2023-11-27 2024-02-02 Xiaozhou Technology Co., Ltd. Method, system, device and storage medium for continuous dynamic intent decoding

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104887224A * 2015-05-29 2015-09-09 Beihang University Epileptic feature extraction and automatic identification method based on electroencephalogram signals
CN109726751A * 2018-12-21 2019-05-07 Beijing University of Technology Method for identifying electroencephalogram imaging graphs based on deep convolutional neural networks
CN109965869A * 2018-12-16 2019-07-05 Beijing University of Technology MI-EEG recognition method based on brain source domain space
CN110693493A * 2019-10-12 2020-01-17 Beijing University of Technology Epileptic electroencephalogram prediction method based on convolutional and recurrent neural networks with combined time multiscale

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104887224A * 2015-05-29 2015-09-09 Beihang University Epileptic feature extraction and automatic identification method based on electroencephalogram signals
CN109965869A * 2018-12-16 2019-07-05 Beijing University of Technology MI-EEG recognition method based on brain source domain space
CN109726751A * 2018-12-21 2019-05-07 Beijing University of Technology Method for identifying electroencephalogram imaging graphs based on deep convolutional neural networks
CN110693493A * 2019-10-12 2020-01-17 Beijing University of Technology Epileptic electroencephalogram prediction method based on convolutional and recurrent neural networks with combined time multiscale

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MING-AI LI et al. A Novel MI-EEG Imaging With the Location Information of Electrodes. IEEE. 2019, 3197-3122. *
LI Ming-ai et al. EEG feature extraction method based on wavelet packet and deep belief network. Journal of Electronic Measurement and Instrumentation. 2018, Vol. 32, No. 32, 111-118. *

Also Published As

Publication number Publication date
CN111582041A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
CN111582041B (en) Brain electricity identification method based on CWT and MLMSFFCNN
WO2021082480A1 (en) Image classification method and related device
CN111950455B Motor imagery electroencephalogram characteristic identification method based on LFFCNN-GRU algorithm model
CN111461201A (en) Sensor data classification method based on phase space reconstruction
CN104077742B (en) Human face sketch synthetic method and system based on Gabor characteristic
CN114119689B (en) Multi-modal medical image unsupervised registration method and system based on deep learning
CN116612335B (en) Few-sample fine-granularity image classification method based on contrast learning
CN115423739A (en) SimpleBaseline-based method for detecting key points of teleoperation mechanical arm
CN114519868A (en) Real-time bone key point identification method and system based on coordinate system regression
CN115965864A (en) Lightweight attention mechanism network for crop disease identification
CN115937693A (en) Road identification method and system based on remote sensing image
CN115170622A (en) Transformer-based medical image registration method and system
CN115171052A (en) Crowded crowd attitude estimation method based on high-resolution context network
CN113705394A (en) Behavior identification method combining long and short time domain features
CN117635563A (en) Multi-mode MRI brain tumor image segmentation method based on modal cross attention
CN116883364A (en) Apple leaf disease identification method based on CNN and Transformer
CN113887656B (en) Hyperspectral image classification method combining deep learning and sparse representation
CN100446034C (en) Image elastic registrating method based on limited sampling global optimisation
CN115937910A (en) Palm print image identification method based on small sample measurement network
CN115017366A (en) Unsupervised video hash retrieval method based on multi-granularity contextualization and multi-structure storage
CN113192076B (en) MRI brain tumor image segmentation method combining classification prediction and multi-scale feature extraction
CN114764575A (en) Multi-modal data classification method based on deep learning and time sequence attention mechanism
CN115223190A (en) Posture estimation method and system based on human body structure guide learning network
CN114495163A (en) Pedestrian re-identification generation learning method based on category activation mapping
CN113743363B (en) Shielded target identification method based on small sample of unmanned aerial vehicle system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant