CN116250846A - Multi-branch motor imagery electroencephalogram signal feature fusion classification method based on data conversion


Info

Publication number
CN116250846A
Authority
CN
China
Prior art keywords
network
branch
classification
characteristic
motor imagery
Prior art date
Legal status
Pending
Application number
CN202310249522.4A
Other languages
Chinese (zh)
Inventor
万金鹏
李宏亮
崔建华
王世森
何乃宇
周毓轩
孟凡满
吴庆波
许林峰
潘力立
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date: 2023-03-15
Filing date: 2023-03-15
Publication date: 2023-06-13
Application filed by University of Electronic Science and Technology of China
Priority to CN202310249522.4A
Publication of CN116250846A


Classifications

    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • A61B 5/369: Electroencephalography [EEG]
    • A61B 5/372: Analysis of electroencephalograms
    • A61B 5/726: Details of waveform analysis characterised by using wavelet transforms
    • A61B 5/7267: Classification of physiological signals or data, e.g. using neural networks, involving training the classification device
    • G06F 3/015: Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection
    • G06F 2203/011: Emotion or mood input determined on the basis of sensed human body parameters


Abstract

The invention provides a multi-branch motor imagery electroencephalogram feature fusion classification method based on data conversion. On the basis of expanding the width of the network structure, i.e. its network branches, the electroencephalogram data are converted into different input formats and processed by several branch networks. The Gramian angular field is used as a newly converted data format fed to the network; compared with the depthwise separable convolution and the time-frequency image, it provides richer features and helps improve the completeness of feature extraction, so that the salient features of different network branches differ and the extracted features complement one another. Converting to different data formats helps train the network to learn different types of features. At the same time, the constraints of the large task, the fine task and the other tasks in the classification problem are used, i.e. multiple constraints are realized through the different task targets of the network, which helps the network extract more universal and more comprehensive features and achieves a better motor imagery electroencephalogram classification result.

Description

Multi-branch motor imagery electroencephalogram signal feature fusion classification method based on data conversion
Technical Field
The invention relates to brain wave signal feature extraction and classification, and in particular to data conversion and multi-branch feature fusion and classification in motor imagery brain wave classification.
Background
A brain-computer interface (BCI) provides a new way for human-computer interaction by analyzing the electrical signals generated by the brain and converting them into actual commands. Motor imagery brain wave signals are widely used in BCI research because they help control devices outside the human body. By correctly decoding the brain electrical signals related to motor imagery, patients suffering from motor diseases can control exoskeletons, wheelchairs and similar equipment, and the signals can also be applied to controlling external robots and intelligent vehicles. Correctly decoding motor imagery brain wave signals and improving the classification accuracy of motor imagery signals is therefore significant.
Brain wave signals are affected by external noise, the subject's own myoelectric noise and other interference, and have a very low signal-to-noise ratio; obtaining correct classifications from brain wave signals is an important component of BCI technology. Conventional methods mainly process the signals in the time domain, the frequency domain and the spatial domain: statistical properties of the waveform are found in the time domain, and the spectral characteristics of the signal are classified in the frequency domain. In the time-frequency domain, local characteristic-scale decomposition (LCD), the discrete wavelet transform (DWT), the flexible analytic wavelet transform (FAWT) and similar methods are adopted, while spatial-domain analysis uses the common spatial pattern (CSP), the filter bank common spatial pattern (FBCSP) and related methods. The machine learning algorithms mainly used include linear discriminant analysis (LDA) and the support vector machine (SVM). In these approaches, feature extraction and classification are separate stages and require manual feature screening and prior knowledge, so the extracted features are often not comprehensive enough, the workload is heavy, and the classification is not accurate enough.
With the development of deep learning, many researchers in the BCI field have been inspired to apply various deep learning methods to brain waves, so that features can be selected and extracted adaptively by a deep learning network, reducing the dependence on prior knowledge and manual screening. Neural networks of various architectures have been used for feature extraction from brain waves, including EEGNet with its depthwise separable convolutions, recurrent neural networks such as LSTM for processing the time series, and methods that convert motor imagery brain wave signals into spectral images fed into a CNN for processing.
Because brain wave data have a low signal-to-noise ratio and are scarce, overfitting easily occurs when training a deep network, so shallow neural networks are often used. Meanwhile, since different brain wave feature extraction networks process different input formats, the extracted features are biased toward a single kind of feature. Current motor imagery brain wave classification tasks generally either classify tasks across several limbs, commonly the four classes of left hand, right hand, tongue and both feet, or classify several different tasks of a single limb, such as making a fist and opening the palm with one hand, or swinging the palm of the left or right hand.
Disclosure of Invention
Aiming at the shallow depth of existing motor imagery brain wave deep learning networks, the invention provides a motor imagery brain wave signal feature fusion classification method that, based on data conversion, increases the number of network branches and the number of network target tasks; by expanding the network structure in width, it resists overfitting to a certain extent and improves accuracy.
The technical scheme adopted to solve this technical problem is a multi-branch motor imagery electroencephalogram signal feature fusion classification method based on data conversion, comprising the following steps:
preprocessing step: preprocessing the brain wave signals to obtain multichannel brain wave time-series signals, and sending the multichannel brain wave time-series signals into three branch networks that simultaneously perform the first branch processing step, the second branch processing step and the third branch processing step;
first branch processing step: performing a time-series-based depthwise separable convolution on the electroencephalogram time-series signal of each channel to generate the first branch feature spectrum, which then enters the fusion step;
second branch processing step: performing a wavelet transform on the electroencephalogram time-series signal of each channel to obtain time-frequency images, superposing the time-frequency images into a multi-channel two-dimensional image, and feeding the two-dimensional image into a VGG-13-based convolutional encoding network to generate the second branch feature spectrum; while the second branch feature spectrum enters the fusion step, it is also input into a VGG-13-based deconvolution decoding network symmetric to the encoding network to generate a corresponding time-frequency image, and the loss between the time-frequency image generated by the decoding network and the time-frequency image input to the encoding network is computed to constrain the training of the second branch of the motor imagery electroencephalogram feature classification network;
third branch processing step: constructing a Gramian angular field for the electroencephalogram time-series signal of each channel, superposing the per-channel Gramian angular fields into a multi-channel two-dimensional image, and feeding the two-dimensional image into a VGG-13-based convolutional encoding network to generate the third branch feature spectrum; while the third branch feature spectrum enters the fusion step, it is also input into a VGG-13-based deconvolution decoding network symmetric to the encoding network to generate a corresponding Gramian angular field, and the loss between the Gramian angular field generated by the decoding network and the Gramian angular field input to the encoding network is computed to constrain the training of the third branch of the motor imagery electroencephalogram signal feature classification network;
fusion step: the generated first, second and third branch feature spectra are each sent into a channel attention module to generate the first, second and third branch attention heat maps; the attention heat map of each branch is multiplied with the feature spectrum of that branch to obtain the new feature spectra of the three branches, which are concatenated and then enter the fine classification step and the large classification step simultaneously;
fine classification step: the concatenated feature spectrum is sent through two fully connected layers and a Softmax layer for fine classification; the Softmax layer outputs the subdivided action class, and the fine classification loss is computed to constrain the fine classification training of the motor imagery electroencephalogram feature classification network;
large classification step: the concatenated feature spectrum is sent through two fully connected layers and a Softmax layer for large classification; the Softmax layer outputs a binary result, the left-hand action class or the right-hand action class, and the large classification loss is computed to constrain the large classification training of the motor imagery electroencephalogram feature classification network;
testing step: the motor imagery electroencephalogram signal feature classification network obtained after training is used to classify motor imagery electroencephalogram signals.
The task target for classifying motor imagery brain wave signals is usually just to output the correct class. Adding task targets helps the deep learning network extract more features; features from different tasks complement one another and classification performance improves. The invention converts the brain wave data into different input formats on the basis of expanding the width of the network structure, i.e. its network branches, and processes them with several branch networks, which helps improve the completeness of feature extraction, makes the salient features of different network branches differ, and lets the extracted features complement one another. Converting to different data formats helps train the network to learn different types of features. At the same time, the constraints of the large task, the fine task and the other tasks in the classification problem are used, i.e. multiple constraints are realized through different task targets of the network. Setting different task targets when extracting features from the brain wave data keeps the extracted features from being limited to the classification task, overcomes the problem that features extracted for classification alone are not comprehensive enough, and helps the network extract more universal and more comprehensive features.
The benefit of the invention is that the Gramian angular field is used as a newly converted data format fed to the network, providing richer features than the depthwise separable convolution and the time-frequency image and improving the robustness of the extracted features; at the same time, reconstruction loss functions for the features extracted from the different data formats are used together with the loss functions of the large task and the subdivided task in the specific motion classification problem, yielding a better classification result.
Drawings
FIG. 1 is a flow chart of an embodiment.
Detailed Description
When several different tasks of different limbs are studied, usually only the subdivided tasks are classified. The self-collected dataset used in the embodiment contains 6 actions of the left and right hands; the six actions are divided into the two large categories of left hand and right hand, and into 6 subdivided categories, such as making and relaxing a fist with the left hand, making and relaxing a fist with the right hand, and swinging the left palm left and right. The loss functions are as follows: the loss functions of both the large and fine classifications contain a cross-entropy loss that constrains the predicted class, and the large classification loss additionally contains a center loss on the latent features of the large classes; the loss function for image reconstruction (Gramian angular field and time-frequency image) is the MSE loss.
The embodiment adopts three branches to extract the electroencephalogram features:
the first branch adopts an EEGNet-based depthwise separable convolutional network and extracts features directly from the original input electroencephalogram signal;
the second branch converts the brain wave signals into time-frequency domain images by a wavelet transform and extracts features with a VGG-13-based network;
the third branch converts the brain wave signals into Gramian angular field (GAF) images and extracts features with a VGG-13-based network.
Extracting features after the brain wave signals have been converted into different types of data, combined with the direct processing of the original input, lets the network learn from a more comprehensive representation of the signal.
In the first branch, features are extracted in the EEGNet depthwise separable convolution manner: first, the single-channel time-series signals are convolved in parallel to obtain the temporal information on each channel; then a depthwise convolution is applied, connected independently to each feature map; finally, a convolution over all feature maps fuses the information. The extracted feature spectrum can be regarded as the shallow features exhibited by the brain wave signal. The obtained feature spectrum is then passed through a channel attention module to generate an attention heat map, and the attention heat map is multiplied with the feature spectrum to obtain the corresponding new feature spectrum.
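For illustration only, a minimal PyTorch sketch of such an EEGNet-style branch follows; the kernel sizes, filter counts and electrode count are assumptions made for the sketch, not the configuration claimed by the invention.

import torch
import torch.nn as nn

class DepthwiseSeparableBranch(nn.Module):
    """EEGNet-style first branch: a temporal convolution applied to every
    channel in parallel, a depthwise convolution attached independently to
    each feature map, then a pointwise convolution fusing all feature maps."""
    def __init__(self, n_electrodes=22, f1=8, depth=2, f2=16):
        super().__init__()
        self.temporal = nn.Conv2d(1, f1, kernel_size=(1, 64), padding=(0, 32), bias=False)
        self.depthwise = nn.Conv2d(f1, f1 * depth, kernel_size=(n_electrodes, 1),
                                   groups=f1, bias=False)
        self.pointwise = nn.Conv2d(f1 * depth, f2, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(f2)
        self.act = nn.ELU()

    def forward(self, x):            # x: (batch, 1, n_electrodes, n_samples)
        x = self.temporal(x)         # temporal information on each channel
        x = self.depthwise(x)        # one spatial filter per temporal feature map
        x = self.pointwise(x)        # fuse information across all feature maps
        return self.act(self.bn(x))  # first-branch feature spectrum

The feature spectrum returned here is what the channel attention module reweights.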
In the second branch, the brain wave signal is converted into a time-frequency domain image WT_x by a wavelet transform; that is, a wavelet transform is applied to each channel of the input brain wave signal. The continuous wavelet transform is

$$WT_x(a,\tau)=\frac{1}{\sqrt{a}}\int_{-\infty}^{+\infty} x(t)\,\psi^{*}\!\left(\frac{t-\tau}{a}\right)\mathrm{d}t \qquad (1)$$

where $a$ is the scale factor, $\tau$ reflects the displacement, $x(t)$ is the input brain wave signal, $\psi(t)$ is the basic wavelet function, and $\psi^{*}(t)$ is the complex conjugate of the basic wavelet function.
The decomposition is computed with the discrete wavelet transform:

$$C_{j+1}(n)=\sum_{k\in Z} h(k-2n)\,C_{j}(k)$$
$$D_{j+1}(n)=\sum_{k\in Z} g(k-2n)\,C_{j}(k) \qquad (2)$$

where $h(n)$ and $g(n)$ are a pair of complementary conjugate filters determined by the wavelet function, $h(n)$ being a low-pass and $g(n)$ a high-pass filter; $j$ is the scale of the wavelet decomposition, $k$ is the index over the wavelet filter, $n$ is the position of the data in the sequence, $Z$ is the support of the wavelet filter, and $C_j$ and $D_j$ are the approximation and detail parts of the electroencephalogram data at scale $j$.
A wavelet transform is applied to the brain wave signals of the m channels to obtain m two-dimensional time-frequency images; the 3-40 Hz part of each image is cropped, each image is resized to 224 x 224, and the m images are then superposed into one 224 x 224 x m image.
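A minimal sketch of this conversion follows, assuming PyWavelets with a Morlet wavelet, OpenCV for resizing, and a 250 Hz sampling rate; none of these choices are fixed by the invention.

import numpy as np
import pywt
import cv2  # resizing routine; any equivalent works

def eeg_to_timefreq_stack(eeg, fs=250, size=224):
    """eeg: (m, n_samples) array -> (size, size, m) stacked time-frequency image.
    Scales are chosen so the retained band covers roughly 3-40 Hz."""
    freqs = np.linspace(3.0, 40.0, 80)
    scales = pywt.central_frequency('morl') * fs / freqs
    planes = []
    for channel in eeg:
        # continuous wavelet transform of one channel
        coef, _ = pywt.cwt(channel, scales, 'morl', sampling_period=1.0 / fs)
        planes.append(cv2.resize(np.abs(coef), (size, size)))
    return np.stack(planes, axis=-1)  # 224 x 224 x m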
Compared with data in the original brain wave signal format, this image represents the signal in a combined time-frequency form, so an image feature extraction network can be used to extract the time-domain and frequency-domain features in the brain wave signal.
The image is sent into the VGG-13-based convolutional encoding network to obtain the corresponding feature spectrum, and a new feature spectrum is obtained after the channel attention module. At the same time, the original feature spectrum is sent into a deconvolution network approximately symmetric to the convolutional encoding network to regenerate the original time-frequency domain image. This constraint makes the intermediately generated feature spectrum fully contain the important features of the time-frequency image.
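The encode-then-reconstruct constraint can be sketched as follows; the sketch uses a much shallower encoder than VGG-13, with illustrative layer widths that are assumptions, not the claimed architecture.

import torch
import torch.nn as nn

class ConvAutoencoderBranch(nn.Module):
    """Convolutional encoder with a roughly symmetric deconvolution decoder;
    the decoder output is compared with the input image by an MSE loss."""
    def __init__(self, in_ch):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                              # 224 -> 112
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                              # 112 -> 56
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(inplace=True),  # 56 -> 112
            nn.ConvTranspose2d(64, in_ch, 2, stride=2),                       # 112 -> 224
        )

    def forward(self, img):
        feat = self.encoder(img)    # branch feature spectrum
        recon = self.decoder(feat)  # reconstructed input image
        return feat, recon

# training-time reconstruction constraint:
# feat, recon = branch(img); rec_loss = nn.functional.mse_loss(recon, img)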
The third branch converts the electroencephalogram signal into a Gramian angular field (GAF). Let $x_i$ be the $i$-th value in the brain wave signal of a given channel. The signal is first normalized to the interval $[-1,1]$, giving the normalized value corresponding to $x_i$:

$$\tilde{x}_i=\frac{\bigl(x_i-\max(x)\bigr)+\bigl(x_i-\min(x)\bigr)}{\max(x)-\min(x)} \qquad (3)$$

The normalized values are converted to polar coordinates:

$$\phi_i=\arccos(\tilde{x}_i),\quad \tilde{x}_i\in[-1,1] \qquad (4)$$

and the Gramian angular field of each channel is computed:

$$GAF=\bigl[\cos(\phi_i+\phi_j)\bigr]_{i,j=1,\dots,T} \qquad (5)$$

where $T$ is the total number of data points in a single channel. After computing the Gramian angular field of the brain wave signals of the m channels, m two-dimensional images are obtained; each image is resized to 224 x 224, and the m images are superposed into one 224 x 224 x m image.
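A minimal NumPy sketch of equations (3)-(5) for a single channel follows; the resizing and stacking are assumed to mirror the time-frequency branch.

import numpy as np

def gramian_angular_field(x):
    """x: (T,) single-channel EEG signal -> (T, T) Gramian angular field."""
    x_tilde = ((x - x.max()) + (x - x.min())) / (x.max() - x.min())  # eq. (3), range [-1, 1]
    x_tilde = np.clip(x_tilde, -1.0, 1.0)        # guard against floating point overshoot
    phi = np.arccos(x_tilde)                     # eq. (4), polar angular coordinate
    return np.cos(phi[:, None] + phi[None, :])   # eq. (5): G[i, j] = cos(phi_i + phi_j)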
The advantage of this image is that it preserves the temporal and spatial information of the original brain wave channel signal, and this part of the features can be extracted with image convolutions.
The image is sent into the VGG-13-based convolutional encoding network to obtain the corresponding feature spectrum, and a new feature spectrum is obtained after the channel attention module. At the same time, the original feature spectrum is sent into a deconvolution network approximately symmetric to the convolutional encoding network to regenerate the original Gramian angular field image. This constraint makes the intermediately generated feature spectrum fully contain the important features of the Gramian angular field image.
The new feature spectra obtained by the three branches are concatenated and then classified through two parallel sets of fully connected layers and softmax layers: the first group classifies into the left-hand or right-hand action class, and the second group classifies into the finer subdivided classes.
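A sketch of the fusion and the two classification heads follows; the channel attention is written in squeeze-and-excitation form, which is one plausible reading of the channel attention module, and the hidden sizes are assumptions.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Produces a per-channel attention heat map and reweights the feature spectrum."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, feat):                          # feat: (B, C, H, W)
        weights = self.fc(feat.mean(dim=(2, 3)))      # attention heat map, (B, C)
        return feat * weights[:, :, None, None]       # new feature spectrum

class DualHeadClassifier(nn.Module):
    """Concatenated branch features feed two parallel heads: six subdivided
    actions (fine classification) and left/right hand (large classification)."""
    def __init__(self, feat_dim, n_fine=6, n_large=2, hidden=256):
        super().__init__()
        self.fine = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(inplace=True),
                                  nn.Linear(hidden, n_fine))
        self.large = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(inplace=True),
                                   nn.Linear(hidden, n_large))

    def forward(self, fused):                         # fused: (B, feat_dim)
        return self.fine(fused), self.large(fused)    # logits; softmax applied in the loss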
The embodiment is implemented on the PyTorch framework and, as shown in FIG. 1, mainly comprises the following steps: extracting a feature spectrum by depthwise separable convolution; converting to a time-frequency image by wavelet transform and extracting its feature spectrum; applying the Gramian angular field transform and extracting its feature spectrum; merging the feature spectra for the classification tasks; and decoding the feature spectra with the decoding networks to regenerate the original input signals.
The training process of the motor imagery electroencephalogram feature classification network comprises the following steps:
(1) Inputting the preprocessed brain wave signals into the three branch networks, and performing step (2), step (3) and step (4) simultaneously;
(2) Performing a time-series-based depthwise separable convolution on each channel of the brain wave signal to generate a feature spectrum, which then enters step (5);
(3) Performing a wavelet transform on the brain wave time series of each channel to obtain time-frequency images, superposing them into an m-channel two-dimensional image, and sending the image into the VGG-13-based convolutional encoding network to generate a feature spectrum; while the feature spectrum enters step (5), inputting it into the VGG-13-based deconvolution decoding network symmetric to the encoding network to generate a corresponding time-frequency image, and computing the loss between the time-frequency image generated by the decoding network and the one input to the encoding network; the loss function is the MSE loss:

$$MSE=\frac{1}{o}\sum_{i=1}^{o}\bigl(u_i-f(v_i)\bigr)^2 \qquad (6)$$

where $u_i$ and $f(v_i)$ are the true value and the generated value of the $i$-th pixel of the image, and $o$ is the number of pixels.
(4) Constructing a Gramian angular field for the brain wave time series of each channel, superposing the per-channel Gramian angular fields into an m-channel two-dimensional image, and sending the image into the VGG-13-based convolutional encoding network to generate a feature spectrum; while the feature spectrum enters step (5), inputting it into the VGG-13-based deconvolution decoding network symmetric to the encoding network to generate a corresponding Gramian angular field, and computing the loss between the Gramian angular field generated by the decoding network and the one input to the encoding network;
(5) Sending each generated feature spectrum into a channel attention module to generate an attention heat map, multiplying the attention heat map with the feature spectrum to obtain a new feature spectrum, concatenating the new feature spectra of all branches, and then inputting the result into step (6) and step (7) respectively;
(6) Sending the concatenated feature spectrum into the two fully connected layers and a Softmax layer for fine classification; the Softmax layer outputs the subdivided action class, such as the left palm moving up and down, the right palm moving up and down, the left hand making and relaxing a fist, the right hand making and relaxing a fist, the left palm swinging left and right about the wrist, or the right palm swinging left and right about the wrist, and the fine classification loss is computed; the loss function is the cross-entropy loss:

$$L_{ce}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{S} y_{ic}\log(p_{ic}) \qquad (7)$$

where $S$ is the number of subdivided action categories, $c$ denotes a specific subdivided action category, $i$ indexes the $i$-th of the $N$ brain wave samples, $p_{ic}$ is the probability that the $i$-th sample is predicted as category $c$, and $y_{ic}$ is an indicator function that takes 1 when the actual category of the $i$-th sample is $c$ and 0 otherwise.
(7) Sending the concatenated feature spectrum into the two fully connected layers and a Softmax layer for large classification; the Softmax layer outputs the binary result, the left-hand action class or the right-hand action class, and the large classification loss is computed; here the input features, after one fully connected layer, are additionally subject to a center loss:

$$L_{center}=\frac{1}{2}\sum_{i=1}^{b}\bigl\|fv_i-cv_{c_i}\bigr\|_2^2 \qquad (8)$$

where $b$ is the batch size used when training the network, $i$ indexes the $i$-th training sample of the batch, $fv_i$ is the high-level feature vector of the $i$-th sample after one fully connected layer, $c_i$ is the true category of the $i$-th sample, and $cv_{c_i}$ is the feature center vector of that true category.
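A minimal PyTorch sketch of such a center loss with learnable class centers follows; the center update strategy (plain gradient descent on the centers) is an assumption, since the patent does not specify it.

import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Center loss on the large-classification latent features: pulls each
    high-level feature vector fv_i toward the center of its true class."""
    def __init__(self, n_classes=2, feat_dim=256):
        super().__init__()
        # one learnable feature center vector per class (cv in eq. (8))
        self.centers = nn.Parameter(torch.randn(n_classes, feat_dim))

    def forward(self, fv, labels):        # fv: (b, feat_dim), labels: (b,) long tensor
        diff = fv - self.centers[labels]  # fv_i - cv_{c_i}
        return 0.5 * (diff ** 2).sum()    # eq. (8), summed over the batch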
(8) The different loss functions of the four tasks constrain different parts of the training process, and the weight of each loss in the overall loss function takes different values according to the training task; the motor imagery electroencephalogram feature classification network obtained after training is used for motor imagery electroencephalogram classification.
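As an illustrative sketch of this weighted multi-task objective (the weight values are placeholders, since the patent only states that they vary with the training task):

def total_loss(ce_fine, ce_large, center, mse_tf, mse_gaf,
               weights=(1.0, 1.0, 0.5, 0.5)):
    """Weighted sum of the four task losses; the weight values are assumptions."""
    w_fine, w_large, w_tf, w_gaf = weights
    return (w_fine * ce_fine                  # fine classification cross entropy
            + w_large * (ce_large + center)   # large classification cross entropy + center loss
            + w_tf * mse_tf                   # time-frequency reconstruction MSE (branch 2)
            + w_gaf * mse_gaf)                # Gramian angular field reconstruction MSE (branch 3)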

Claims (1)

1. A multi-branch motor imagery electroencephalogram signal feature fusion classification method based on data conversion, characterized by comprising the following steps:
preprocessing step: preprocessing the brain wave signals to obtain multichannel brain wave time-series signals, and sending the multichannel brain wave time-series signals into three branch networks that simultaneously perform the first branch processing step, the second branch processing step and the third branch processing step;
first branch processing step: performing a time-series-based depthwise separable convolution on the electroencephalogram time-series signal of each channel to generate the first branch feature spectrum, which then enters the fusion step;
second branch processing step: performing a wavelet transform on the electroencephalogram time-series signal of each channel to obtain time-frequency images, superposing the time-frequency images into a multi-channel two-dimensional image, and feeding the two-dimensional image into a VGG-13-based convolutional encoding network to generate the second branch feature spectrum; while the second branch feature spectrum enters the fusion step, it is also input into a VGG-13-based deconvolution decoding network symmetric to the encoding network to generate a corresponding time-frequency image, and the loss between the time-frequency image generated by the decoding network and the time-frequency image input to the encoding network is computed to constrain the training of the second branch of the motor imagery electroencephalogram feature classification network;
third branch processing step: constructing a Gramian angular field for the electroencephalogram time-series signal of each channel, superposing the per-channel Gramian angular fields into a multi-channel two-dimensional image, and feeding the two-dimensional image into a VGG-13-based convolutional encoding network to generate the third branch feature spectrum; while the third branch feature spectrum enters the fusion step, it is also input into a VGG-13-based deconvolution decoding network symmetric to the encoding network to generate a corresponding Gramian angular field, and the loss between the Gramian angular field generated by the decoding network and the Gramian angular field input to the encoding network is computed to constrain the training of the third branch of the motor imagery electroencephalogram signal feature classification network;
fusion step: the generated first, second and third branch feature spectra are each sent into a channel attention module to generate the first, second and third branch attention heat maps; the attention heat map of each branch is multiplied with the feature spectrum of that branch to obtain the new feature spectra of the three branches, which are concatenated and then enter the fine classification step and the large classification step simultaneously;
fine classification step: the concatenated feature spectrum is sent through two fully connected layers and a Softmax layer for fine classification; the Softmax layer outputs the subdivided action class, and the fine classification loss is computed to constrain the fine classification training of the motor imagery electroencephalogram feature classification network;
large classification step: the concatenated feature spectrum is sent through two fully connected layers and a Softmax layer for large classification; the Softmax layer outputs a binary result, the left-hand action class or the right-hand action class, and the large classification loss is computed to constrain the large classification training of the motor imagery electroencephalogram feature classification network;
testing step: the motor imagery electroencephalogram signal feature classification network obtained after training is used to classify motor imagery electroencephalogram signals.
CN202310249522.4A, filed 2023-03-15 (priority date 2023-03-15): Multi-branch motor imagery electroencephalogram signal feature fusion classification method based on data conversion. Status: Pending. Publication: CN116250846A (en).

Priority Applications (1)

Application Number: CN202310249522.4A; Priority Date: 2023-03-15; Filing Date: 2023-03-15; Title: Multi-branch motor imagery electroencephalogram signal feature fusion classification method based on data conversion (CN116250846A)


Publications (1)

Publication Number: CN116250846A (en); Publication Date: 2023-06-13

Family

ID=86679236

Family Applications (1)

Application Number: CN202310249522.4A; Status: Pending; Publication: CN116250846A (en); Title: Multi-branch motor imagery electroencephalogram signal feature fusion classification method based on data conversion

Country Status (1)

Country Link
CN (1) CN116250846A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number: CN116842329A (en) *; Priority date: 2023-07-10; Publication date: 2023-10-03; Assignee: Hubei University; Title: Motor imagery task classification method and system based on electroencephalogram signals and deep learning



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination