CN115969381B - Electroencephalogram signal analysis method based on multi-band fusion and space-time Transformer - Google Patents

Electroencephalogram signal analysis method based on multi-band fusion and space-time Transformer

Info

Publication number
CN115969381B
CN115969381B
Authority
CN
China
Prior art keywords
time
space
attention
band
self
Prior art date
Legal status
Active
Application number
CN202211433136.2A
Other languages
Chinese (zh)
Other versions
CN115969381A (en)
Inventor
张枢
史恩泽
康艳晴
武晋茹
喻四刚
王嘉琪
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN202211433136.2A
Publication of CN115969381A
Application granted
Publication of CN115969381B
Active
Anticipated expiration

Landscapes

  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses an electroencephalogram signal analysis method based on multi-band fusion and a space-time Transformer. The preprocessed electroencephalogram signal is first converted into multi-band images of corresponding time length, preserving as much of the three-dimensional spatial information among the sampling channels as possible; a band attention module is then used to compute an attention map over the stacked multi-band images and to infer the fused feature map; a temporal/spatial self-attention module extracts spatio-temporal features that characterize and distinguish dynamic brain states across multiple frames and multiple regions; finally, a multi-layer perceptron learns the category information of the features. The trained MEET model can represent and analyze the multi-scale time series of human electroencephalogram signals. The invention effectively improves the accuracy of electroencephalogram signal classification and addresses the electroencephalogram classification task to a certain extent.

Description

Electroencephalogram signal analysis method based on multi-band fusion and space-time Transformer
Technical Field
The invention belongs to the technical field of electroencephalogram analysis, and particularly relates to an electroencephalogram analysis method.
Background
Electroencephalography (EEG) is one of the most widely used and inexpensive neuroimaging techniques, and it requires advanced and powerful learning algorithms for modeling and analysis. Given the multi-scale nature of the EEG signal, introducing the multi-band concept into the design of a Transformer architecture for modeling EEG signals is crucial. Multi-band fusion of EEG signals has been studied extensively with both traditional signal processing methods and deep learning methods, for example by filtering the signal into different frequency ranges and fusing the filtered bands in a feature space. However, the training process of such neural network models is usually slow and complex, and, beyond the noise limitations introduced by the models, it is difficult to fuse all frequency bands of the electroencephalogram signal effectively. Moreover, most existing deep learning models adopt a late-fusion strategy: before fusion, the meaningful and discriminative features are still represented by single frequency bands, which leads to model redundancy and loss of overall information.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an electroencephalogram signal analysis method based on multi-band fusion and a space-time Transformer (Multi-band EEG Transformer, MEET). The preprocessed electroencephalogram signal is first converted into multi-band images of corresponding time length, preserving as much of the three-dimensional spatial information among the sampling channels as possible; a band attention module is then used to compute an attention map over the stacked multi-band images and to infer the fused feature map; a temporal/spatial self-attention module extracts spatio-temporal features that characterize and distinguish dynamic brain states across multiple frames and multiple regions; finally, a multi-layer perceptron learns the category information of the features. The trained MEET model can represent and analyze the multi-scale time series of human electroencephalogram signals. The invention effectively improves the accuracy of electroencephalogram signal classification and addresses the electroencephalogram classification task to a certain extent.
The technical solution adopted by the invention to solve the above technical problem comprises the following steps:
Step 1: preprocessing an electroencephalogram signal into a multi-band image;
Downsampling the electroencephalogram signal with the time length of T to 200 Hz, and decomposing it into five frequency bands: Delta (1-4 Hz), Theta (4-8 Hz), Alpha (8-14 Hz), Beta (14-31 Hz) and Gamma (31-50 Hz); differential entropy is used as a feature extractor, differential entropy feature extraction is independently performed for each EEG channel on each of the five frequency bands, and then the three-dimensional electrode coordinates are mapped to a two-dimensional plane using the AEP method, so that the one-dimensional differential entropy feature vectors are recombined into a two-dimensional scatter plot. Then, the scatter plot is interpolated using the C-T method to generate a feature map with a resolution of 32 × 32; the feature maps of the five frequency bands are stacked together;
The input EEG data is represented as a four-dimensional feature tensor $x_i \in \mathbb{R}^{H \times W \times 5 \times T}$, where $H \times W$ is the resolution of the feature map;
step 2: fusion of multi-band characteristics;
The three-dimensional multi-band feature tensor $x_i \in \mathbb{R}^{H \times W \times 5}$ at time $i$ is reduced by maximum pooling and average pooling, yielding two descriptors denoted $F_{avg}$ and $F_{max}$;
$F_{avg}$ and $F_{max}$ are then fed into a two-layer weight-shared multi-layer perceptron to generate band attention maps denoted $A_{avg}$ and $A_{max}$; the first layer consists of $5/r$ neurons, where $r$ is the reduction rate, and is activated with the ReLU function, and the second layer consists of 5 neurons; $A_{avg}$ and $A_{max}$ are combined by element-wise summation to generate the final band attention map; the calculation formula is as follows:
$$M_c(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(x_i)) + \mathrm{MLP}(\mathrm{MaxPool}(x_i))\big) = \sigma\big(W_1(W_0(F_{avg})) + W_1(W_0(F_{max}))\big)$$
where $\sigma$ represents the activation function, $x_i$ is the input feature, and $W_0 \in \mathbb{R}^{5/r \times 5}$ and $W_1 \in \mathbb{R}^{5 \times 5/r}$ represent the parameter matrices of the multi-layer perceptron;
Step 3: extracting time sequence/space characteristics;
The MEET model based on multi-band fusion and a space-time Transformer learns the temporal dependences and spatial relations of complex EEG signals: a temporal self-attention module learns the temporal dependences between different frames, and a spatial self-attention module learns the spatial relations between different positions within the same frame; in the temporal self-attention module, tensor blocks at the same spatial position in t consecutive frames are grouped, the tensor blocks in each group are vectorized, and multi-head self-attention is computed over the resulting query/key/value vectors; the two self-attention weights $\alpha$ for the query block $(p, t)$ are computed as follows:
$$\alpha^{(l,a)\,\mathrm{time}}_{(p,t)} = \mathrm{SM}\!\left(\frac{q^{(l,a)\top}_{(p,t)}}{\sqrt{D_h}} \cdot \left\{k^{(l,a)}_{(p,t')}\right\}_{t'=1,\dots,T}\right), \qquad \alpha^{(l,a)\,\mathrm{space}}_{(p,t)} = \mathrm{SM}\!\left(\frac{q^{(l,a)\top}_{(p,t)}}{\sqrt{D_h}} \cdot \left\{k^{(l,a)}_{(p',t)}\right\}_{p'=1,\dots,N}\right)$$
where $l$ and $a$ respectively denote the encoder layer index and the index of the multi-head self-attention module, $p$ and $t$ respectively denote the position index and time index of the query block, SM is the softmax activation function, $q/k$ denote the query/key vectors, $D_h$ is the dimension of each attention head, and $N$ and $T$ denote the number of tensor blocks per frame and the number of frames in the sequence; the computation of the spatial attention module is based on the results of the temporal attention module;
Step 4: and (3) preprocessing the data set in the step (1), and inputting the preprocessed data set in the step (2) and the step (3) to obtain a final model output, namely a final classification result.
Preferably, the depth of the multi-band fusion and space-time Transformer model MEET is 3, the hidden-layer dimension is 768, and the multi-layer perceptron dimension is 3072.
Preferably, the depth of the multi-band fusion and space-time Transformer model MEET is 6, the hidden-layer dimension is 768, and the multi-layer perceptron dimension is 3072.
Preferably, the depth of the multi-band fusion and space-time Transformer model MEET is 12, the hidden-layer dimension is 1024, and the multi-layer perceptron dimension is 4096.
The beneficial effects of the invention are as follows:
1. The method is of great significance for electroencephalogram analysis: the Transformer is used as a backbone network to effectively model the electroencephalogram signal and distinguish brain states, and, more importantly, the multi-band fusion strategy of MEET can significantly improve classification performance while consuming markedly fewer training resources than other advanced methods.
2. Electroencephalogram signal classification plays an important role in electroencephalogram analysis, BCI, and neuroscience. For BCI applications, real-time analysis of brain states is critical; MEET has pre-training and fine-tuning strategies, and it has great potential for online brain-state inference under limited time and resource budgets.
Drawings
Fig. 1 is a schematic diagram of a MEET model network structure according to the present invention.
Fig. 2 is a schematic diagram of a frequency band attention module and a temporal/spatial self-attention module according to the present invention.
Detailed Description
The invention will be further described with reference to the drawings and examples.
The invention provides an electroencephalogram signal analysis method based on multi-band fusion and a space-time Transformer, which uses a model built on a deep self-attention (Transformer) network to combine a band attention module with temporal/spatial self-attention modules and learn latent feature information.
As shown in fig. 1, an electroencephalogram signal analysis method based on multi-band fusion and a space-time Transformer comprises the following steps:
step 1: preprocessing an electroencephalogram signal into a multi-band image with corresponding time length;
As shown on the left side of fig. 1, after the multi-band features are extracted for each band of the input electroencephalogram signal, the multi-band fusion module uses the band attention block to derive a feature map that linearly combines the bands with learnable weights. The electroencephalogram signal with time length T is downsampled to 200 Hz and decomposed into five frequency bands: Delta (1-4 Hz), Theta (4-8 Hz), Alpha (8-14 Hz), Beta (14-31 Hz), and Gamma (31-50 Hz). Differential entropy, widely used in electroencephalogram analysis, is adopted as the feature extractor; after differential entropy features are extracted independently for each EEG channel on each of the five frequency bands, the AEP (Azimuthal Equidistant Projection) method is used to map the three-dimensional electrode coordinates to a two-dimensional plane. The one-dimensional differential entropy feature vector is thus reorganized into a two-dimensional scatter plot. The scatter plot is then interpolated with the C-T scheme to generate a feature map with a resolution of 32 × 32, and the feature maps of the five frequency bands are stacked together. Through the above steps, the input EEG data is represented as a four-dimensional feature tensor $x_i \in \mathbb{R}^{H \times W \times 5 \times T}$, where $H \times W$ is the resolution of the feature map and $T$ is the time length of the feature-map sequence;
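As a non-limiting illustration of this preprocessing step, the sketch below decomposes one EEG frame into the five bands, computes differential entropy per channel and band, and interpolates the values onto a 32 × 32 grid. It assumes Butterworth band-pass filters, a Gaussian assumption for the differential entropy, a pre-computed 2D electrode coordinate table standing in for the AEP projection, and that the C-T method refers to Clough-Tocher interpolation (scipy's CloughTocher2DInterpolator); these are illustrative assumptions, not the exact implementation of the invention.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.interpolate import CloughTocher2DInterpolator

BANDS = {"Delta": (1, 4), "Theta": (4, 8), "Alpha": (8, 14),
         "Beta": (14, 31), "Gamma": (31, 50)}
FS = 200  # sampling rate after downsampling, in Hz


def differential_entropy(x):
    # Differential entropy of a band-limited signal under a Gaussian assumption:
    # DE = 0.5 * ln(2 * pi * e * variance)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x) + 1e-12)


def band_de_features(frame, fs=FS):
    """frame: (channels, samples) for one time step -> (channels, 5) DE features."""
    feats = []
    for low, high in BANDS.values():
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, frame, axis=-1)
        feats.append([differential_entropy(ch) for ch in filtered])
    return np.asarray(feats).T  # (channels, 5)


def to_multiband_image(de_feats, coords2d, size=32):
    """Interpolate per-band DE values of all channels onto a size x size grid -> (H, W, 5)."""
    gx, gy = np.meshgrid(
        np.linspace(coords2d[:, 0].min(), coords2d[:, 0].max(), size),
        np.linspace(coords2d[:, 1].min(), coords2d[:, 1].max(), size))
    image = np.zeros((size, size, de_feats.shape[1]))
    for band in range(de_feats.shape[1]):
        interp = CloughTocher2DInterpolator(coords2d, de_feats[:, band], fill_value=0.0)
        image[:, :, band] = interp(gx, gy)
    return image
```

Repeating this for every time step and stacking the resulting frames yields the four-dimensional tensor $x_i \in \mathbb{R}^{H \times W \times 5 \times T}$ described above.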
step 2: fusion of multi-band characteristics;
The three-dimensional multi-band feature tensor $x_i \in \mathbb{R}^{H \times W \times 5}$ at time $i$ is reduced by maximum pooling (MaxPool) and average pooling (AvgPool), yielding two descriptors denoted $F_{avg}$ and $F_{max}$, respectively. $F_{avg}$ and $F_{max}$ are then fed into a two-layer weight-shared multi-layer perceptron (MLP) to generate band attention maps denoted $A_{avg}$ and $A_{max}$. The first layer consists of $5/r$ neurons ($r$ is the reduction rate) and is activated with the ReLU function; the second layer consists of 5 neurons. $A_{avg}$ and $A_{max}$ are combined by element-wise summation to produce the final band attention map. The calculation formula is as follows:
$$M_c(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(x_i)) + \mathrm{MLP}(\mathrm{MaxPool}(x_i))\big) = \sigma\big(W_1(W_0(F_{avg})) + W_1(W_0(F_{max}))\big)$$
where $\sigma$ represents the activation function, $x_i$ is the input feature, and $W_0 \in \mathbb{R}^{5/r \times 5}$ and $W_1 \in \mathbb{R}^{5 \times 5/r}$ represent the parameter matrices of the multi-layer perceptron;
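A possible PyTorch sketch of this band attention block is given below, in the spirit of CBAM-style channel attention over the five band channels. The class name, the default reduction rate, and the choice of a sigmoid for the activation σ are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class BandAttention(nn.Module):
    """Band attention over the band axis of an (N, 5, H, W) feature tensor (sketch)."""

    def __init__(self, num_bands=5, reduction=2):
        super().__init__()
        # two-layer weight-shared MLP: 5 -> 5 // r (ReLU) -> 5
        self.mlp = nn.Sequential(
            nn.Linear(num_bands, num_bands // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(num_bands // reduction, num_bands),
        )
        self.sigma = nn.Sigmoid()  # assumed choice for the activation sigma

    def forward(self, x):            # x: (N, 5, H, W)
        f_avg = x.mean(dim=(2, 3))   # AvgPool over H, W -> (N, 5)
        f_max = x.amax(dim=(2, 3))   # MaxPool over H, W -> (N, 5)
        attn = self.sigma(self.mlp(f_avg) + self.mlp(f_max))  # band attention map M_c
        return x * attn[:, :, None, None]                     # re-weight each band
```

Applied at each time step i, this re-weighting produces the fused multi-band feature map that is passed to the temporal/spatial self-attention modules of step 3.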
Step 3: extracting time sequence/space characteristics;
The MEET model based on multi-band fusion and a space-time Transformer learns the temporal dependences and spatial relations of complex EEG signals: a temporal self-attention module learns the temporal dependences between different frames, and a spatial self-attention module learns the spatial relations between different positions within the same frame. In the temporal self-attention module, tensor blocks at the same spatial position in t consecutive frames are grouped, the tensor blocks in each group are vectorized, and multi-head self-attention is computed over the resulting query/key/value vectors; the spatial self-attention module is similar to the ViT (Vision Transformer) model. The two self-attention weights $\alpha$ for the query block $(p, t)$ are computed as follows:
$$\alpha^{(l,a)\,\mathrm{time}}_{(p,t)} = \mathrm{SM}\!\left(\frac{q^{(l,a)\top}_{(p,t)}}{\sqrt{D_h}} \cdot \left\{k^{(l,a)}_{(p,t')}\right\}_{t'=1,\dots,T}\right), \qquad \alpha^{(l,a)\,\mathrm{space}}_{(p,t)} = \mathrm{SM}\!\left(\frac{q^{(l,a)\top}_{(p,t)}}{\sqrt{D_h}} \cdot \left\{k^{(l,a)}_{(p',t)}\right\}_{p'=1,\dots,N}\right)$$
where $l$ and $a$ denote the encoder layer index and the index of the multi-head self-attention module, $p$ and $t$ denote the position index and time index of the query block (query patch), SM is the softmax activation function, $q/k$ denote the query/key vectors, $D_h$ is the dimension of each attention head, and $N$ and $T$ denote the number of tensor blocks per frame and the number of frames in the sequence. The computation of the spatial attention module is based on the results of the temporal attention module.
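The divided temporal-then-spatial attention described here can be sketched as follows, in the style of TimeSformer's divided space-time attention. The use of nn.MultiheadAttention, the pre-norm residual layout, and the tensor reshaping conventions are illustrative assumptions rather than the exact MEET implementation.

```python
import torch
import torch.nn as nn


class DividedSpaceTimeBlock(nn.Module):
    """One encoder layer with temporal then spatial self-attention (sketch)."""

    def __init__(self, dim=768, heads=12, mlp_dim=3072):
        super().__init__()
        self.norm_t = nn.LayerNorm(dim)
        self.attn_t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_s = nn.LayerNorm(dim)
        self.attn_s = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_m = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_dim), nn.GELU(), nn.Linear(mlp_dim, dim))

    def forward(self, x):                       # x: (B, T, N, D) frames x blocks x dim
        B, T, N, D = x.shape
        # temporal attention: tokens at the same spatial position across the T frames
        xt = x.permute(0, 2, 1, 3).reshape(B * N, T, D)
        h = self.norm_t(xt)
        xt = xt + self.attn_t(h, h, h, need_weights=False)[0]
        x = xt.reshape(B, N, T, D).permute(0, 2, 1, 3)
        # spatial attention: the N positions within each frame, as in ViT
        xs = x.reshape(B * T, N, D)
        h = self.norm_s(xs)
        xs = xs + self.attn_s(h, h, h, need_weights=False)[0]
        x = xs.reshape(B, T, N, D)
        return x + self.mlp(self.norm_m(x))     # feed-forward with residual connection
```

Stacking several such blocks (3, 6, or 12 depending on the variant) and feeding the output tokens to a multi-layer perceptron classifier yields the classification result of step 4.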
Step 4: and (3) preprocessing the data set in the step (1), and inputting the preprocessed data set in the step (2) and the step (3) to obtain a final model output, namely a final classification result.
To cope with tasks of different scales, three MEET model variants were designed. As shown in Table 1 below, "depth" denotes the number of layers of the EEG Transformer encoder (including the temporal and spatial self-attention modules), and "time" denotes the time required to train the model 100 times on a task. In the following, MEET-Small is used for the large number of basic evaluations (including within-subject and cross-subject experiments) because of its fast training speed and the absence of a significant drop in accuracy; MEET-Base is used in comparison experiments to confirm the structure and parameters of the model; MEET-Large is used to explore the upper limit of the model's learning ability.
TABLE 1
Model        Depth    Hidden dimension    MLP dimension
MEET-Small   3        768                 3072
MEET-Base    6        768                 3072
MEET-Large   12       1024                4096
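For reference, the three variants of Table 1 can be captured in a small configuration table, reusing the DividedSpaceTimeBlock sketch from step 3; the dictionary layout and the head counts are illustrative assumptions, while the depth, hidden dimension, and MLP dimension follow the preferred embodiments described above.

```python
import torch.nn as nn

# Depth / hidden dimension / MLP dimension follow Table 1; head counts are assumed.
MEET_CONFIGS = {
    "MEET-Small": {"depth": 3,  "hidden_dim": 768,  "mlp_dim": 3072, "heads": 12},
    "MEET-Base":  {"depth": 6,  "hidden_dim": 768,  "mlp_dim": 3072, "heads": 12},
    "MEET-Large": {"depth": 12, "hidden_dim": 1024, "mlp_dim": 4096, "heads": 16},
}


def build_meet_encoder(variant="MEET-Small"):
    """Stack `depth` divided space-time blocks for the chosen variant (sketch)."""
    cfg = MEET_CONFIGS[variant]
    return nn.ModuleList(
        DividedSpaceTimeBlock(cfg["hidden_dim"], cfg["heads"], cfg["mlp_dim"])
        for _ in range(cfg["depth"])
    )
```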
Specific examples:
1. preprocessing an electroencephalogram signal into a multi-band image with corresponding time length;
As shown on the left side of fig. 1, after the multi-band features are extracted for each band of the input electroencephalogram signal, the multi-band fusion module uses the band attention block to derive a feature map that linearly combines the bands with learnable weights. The electroencephalogram signal with time length T is downsampled to 200 Hz and decomposed into five frequency bands: Delta (1-4 Hz), Theta (4-8 Hz), Alpha (8-14 Hz), Beta (14-31 Hz), and Gamma (31-50 Hz). Differential entropy, widely used in electroencephalogram analysis, is adopted as the feature extractor; after differential entropy features are extracted independently for each EEG channel on each of the five frequency bands, the AEP (Azimuthal Equidistant Projection) method is used to map the three-dimensional electrode coordinates to a two-dimensional plane. The one-dimensional differential entropy feature vector is thus reorganized into a two-dimensional scatter plot. The scatter plot is then interpolated with the C-T scheme to generate a feature map with a resolution of 32 × 32, and the feature maps of the five frequency bands are stacked together. Through the above steps, the input EEG data is represented as a four-dimensional feature tensor $x_i \in \mathbb{R}^{H \times W \times 5 \times T}$, where $H \times W$ is the resolution of the feature map and $T$ is the time length of the feature-map sequence;
2. multiband feature fusion
The three-dimensional multi-band feature tensor $x_i \in \mathbb{R}^{H \times W \times 5}$ at time $i$ is reduced by maximum pooling (MaxPool) and average pooling (AvgPool), yielding two descriptors denoted $F_{avg}$ and $F_{max}$, respectively. $F_{avg}$ and $F_{max}$ are then fed into a two-layer weight-shared multi-layer perceptron (MLP) to generate band attention maps denoted $A_{avg}$ and $A_{max}$. The first layer consists of $5/r$ neurons ($r$ is the reduction rate) and is activated with the ReLU function; the second layer consists of 5 neurons. $A_{avg}$ and $A_{max}$ are combined by element-wise summation to produce the final band attention map. The calculation formula is as follows:
$$M_c(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(x_i)) + \mathrm{MLP}(\mathrm{MaxPool}(x_i))\big) = \sigma\big(W_1(W_0(F_{avg})) + W_1(W_0(F_{max}))\big)$$
where $\sigma$ represents the activation function, $x_i$ is the input feature, and $W_0 \in \mathbb{R}^{5/r \times 5}$ and $W_1 \in \mathbb{R}^{5 \times 5/r}$ represent the parameter matrices of the multi-layer perceptron;
3. Temporal/spatial feature extraction
The MEET model based on multi-band fusion and a space-time Transformer learns the temporal dependences and spatial relations of complex EEG signals: a temporal self-attention module learns the temporal dependences between different frames, and a spatial self-attention module learns the spatial relations between different positions within the same frame. In the temporal self-attention module, tensor blocks at the same spatial position in t consecutive frames are grouped, the tensor blocks in each group are vectorized, and multi-head self-attention is computed over the resulting query/key/value vectors; the spatial self-attention module is similar to the ViT (Vision Transformer) model. The two self-attention weights $\alpha$ for the query block $(p, t)$ are computed as follows:
$$\alpha^{(l,a)\,\mathrm{time}}_{(p,t)} = \mathrm{SM}\!\left(\frac{q^{(l,a)\top}_{(p,t)}}{\sqrt{D_h}} \cdot \left\{k^{(l,a)}_{(p,t')}\right\}_{t'=1,\dots,T}\right), \qquad \alpha^{(l,a)\,\mathrm{space}}_{(p,t)} = \mathrm{SM}\!\left(\frac{q^{(l,a)\top}_{(p,t)}}{\sqrt{D_h}} \cdot \left\{k^{(l,a)}_{(p',t)}\right\}_{p'=1,\dots,N}\right)$$
where $l$ and $a$ denote the encoder layer index and the index of the multi-head self-attention module, $p$ and $t$ denote the position index and time index of the query block (query patch), SM is the softmax activation function, $q/k$ denote the query/key vectors, $D_h$ is the dimension of each attention head, and $N$ and $T$ denote the number of tensor blocks per frame and the number of frames in the sequence. The computation of the spatial attention module is based on the results of the temporal attention module.
4. Test phase
The data set is preprocessed as in step 1 and then passed through the modules of steps 2 and 3 to obtain the final model output, i.e., the final classification result. Two public EEG datasets were used to evaluate MEET performance: SEED (a three-class task) and SEED-IV (a four-class task). On SEED, MEET-Small achieved excellent classification results on every individual (average accuracy of 99.30%), and it still obtained very high accuracy (average accuracy of 96.02%) on the more difficult SEED-IV dataset. The test performance of various classification methods was compiled and compared with the current state-of-the-art model 4D-aNN. In the SEED experiment, MEET-Small improves the average accuracy by 3.05%; the improvement on SEED-IV is larger, with the average accuracy improved by 9.25%. At the same time, the standard deviation of the proposed MEET model is also the smallest, which strongly demonstrates the stability and robustness of MEET.
The invention provides an electroencephalogram signal analysis method based on multi-band fusion and a space-time Transformer, and demonstrates that a Transformer used as the backbone network can effectively model the electroencephalogram signal and distinguish brain states; more importantly, the multi-band fusion strategy of MEET can significantly improve classification performance. MEET-Small achieves the best results compared with most algorithms using RNNs or RNN-based frameworks combined with CNN/GNN, while requiring significantly fewer computational resources for training. This is due in large part to the proposed temporal/spatial self-attention module, which learns the timing information of the EEG signal by modeling it as a bi-directional network; the time cost of its parallel computation is much lower than that of the serial computation used by RNNs. At the same time, MEET demonstrates that 5-band fusion is a better fusion strategy and, based on its self-attention mechanism, it can highlight the relevant, meaningful brain attention areas. In addition, to further enhance brain-state classification ability, a pre-training strategy was designed and evaluated; improvements of 0.87% and 0.64% over training from scratch show that pre-trained MEET can significantly enhance the predictive ability of the model. In BCIs using EEG, MEET can provide an efficient representation-learning architecture with higher brain-state discrimination capability while spending fewer training resources. For brain-controlled robot applications, real-time analysis of brain states is critical, and MEET, with its pre-training and fine-tuning strategies, has great potential for online brain-state inference under limited time and resource budgets. For applications using EEG in the diagnosis of brain diseases such as Attention Deficit Hyperactivity Disorder (ADHD), MEET may model brain waves and aid in analyzing the mechanisms of attention deficit. In addition, MEET can model and study interactions between abnormal and normal brain attention areas, which will provide strong support for brain rehabilitation.

Claims (4)

1. An electroencephalogram signal analysis method based on multi-band fusion and a space-time Transformer, characterized by comprising the following steps:
Step 1: preprocessing an electroencephalogram signal into a multi-band image;
Downsampling the electroencephalogram signal with the time length of T to 200 Hz, and decomposing it into five frequency bands: Delta (1-4 Hz), Theta (4-8 Hz), Alpha (8-14 Hz), Beta (14-31 Hz) and Gamma (31-50 Hz); using differential entropy as a feature extractor, independently performing differential entropy feature extraction on each EEG channel on each of the five frequency bands, and then mapping the three-dimensional electrode coordinates to a two-dimensional plane by using the AEP method, so that the one-dimensional differential entropy feature vectors are recombined into a two-dimensional scatter plot; then, interpolating the scatter plot by using the C-T method to generate a feature map with a resolution of 32 × 32; the feature maps of the five frequency bands are stacked together;
The input EEG data is represented as a four-dimensional feature tensor $x_i \in \mathbb{R}^{H \times W \times 5 \times T}$, where $H \times W$ is the resolution of the feature map;
step 2: fusion of multi-band characteristics;
The three-dimensional multi-band feature tensor $x_i \in \mathbb{R}^{H \times W \times 5}$ at time $i$ is reduced by maximum pooling and average pooling, yielding two descriptors denoted $F_{avg}$ and $F_{max}$;
$F_{avg}$ and $F_{max}$ are then fed into a two-layer weight-shared multi-layer perceptron to generate band attention maps denoted $A_{avg}$ and $A_{max}$; the first layer consists of $5/r$ neurons, where $r$ is the reduction rate, and is activated with the ReLU function, and the second layer consists of 5 neurons; $A_{avg}$ and $A_{max}$ are combined by element-wise summation to generate the final band attention map; the calculation formula is as follows:
$$M_c(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(x_i)) + \mathrm{MLP}(\mathrm{MaxPool}(x_i))\big) = \sigma\big(W_1(W_0(F_{avg})) + W_1(W_0(F_{max}))\big)$$
where $\sigma$ represents the activation function, $x_i$ is the input feature, and $W_0 \in \mathbb{R}^{5/r \times 5}$ and $W_1 \in \mathbb{R}^{5 \times 5/r}$ represent the parameter matrices of the multi-layer perceptron;
Step 3: extracting time sequence/space characteristics;
The MEET model based on multi-band fusion and a space-time Transformer learns the temporal dependences and spatial relations of complex EEG signals: a temporal self-attention module learns the temporal dependences between different frames, and a spatial self-attention module learns the spatial relations between different positions within the same frame; in the temporal self-attention module, tensor blocks at the same spatial position in t consecutive frames are grouped, the tensor blocks in each group are vectorized, and multi-head self-attention is computed over the resulting query/key/value vectors; the two self-attention weights $\alpha$ for the query block $(p, t)$ are computed as follows:
$$\alpha^{(l,a)\,\mathrm{time}}_{(p,t)} = \mathrm{SM}\!\left(\frac{q^{(l,a)\top}_{(p,t)}}{\sqrt{D_h}} \cdot \left\{k^{(l,a)}_{(p,t')}\right\}_{t'=1,\dots,T}\right), \qquad \alpha^{(l,a)\,\mathrm{space}}_{(p,t)} = \mathrm{SM}\!\left(\frac{q^{(l,a)\top}_{(p,t)}}{\sqrt{D_h}} \cdot \left\{k^{(l,a)}_{(p',t)}\right\}_{p'=1,\dots,N}\right)$$
where $l$ and $a$ respectively denote the encoder layer index and the index of the multi-head self-attention module, $p$ and $t$ respectively denote the position index and time index of the query block, SM is the softmax activation function, $q/k$ denote the query/key vectors, $D_h$ is the dimension of each attention head, and $N$ and $T$ denote the number of tensor blocks per frame and the number of frames in the sequence; the computation of the spatial attention module is based on the results of the temporal attention module;
Step 4: and (3) preprocessing the data set in the step (1), and inputting the preprocessed data set in the step (2) and the step (3) to obtain a final model output, namely a final classification result.
2. The electroencephalogram signal analysis method based on multi-band fusion and a space-time Transformer according to claim 1, wherein the depth of the multi-band fusion and space-time Transformer model MEET is 3, the hidden-layer dimension is 768, and the multi-layer perceptron dimension is 3072.
3. The electroencephalogram signal analysis method based on multi-band fusion and a space-time Transformer according to claim 1, wherein the depth of the multi-band fusion and space-time Transformer model MEET is 6, the hidden-layer dimension is 768, and the multi-layer perceptron dimension is 3072.
4. The electroencephalogram signal analysis method based on multi-band fusion and a space-time Transformer according to claim 1, wherein the depth of the multi-band fusion and space-time Transformer model MEET is 12, the hidden-layer dimension is 1024, and the multi-layer perceptron dimension is 4096.
CN202211433136.2A 2022-11-16 2022-11-16 Electroencephalogram signal analysis method based on multi-band fusion and space-time Transformer Active CN115969381B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211433136.2A CN115969381B (en) 2022-11-16 2022-11-16 Electroencephalogram signal analysis method based on multi-band fusion and space-time Transformer

Publications (2)

Publication Number Publication Date
CN115969381A CN115969381A (en) 2023-04-18
CN115969381B true CN115969381B (en) 2024-04-30

Family

ID=85960118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211433136.2A Active CN115969381B (en) 2022-11-16 2022-11-16 Electroencephalogram signal analysis method based on multi-band fusion and space-time Transformer

Country Status (1)

Country Link
CN (1) CN115969381B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021104099A1 (en) * 2019-11-29 2021-06-03 中国科学院深圳先进技术研究院 Multimodal depression detection method and system employing context awareness
CN113907706A (en) * 2021-08-29 2022-01-11 北京工业大学 Electroencephalogram seizure prediction method based on multi-scale convolution and self-attention network
CN114176607A (en) * 2021-12-27 2022-03-15 杭州电子科技大学 Electroencephalogram signal classification method based on visual Transformer
CN114298216A (en) * 2021-12-27 2022-04-08 杭州电子科技大学 Electroencephalogram vision classification method based on time-frequency domain fusion Transformer
CN115238731A (en) * 2022-06-13 2022-10-25 重庆邮电大学 Emotion identification method based on convolution recurrent neural network and multi-head self-attention

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on IR-BCI EEG video decoding based on 3D convolutional neural network; Guan Jin'an; Wang Luxi; Zhao Ruijuan; Li Dongge; Wu Huan; Journal of South-Central University for Nationalities (Natural Science Edition); 2019-12-15 (No. 04); full text *

Also Published As

Publication number Publication date
CN115969381A (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN109612708B (en) Power transformer on-line detection system and method based on improved convolutional neural network
CN112244873B (en) Electroencephalogram space-time feature learning and emotion classification method based on hybrid neural network
CN113887513B (en) Motor imagery electroencephalogram signal classification method based on parallel CNN-transducer neural network
CN112633195B (en) Myocardial infarction recognition and classification method based on frequency domain features and deep learning
CN112381008B (en) Electroencephalogram emotion recognition method based on parallel sequence channel mapping network
KR102318775B1 (en) Method for Adaptive EEG signal processing using reinforcement learning and System Using the same
CN114298216A (en) Electroencephalogram vision classification method based on time-frequency domain fusion Transformer
CN113749657A (en) Brain wave emotion recognition method based on multitask capsules
KR20200018868A (en) Method for Adaptive EEG signal processing using reinforcement learning and System Using the same
CN113052099B (en) SSVEP classification method based on convolutional neural network
CN115486857A (en) Motor imagery electroencephalogram decoding method based on Transformer space-time feature learning
Liu et al. A CNN-transformer hybrid recognition approach for sEMG-based dynamic gesture prediction
CN115381466A (en) Motor imagery electroencephalogram signal classification method based on AE and Transformer
CN114578967A (en) Emotion recognition method and system based on electroencephalogram signals
CN115221969A (en) Motor imagery electroencephalogram signal identification method based on EMD data enhancement and parallel SCN
CN117574059A (en) High-resolution brain-electrical-signal deep neural network compression method and brain-computer interface system
CN115346676A (en) Movement function reconstruction dynamic model construction method based on cortical muscle network
CN114145744B (en) Cross-equipment forehead electroencephalogram emotion recognition based method and system
CN114626607A (en) Traffic flow prediction method based on space-time diagram wavelet convolution neural network
CN117975565A (en) Action recognition system and method based on space-time diffusion and parallel convertors
CN115969381B Electroencephalogram signal analysis method based on multi-band fusion and space-time Transformer
Mehtiyev et al. Deepensemble: a novel brain wave classification in MI-BCI using ensemble of deep learners
CN117612710A (en) Medical diagnosis auxiliary system based on electroencephalogram signals and artificial intelligence classification
CN116369945A (en) Electroencephalogram cognitive recognition method based on 4D pulse neural network
CN115813409A (en) Ultra-low-delay moving image electroencephalogram decoding method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant