CN115969381A - Electroencephalogram signal analysis method based on multi-band fusion and space-time Transformer - Google Patents
Electroencephalogram signal analysis method based on multi-band fusion and space-time Transformer
- Publication number
- CN115969381A, CN115969381B (application CN202211433136.2A)
- Authority
- CN
- China
- Prior art keywords
- time
- space
- attention
- band
- self
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
Abstract
The invention discloses an electroencephalogram signal analysis method based on multi-band fusion and a space-time Transformer. The preprocessed electroencephalogram signal is first converted into multi-band images of the corresponding time length, preserving as much of the three-dimensional spatial information between the sampling channels as possible. A band attention module then fuses the features: it computes an attention map over the stacked multi-band images and infers a fused feature map. Temporal/spatial self-attention modules then extract spatio-temporal features, which are used to characterize and discriminate dynamic brain states across multiple frames and regions. Finally, a multi-layer perceptron learns the class information of the features. The trained MEET model can represent and analyze the multi-scale temporal dynamics of human EEG signals. The invention effectively improves the classification accuracy of electroencephalogram signals and, to a certain extent, solves the electroencephalogram classification task.
Description
Technical Field
The invention belongs to the technical field of electroencephalogram analysis, and particularly relates to an electroencephalogram signal analysis method.
Background
Electroencephalography (EEG) is one of the most widely used and inexpensive neuroimaging techniques, and it requires advanced and powerful learning algorithms for modeling and analysis. Given the multi-scale nature of EEG signals, it is crucial to introduce the multi-band concept into the design of Transformer architectures that model them. Multi-band fusion of EEG signals has been studied extensively with traditional signal processing methods and with deep learning, for example by processing the signal with filters of different frequency ranges and fusing the filtered bands in feature space. However, the training process of such neural network models is usually slow and complex, and the noise limitations introduced by the models make it difficult to fuse all frequency bands of the electroencephalogram signal effectively. In addition, most previous deep learning models follow a late-fusion strategy: before fusion, meaningful and discriminative features are still represented by single frequency bands, which causes model redundancy and loss of holistic information.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an electroencephalogram signal analysis method based on multi-band fusion and a space-time Transformer (Multi-band EEG Transformer, MEET). First, the preprocessed electroencephalogram signal is converted into multi-band images of the corresponding time length, preserving as much of the three-dimensional spatial information between the sampling channels as possible. A band attention module then fuses the features: it computes an attention map over the stacked multi-band images and infers a fused feature map. Temporal/spatial self-attention modules then extract spatio-temporal features, which are used to characterize and discriminate dynamic brain states across multiple frames and regions. Finally, a multi-layer perceptron learns the class information of the features. The trained MEET model can represent and analyze the multi-scale temporal dynamics of human EEG signals. The invention effectively improves the classification accuracy of electroencephalogram signals and, to a certain extent, solves the electroencephalogram classification task.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
step 1: preprocessing the electroencephalogram signal into a multi-band image;
the electroencephalogram signal with the time length of T is down sampled to 200Hz and decomposed into five frequency bands: delta (1-4 Hz), theta (4-8 Hz), alpha (8-14 Hz), beta (14-31 Hz) and Gamma (31-50 Hz); differential entropy feature extraction is independently performed for each EEG channel on each of the five frequency bands using differential entropy as a feature extractor, and then three-dimensional electrode coordinates are mapped to a two-dimensional plane using the AEP method, so that one-dimensional differential entropy feature vectors are recombined into a two-dimensional scattergram. Then, interpolating the scatter diagram by using a C-T method to generate a feature diagram with the resolution of 32 multiplied by 32; the feature maps on the five frequency bands are stacked together;
the input EEG data are thus represented as a four-dimensional feature tensor x_i ∈ R^(H×W×5×T), where H×W is the resolution of the feature map and T is the length of the feature-map sequence;
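As an illustration of this preprocessing pipeline, a minimal Python sketch is given below. It is not the patented implementation: the use of SciPy's Butterworth band-pass filtering, the Gaussian form of the differential entropy, and SciPy's CloughTocher2DInterpolator standing in for the "C-T" interpolation are assumptions, and the two-dimensional electrode coordinates are taken as an input that has already been produced by the AEP projection.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.interpolate import CloughTocher2DInterpolator

# The five bands used in step 1 (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}
FS = 200  # sampling rate after downsampling (Hz)

def differential_entropy(segment: np.ndarray) -> float:
    # DE of a segment assumed Gaussian: 0.5 * ln(2 * pi * e * variance)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(segment))

def band_de_features(eeg: np.ndarray, fs: int = FS) -> np.ndarray:
    """eeg: (channels, samples) for one time frame -> (channels, 5) DE features."""
    feats = []
    for low, high in BANDS.values():
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        filtered = sosfiltfilt(sos, eeg, axis=-1)
        feats.append([differential_entropy(ch) for ch in filtered])
    return np.asarray(feats).T

def to_feature_map(de_values: np.ndarray, xy: np.ndarray, res: int = 32) -> np.ndarray:
    """Interpolate per-channel DE values onto a res x res grid.

    xy: (channels, 2) electrode positions already projected onto the plane
    (e.g. by an azimuthal equidistant projection of the 3-D coordinates).
    """
    gx, gy = np.meshgrid(np.linspace(xy[:, 0].min(), xy[:, 0].max(), res),
                         np.linspace(xy[:, 1].min(), xy[:, 1].max(), res))
    interp = CloughTocher2DInterpolator(xy, de_values, fill_value=0.0)
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    return interp(grid).reshape(res, res)
```

Stacking the five interpolated maps for every frame of the segment then yields the H×W×5×T tensor described above.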
step 2: fusing multi-band features;
the three-dimensional multi-band feature tensor at time i is reduced by max pooling and average pooling, giving two descriptors denoted F_max and F_avg;
F_avg and F_max are then fed into a two-layer, weight-shared multi-layer perceptron to generate band attention maps denoted A_avg and A_max; the first layer consists of 5/r neurons (r is the reduction ratio) and is activated with the ReLU function, and the second layer has 5 neurons; A_avg and A_max are combined by element-wise summation to generate the final band attention map; the calculation formula is:
M_c(F) = σ(MLP(AvgPool(x_i)) + MLP(MaxPool(x_i)))
       = σ(W_1(W_0(F_avg)) + W_1(W_0(F_max)))
where σ denotes the activation function, x_i is the input feature, and W_0 ∈ R^((5/r)×5) and W_1 ∈ R^(5×(5/r)) are the parameter matrices of the multi-layer perceptron;
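The band attention computation above can be illustrated with a short PyTorch sketch. This is a minimal sketch under assumptions: the module name, the default reduction ratio r = 2, the sigmoid choice for σ, and applying the attention frame by frame are not specified by the text.

```python
import torch
import torch.nn as nn

class BandAttention(nn.Module):
    """Band attention over the 5 stacked frequency-band maps of one frame.

    Max-pool and average-pool the (5, H, W) tensor over the spatial
    dimensions, pass both descriptors through a shared two-layer MLP
    (5 -> 5/r -> 5), sum the results and squash them with an activation
    to obtain one weight per band.
    """

    def __init__(self, num_bands: int = 5, reduction: int = 2):
        super().__init__()
        self.mlp = nn.Sequential(                 # weight-shared MLP
            nn.Linear(num_bands, num_bands // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(num_bands // reduction, num_bands),
        )
        self.act = nn.Sigmoid()                   # sigma in the formula (assumed sigmoid)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, bands=5, H, W) for a single time frame i
        f_avg = x.mean(dim=(2, 3))                # AvgPool -> (batch, 5)
        f_max = x.amax(dim=(2, 3))                # MaxPool -> (batch, 5)
        attn = self.act(self.mlp(f_avg) + self.mlp(f_max))   # M_c(F), shape (batch, 5)
        return x * attn[:, :, None, None]         # fused feature map


if __name__ == "__main__":
    frame = torch.randn(8, 5, 32, 32)             # 8 samples, 5 bands, 32x32 maps
    fused = BandAttention()(frame)
    print(fused.shape)                            # torch.Size([8, 5, 32, 32])
```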
step 3: extracting temporal/spatial features;
the MEET model based on multi-band fusion and a space-time Transformer is used to learn the temporal dependencies and spatial relationships of the complex EEG signal: a temporal self-attention module learns the temporal dependencies between different frames, and a spatial self-attention module learns the spatial relationships between different positions within the same frame; in the temporal self-attention module, tensor patches at the same spatial position in t consecutive frames are grouped, and the patches in each group are vectorized and used as query/key/value to compute multi-head self-attention; the two self-attention weights α are computed on the query at (p, t), where l and a denote the encoder layer index and the index of the multi-head self-attention head, respectively, p and t denote the position index and time index of the query patch, SM is the softmax activation function, q/k denote query/key, and D_h is the dimension of each attention head; the computation of the spatial attention module is based on the results of the temporal attention module;
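The weight formulas themselves do not survive in this text extraction. A plausible reconstruction, assuming the divided space-time self-attention of TimeSformer-style encoders (an assumption, not a quotation of the patent), is:

```latex
% Hedged reconstruction of the two self-attention weights
% (assumes TimeSformer-style divided attention; (0,0) indexes a classification token)
\alpha^{(\ell,a)\,\mathrm{time}}_{(p,t)} =
  \mathrm{SM}\!\left( \frac{q^{(\ell,a)\,\top}_{(p,t)}}{\sqrt{D_h}}
    \left[\, k^{(\ell,a)}_{(0,0)} \;\; \bigl\{ k^{(\ell,a)}_{(p,t')} \bigr\}_{t'=1,\dots,T} \right] \right),
\qquad
\alpha^{(\ell,a)\,\mathrm{space}}_{(p,t)} =
  \mathrm{SM}\!\left( \frac{q^{(\ell,a)\,\top}_{(p,t)}}{\sqrt{D_h}}
    \left[\, k^{(\ell,a)}_{(0,0)} \;\; \bigl\{ k^{(\ell,a)}_{(p',t)} \bigr\}_{p'=1,\dots,N} \right] \right)
```

Under this reading, the temporal weights attend over keys at the same spatial position p across the T frames, and the spatial weights attend over the N keys within the same frame t, matching the grouping described above.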
step 4: the data set is preprocessed as in step 1 and fed through steps 2 and 3 to obtain the final model output, i.e., the final classification result.
Preferably, the MEET model based on multi-band fusion and a space-time Transformer has a depth of 3, a hidden-layer dimension of 768 and a multi-layer perceptron dimension of 3072.
Preferably, the MEET model based on multi-band fusion and a space-time Transformer has a depth of 6, a hidden-layer dimension of 768 and a multi-layer perceptron dimension of 3072.
Preferably, the MEET model based on multi-band fusion and a space-time Transformer has a depth of 12, a hidden-layer dimension of 1024 and a multi-layer perceptron dimension of 4096.
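For reference, the three preferred configurations can be collected in a small Python dictionary; mapping depths 3/6/12 to the MEET-Small/Base/Large variants named later in the description is an assumption.

```python
# Preferred MEET configurations from the description above
# (the Small/Base/Large naming of the three depths is assumed).
MEET_CONFIGS = {
    "MEET-Small": {"depth": 3,  "hidden_dim": 768,  "mlp_dim": 3072},
    "MEET-Base":  {"depth": 6,  "hidden_dim": 768,  "mlp_dim": 3072},
    "MEET-Large": {"depth": 12, "hidden_dim": 1024, "mlp_dim": 4096},
}
```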
The invention has the following beneficial effects:
1. The invention is of practical significance for electroencephalogram signal analysis: using a Transformer as the backbone network, electroencephalogram signals can be modeled effectively to discriminate brain states; more importantly, the multi-band fusion strategy of MEET significantly improves classification performance while consuming markedly fewer training resources than other state-of-the-art methods.
2. Electroencephalogram signal classification plays an important role in electroencephalogram analysis, BCI and neuroscience. For BCI applications, real-time analysis of brain states is critical; MEET provides pre-training and fine-tuning strategies and therefore has great potential for online brain-state inference under limited time and resource budgets.
Drawings
Fig. 1 is a schematic diagram of a network structure of the MEET model of the present invention.
Fig. 2 is a schematic diagram of a frequency band attention module and a time/space self-attention module according to the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
The invention provides an electroencephalogram signal analysis method based on multi-band fusion and a space-time Transformer: a model built on a deep self-attention transformation network that fuses a band attention module with temporal/spatial self-attention modules and learns latent feature information.
As shown in fig. 1, a method for analyzing an electroencephalogram signal based on multi-band fusion and space-time Transformer includes the following steps:
step 1: preprocessing the electroencephalogram signal into a multiband image with corresponding time length;
As shown on the left side of FIG. 1, after the multi-band features are extracted for each segment of the input electroencephalogram signal, the multi-band fusion module uses the band attention block to derive learnable weights for linearly combining the frequency bands. The electroencephalogram signal of time length T is downsampled to 200 Hz and decomposed into five frequency bands: Delta (1-4 Hz), Theta (4-8 Hz), Alpha (8-14 Hz), Beta (14-31 Hz) and Gamma (31-50 Hz). Differential entropy, widely used in electroencephalogram analysis, is adopted as the feature extractor; after differential entropy features are extracted independently for each EEG channel on each of the five frequency bands, the three-dimensional electrode coordinates are mapped to a two-dimensional plane using the AEP (Azimuthal Equidistant Projection) method. The one-dimensional differential entropy feature vector is thus reorganized into a two-dimensional scatter map. The scatter map is then interpolated with the C-T scheme to generate a feature map with a resolution of 32×32, and the feature maps of the five frequency bands are stacked. Through the above steps, the input EEG data are represented as a four-dimensional feature tensor x_i ∈ R^(H×W×5×T), where H×W is the resolution of the feature map and T is the length of the feature-map sequence;
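For reference, the differential entropy used here as the feature extractor is commonly computed under a Gaussian assumption for each band-passed segment (a standard formula, not quoted from the patent):

```latex
% Differential entropy of a band-passed EEG segment assumed Gaussian with variance sigma^2
h(X) = -\int_{-\infty}^{\infty} f(x)\,\ln f(x)\,dx
     = \tfrac{1}{2}\,\ln\!\left(2\pi e\,\sigma^{2}\right)
```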
step 2: fusing multi-band features;
The three-dimensional multi-band feature tensor at time i is reduced by max pooling (MaxPool) and average pooling (AvgPool), giving two descriptors denoted F_max and F_avg. F_avg and F_max are then fed into a two-layer, weight-shared multi-layer perceptron (MLP) to generate band attention maps denoted A_avg and A_max. The first layer consists of 5/r neurons (r is the reduction ratio) and is activated with the ReLU function; the second layer has 5 neurons. A_avg and A_max are combined by element-wise summation to generate the final band attention map. The calculation formula is:
M_c(F) = σ(MLP(AvgPool(x_i)) + MLP(MaxPool(x_i)))
       = σ(W_1(W_0(F_avg)) + W_1(W_0(F_max)))
where σ denotes the activation function, x_i is the input feature, and W_0 ∈ R^((5/r)×5) and W_1 ∈ R^(5×(5/r)) are the parameter matrices of the multi-layer perceptron;
step 3: extracting temporal/spatial features;
The MEET model based on multi-band fusion and a space-time Transformer is used to learn the temporal dependencies and spatial relationships of the complex EEG signal: a temporal self-attention module learns the temporal dependencies between different frames, and a spatial self-attention module learns the spatial relationships between different positions within the same frame. In the temporal self-attention module, tensor patches at the same spatial position in t consecutive frames are grouped, the patches in each group are vectorized, and multi-head self-attention is computed with them as query/key/value; the spatial self-attention module is similar to the ViT (Vision Transformer) model. The two self-attention weights α are computed on the query at (p, t), where l and a denote the encoder layer index and the index of the multi-head self-attention head, respectively, p and t denote the position index and time index of the query patch, SM is the softmax activation function, q/k denote query/key, and D_h is the dimension of each attention head. The computation of the spatial attention module is based on the results of the temporal attention module.
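As an illustration of the divided temporal/spatial self-attention just described, a minimal PyTorch sketch of one encoder block is given below. It is a sketch under assumptions, not the patented implementation: the module and parameter names are placeholders, the classification token and exact patch embedding are omitted, and the layer layout follows a TimeSformer-style design.

```python
import torch
import torch.nn as nn

class DividedSpaceTimeBlock(nn.Module):
    """One encoder block: temporal self-attention, then spatial self-attention.

    The input x has shape (batch, T, N, D): T frames, N patches per frame and
    D-dimensional tokens. Temporal attention attends over the T tokens sharing
    a spatial position; spatial attention attends over the N tokens of one
    frame, matching the grouping described in the text.
    """

    def __init__(self, dim: int = 768, heads: int = 8, mlp_dim: int = 3072):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.space_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_t = nn.LayerNorm(dim)
        self.norm_s = nn.LayerNorm(dim)
        self.norm_m = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_dim), nn.GELU(),
                                 nn.Linear(mlp_dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, n, d = x.shape
        # temporal self-attention: group the tokens at the same spatial position
        xt = x.permute(0, 2, 1, 3).reshape(b * n, t, d)
        yt = self.norm_t(xt)
        xt = xt + self.time_attn(yt, yt, yt, need_weights=False)[0]
        x = xt.reshape(b, n, t, d).permute(0, 2, 1, 3)
        # spatial self-attention: group the tokens of the same frame
        xs = x.reshape(b * t, n, d)
        ys = self.norm_s(xs)
        xs = xs + self.space_attn(ys, ys, ys, need_weights=False)[0]
        x = xs.reshape(b, t, n, d)
        # position-wise feed-forward with residual connection
        return x + self.mlp(self.norm_m(x))


if __name__ == "__main__":
    tokens = torch.randn(2, 6, 16, 768)        # 2 samples, 6 frames, 16 patches
    out = DividedSpaceTimeBlock()(tokens)
    print(out.shape)                           # torch.Size([2, 6, 16, 768])
```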
Step 4: the data set is preprocessed as in step 1 and fed through steps 2 and 3 to obtain the final model output, i.e., the final classification result.
To cope with tasks of different scales, three MEET model variants were designed. As shown in Table 1 below, "depth" denotes the number of layers of the EEG Transformer encoder (each layer containing a temporal self-attention module and a spatial self-attention module), and "time" denotes the time required to train the model for 100 epochs on one task. In the following, MEET-Small is used for the bulk of the basic evaluations (including within-subject and cross-subject experiments) because of its fast training speed and negligible loss in accuracy; MEET-Base is used in the comparison experiments to confirm the structure and parameters of the model; MEET-Large is used to explore the upper limit of the model's learning ability.
TABLE 1
The specific embodiment is as follows:
1. preprocessing the electroencephalogram signal into a multiband image with corresponding time length;
As shown on the left side of FIG. 1, after the multi-band features are extracted for each segment of the input electroencephalogram signal, the multi-band fusion module uses the band attention block to derive learnable weights for linearly combining the frequency bands. The electroencephalogram signal of time length T is downsampled to 200 Hz and decomposed into five frequency bands: Delta (1-4 Hz), Theta (4-8 Hz), Alpha (8-14 Hz), Beta (14-31 Hz) and Gamma (31-50 Hz). Differential entropy, widely used in electroencephalogram analysis, is adopted as the feature extractor; after differential entropy features are extracted independently for each EEG channel on each of the five frequency bands, the three-dimensional electrode coordinates are mapped to a two-dimensional plane using the AEP (Azimuthal Equidistant Projection) method. The one-dimensional differential entropy feature vector is thus reorganized into a two-dimensional scatter map. The scatter map is then interpolated with the C-T scheme to generate a feature map with a resolution of 32×32, and the feature maps of the five frequency bands are stacked. Through the above steps, the input EEG data are represented as a four-dimensional feature tensor x_i ∈ R^(H×W×5×T), where H×W is the resolution of the feature map and T is the length of the feature-map sequence;
2. multi-band feature fusion
The three-dimensional multi-band feature tensor at time i is reduced by max pooling (MaxPool) and average pooling (AvgPool), giving two descriptors denoted F_max and F_avg. F_avg and F_max are then fed into a two-layer, weight-shared multi-layer perceptron (MLP) to generate band attention maps denoted A_avg and A_max. The first layer consists of 5/r neurons (r is the reduction ratio) and is activated with the ReLU function; the second layer has 5 neurons. A_avg and A_max are combined by element-wise summation to generate the final band attention map. The calculation formula is:
M_c(F) = σ(MLP(AvgPool(x_i)) + MLP(MaxPool(x_i)))
       = σ(W_1(W_0(F_avg)) + W_1(W_0(F_max)))
where σ denotes the activation function, x_i is the input feature, and W_0 ∈ R^((5/r)×5) and W_1 ∈ R^(5×(5/r)) are the parameter matrices of the multi-layer perceptron;
3. temporal/spatial feature extraction
The MEET model based on multi-band fusion and a space-time Transformer is used to learn the temporal dependencies and spatial relationships of the complex EEG signal: a temporal self-attention module learns the temporal dependencies between different frames, and a spatial self-attention module learns the spatial relationships between different positions within the same frame. In the temporal self-attention module, tensor patches at the same spatial position in t consecutive frames are grouped, the patches in each group are vectorized, and multi-head self-attention is computed with them as query/key/value; the spatial self-attention module is similar to the ViT (Vision Transformer) model. The two self-attention weights α are computed on the query at (p, t), where l and a denote the encoder layer index and the index of the multi-head self-attention head, respectively, p and t denote the position index and time index of the query patch, SM is the softmax activation function, q/k denote query/key, and D_h is the dimension of each attention head. The computation of the spatial attention module is based on the results of the temporal attention module.
4. Testing phase
The data set is preprocessed as in step 1 and fed through steps 2 and 3 to obtain the final model output, i.e., the final classification result. Two public EEG datasets were used to evaluate the performance of MEET: SEED (a three-class task) and SEED-IV (a four-class task). On SEED, MEET-Small achieved excellent classification results for every individual (average accuracy 99.30%), and it still obtained very high accuracy on the more difficult SEED-IV dataset (average accuracy 96.02%). The test performance of various classification methods was compared, including the current state-of-the-art model 4D-aNN. In the SEED experiment, MEET-Small improves the average accuracy by 3.05%; the improvement on SEED-IV is larger, with the average accuracy improved by 9.25%. At the same time, the standard deviation of the proposed MEET model is also the smallest, which strongly demonstrates the stability and robustness of MEET.
The invention provides an electroencephalogram signal analysis method based on multi-band fusion and a space-time Transformer, and demonstrates that electroencephalogram signals can be modeled effectively to discriminate brain states using a Transformer as the backbone network; more importantly, the multi-band fusion strategy of MEET significantly improves classification performance. Compared with most algorithms that use RNNs, or RNN-based frameworks combined with CNNs/GNNs, MEET-Small achieves the best results while requiring far fewer computational resources for training. This is largely due to the proposed temporal/spatial self-attention modules, which learn the temporal information of EEG signals in a way that emulates a bi-directional network; the time complexity of this parallel computation is much lower than that of the serial computation used by RNNs. Meanwhile, MEET shows that 5-band fusion is the better fusion strategy, and its self-attention mechanism makes it possible to explain the meaningful brain attention regions involved. In addition, to further improve brain-state classification, a pre-training strategy was designed for MEET and evaluated to check its effectiveness; the results show improvements of 0.87% and 0.64% over training from scratch, which demonstrates that the pre-trained MEET can noticeably improve the predictive ability of the model. In BCIs using EEG, MEET can provide an efficient representation-learning framework with stronger brain-state discrimination while consuming fewer training resources. For brain-controlled robot applications, real-time analysis of brain states is key; MEET provides pre-training and fine-tuning strategies and has great potential for online brain-state inference under limited time and resource budgets. For applications that use EEG to diagnose brain disorders, such as attention deficit hyperactivity disorder (ADHD), MEET can model brain-wave patterns and help analyze attention-deficit mechanisms. MEET can also model and study the interaction between abnormal and normal brain attention regions, which will provide strong support for brain rehabilitation.
Claims (4)
1. A multi-band fusion and space-time Transformer based electroencephalogram signal analysis method is characterized by comprising the following steps:
step 1: preprocessing the electroencephalogram signal into a multi-band image;
the electroencephalogram signal with the time length of T is down sampled to 200Hz and decomposed into five frequency bands: delta (1-4 Hz), theta (4-8 Hz), alpha (8-14 Hz), beta (14-31 Hz), and Gamma (31-50 Hz); adopting differential entropy as a feature extractor, independently executing differential entropy feature extraction on each EEG channel on each of the five frequency bands, and then mapping the three-dimensional electrode coordinates to a two-dimensional plane by using an AEP method, so that the one-dimensional differential entropy feature vectors are recombined into a two-dimensional scatter diagram; then, interpolating the scatter diagram by using a C-T method to generate a feature diagram with the resolution of 32 multiplied by 32; the characteristic maps of the five frequency bands are stacked;
the input EEG data are represented as a four-dimensional feature tensor x_i ∈ R^(H×W×5×T), where H×W is the resolution of the feature map;
step 2: fusing multi-frequency band features;
the three-dimensional multi-band feature tensor at time i is reduced by max pooling and average pooling, giving two descriptors denoted F_max and F_avg;
F_avg and F_max are then fed into a two-layer, weight-shared multi-layer perceptron to generate band attention maps denoted A_avg and A_max; the first layer consists of 5/r neurons (r is the reduction ratio) and is activated with the ReLU function, and the second layer has 5 neurons; A_avg and A_max are combined by element-wise summation to generate the final band attention map; the calculation formula is:
M_c(F) = σ(MLP(AvgPool(x_i)) + MLP(MaxPool(x_i)))
       = σ(W_1(W_0(F_avg)) + W_1(W_0(F_max)))
where σ denotes the activation function, x_i is the input feature, and W_0 ∈ R^((5/r)×5) and W_1 ∈ R^(5×(5/r)) are the parameter matrices of the multi-layer perceptron;
step 3: extracting temporal/spatial features;
the MEET model based on multi-band fusion and a space-time Transformer is used to learn the temporal dependencies and spatial relationships of the complex EEG signal: a temporal self-attention module learns the temporal dependencies between different frames, and a spatial self-attention module learns the spatial relationships between different positions within the same frame; in the temporal self-attention module, tensor patches at the same spatial position in t consecutive frames are grouped, and the patches in each group are vectorized and used as query/key/value to compute multi-head self-attention; the two self-attention weights α are computed on the query at (p, t), where l and a denote the encoder layer index and the index of the multi-head self-attention head, respectively, p and t denote the position index and time index of the query patch, SM is the softmax activation function, q/k denote query/key, and D_h is the dimension of each attention head; the computation of the spatial attention module is based on the results of the temporal attention module;
step 4: the data set is preprocessed as in step 1 and fed through steps 2 and 3 to obtain the final model output, i.e., the final classification result.
2. The method for analyzing an electroencephalogram signal based on multi-band fusion and a space-time Transformer according to claim 1, characterized in that the MEET model based on multi-band fusion and a space-time Transformer has a depth of 3, a hidden-layer dimension of 768 and a multi-layer perceptron dimension of 3072.
3. The method for analyzing an electroencephalogram signal based on multi-band fusion and a space-time Transformer according to claim 1, characterized in that the MEET model based on multi-band fusion and a space-time Transformer has a depth of 6, a hidden-layer dimension of 768 and a multi-layer perceptron dimension of 3072.
4. The method for analyzing an electroencephalogram signal based on multi-band fusion and a space-time Transformer according to claim 1, characterized in that the MEET model based on multi-band fusion and a space-time Transformer has a depth of 12, a hidden-layer dimension of 1024 and a multi-layer perceptron dimension of 4096.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211433136.2A CN115969381B (en) | 2022-11-16 | 2022-11-16 | Electroencephalogram signal analysis method based on multi-band fusion and space-time Transformer |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211433136.2A CN115969381B (en) | 2022-11-16 | 2022-11-16 | Electroencephalogram signal analysis method based on multi-band fusion and space-time Transformer |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115969381A (en) | 2023-04-18 |
CN115969381B CN115969381B (en) | 2024-04-30 |
Family
ID=85960118
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211433136.2A Active CN115969381B (en) | 2022-11-16 | 2022-11-16 | Electroencephalogram signal analysis method based on multi-band fusion and space-time Transformer |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115969381B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021104099A1 (en) * | 2019-11-29 | 2021-06-03 | 中国科学院深圳先进技术研究院 | Multimodal depression detection method and system employing context awareness |
CN113907706A (en) * | 2021-08-29 | 2022-01-11 | 北京工业大学 | Electroencephalogram seizure prediction method based on multi-scale convolution and self-attention network |
CN114176607A (en) * | 2021-12-27 | 2022-03-15 | 杭州电子科技大学 | Electroencephalogram signal classification method based on visual Transformer |
CN114298216A (en) * | 2021-12-27 | 2022-04-08 | 杭州电子科技大学 | Electroencephalogram vision classification method based on time-frequency domain fusion Transformer |
CN115238731A (en) * | 2022-06-13 | 2022-10-25 | 重庆邮电大学 | Emotion identification method based on convolution recurrent neural network and multi-head self-attention |
Non-Patent Citations (1)
Title |
---|
官金安; 汪鹭汐; 赵瑞娟; 李东阁; 吴欢: "Research on IR-BCI EEG video decoding based on a 3D convolutional neural network" (基于3D卷积神经网络的IR-BCI脑电视频解码研究), Journal of South-Central Minzu University (Natural Science Edition), no. 04, 15 December 2019 (2019-12-15) *
Also Published As
Publication number | Publication date |
---|---|
CN115969381B (en) | 2024-04-30 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | GR01 | Patent grant | |