CN115813409A - Ultra-low-delay motor imagery electroencephalogram decoding method - Google Patents

Ultra-low-delay motor imagery electroencephalogram decoding method

Info

Publication number
CN115813409A
Authority
CN
China
Prior art keywords
electroencephalogram
mapping
module
domain
spatial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211542339.5A
Other languages
Chinese (zh)
Inventor
康晓洋
王君孔帅
方涛
穆伟
王鹏超
王璐
张立华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN202211542339.5A priority Critical patent/CN115813409A/en
Publication of CN115813409A publication Critical patent/CN115813409A/en
Pending legal-status Critical Current


Abstract

The invention discloses an ultra-low-delay motor imagery electroencephalogram decoding method. In the method, a signal conduction model is established using a public head anatomy template, and cortical electroencephalogram activity is obtained using a standardized low-resolution tomography method. To solve the problem of the large amount of computation caused by the rise in channel count, a filter bank common spatial pattern method is used to obtain spatial filter kernels, reducing the computation of feature extraction to linear cost. Automatic classification and selection of features is accomplished using a classification network with a self-attention mechanism over three domains. The invention achieves state-of-the-art accuracy on the four-class motor imagery electroencephalogram task, with relatively low delay and high physiological interpretability. The decoding method provided by the invention is conducive to realizing a low-delay human-computer interaction system.

Description

Ultra-low-delay motor imagery electroencephalogram decoding method
Technical Field
The invention belongs to the fields of computer application technology, biomedical engineering, artificial intelligence, and brain science. It relates to the feature extraction and classification of human electroencephalogram signals, and in particular to a brain-computer interface control and pattern recognition method.
Background
Brain-computer interface (BCI) technology decodes human brain activity into instructions, which in turn are used to generate control signals. For example, a subject may complete text entry or control cursor and robotic-arm movement simply by imagining the action. The technology not only provides an alternative way for paralyzed patients to interact with the outside world, but also offers a brand-new control strategy for healthy people. Among the different types of human brain physiological signals (ECoG, LFP, EMG and the like), non-invasive scalp EEG (electroencephalogram) is the focus of research in the BCI field due to its high temporal resolution and convenience. In particular, motor imagery electroencephalogram (MI-EEG) requires no external stimulus, enabling a more natural human-computer interaction system.
However, the spatial resolution of brain electrical signals collected by the limited number of electrodes covering the scalp is very low, so decoding algorithms cannot effectively exploit the spatial information of motor imagery EEG. The reason is the volume conduction effect: cortical neuronal activity spreads through brain tissue to different locations on the scalp, greatly impairing the spatial expression of intracranial neuronal activity at the scalp. The volume conduction effect makes the signal content measured by multiple sensors similar, further impairing the effectiveness of decoding algorithms. Some methods increase the number of scalp electrodes to compensate for this drawback, but a minimum distance between electrodes must always be maintained, so the number of electrodes has an upper limit. Other methods combine multi-modal electromagnetic physiological acquisition, for example collecting a near-infrared signal alongside the EEG and using the high spatial resolution of the near-infrared signal to compensate for the low spatial resolution of the EEG. However, this greatly increases application cost and is inconvenient in practice. Therefore, a decoding algorithm is needed that can analyze cortical activity in an experimental environment without any additional auxiliary electrophysiological acquisition device, so as to achieve spatially high-resolution analysis of the EEG signal.
In addition, many works have applied deep learning techniques to motor imagery decoding, where supervised methods dominate, particularly convolutional neural networks (CNN) and recurrent neural networks (RNN). These deep learning methods are usually paired with some EEG feature extraction method, and the combination improves the decoding efficiency and accuracy of EEG signals. This, however, raises a common problem: when the amount of computation is too large, different EEG feature extraction methods and different deep learning models consume very different amounts of time. How to select a suitable feature extraction method and deep learning model so that signal decoding is both fast and accurate has become an urgent problem to solve.
Given these problems, how to design an efficient and accurate motor imagery electroencephalogram decoding system is well worth studying, and it is the problem this invention sets out to solve.
Disclosure of Invention
To solve the above problems, the present invention aims to propose a motor imagery electroencephalogram decoding method that achieves ultra-low delay through non-invasive neural imaging and spatial filter transformation. The invention maps sensor-domain EEG signals to the source domain by the electrophysiological source imaging (ESI) technique, thereby mitigating the influence of the volume conduction effect on the EEG and improving its spatial resolution. Aiming at the computational load brought by the improved spatial resolution, the invention trains a group of spatial filters using the filter bank common spatial pattern (FBCSP) algorithm, reducing the feature extraction time at test time to linear cost. Finally, the invention classifies the obtained features with a neural network equipped with a frequency-domain-spatial-domain-time-domain self-attention mechanism, ensuring the accuracy and efficiency of signal decoding. In the test stage, apart from simple filtering, the complexity of every step of the invention is controlled to O(n²); therefore, the motor imagery electroencephalogram decoding framework provided by the invention is expected to realize a brain-computer interface system with extremely low delay.
The technical scheme of the invention is specifically described as follows.
A method of ultra-low-delay motor imagery electroencephalogram decoding, comprising the following steps:
(1) Data preprocessing: performing band-pass filtering and signal average re-referencing on the electroencephalogram motor imagery data set;
(2) Establishing a signal conduction model using a public head anatomy template, and completing the mapping from scalp electroencephalogram to the cortical source domain by the dynamic statistical parametric mapping (dSPM) method;
(3) Extracting the features of the electroencephalogram signals of the regions of interest using the filter bank common spatial pattern (FBCSP) algorithm;
(4) Carrying out motor imagery classification on the electroencephalogram characteristic data by using a classification model based on a visual Transformer; wherein:
the visual-Transformer-based classification model comprises an encoder and a decoder. The encoder comprises a key value data mapping module and a spatial domain and frequency domain self-attention mechanism module; the decoder comprises a patch embedding module, a key value data mapping module and a time domain self-attention mechanism module. The key value data mapping module maps an input vector X to a query Q, a key K and a value V. The self-attention mechanism module takes the query Q and the key K as input to obtain attention weights, then multiplies the result by the value V to obtain attention values. The patch embedding module converts input two-dimensional data into a plurality of one-dimensional patches for embedding. The encoder takes the electroencephalogram signal features extracted in step (3) as input, the decoder takes the one-dimensional data of the patch embedding module as input, and the output of the model is the output of the decoder.
In the invention, in step (1), the electroencephalogram motor imagery data set contains four motor imagery tasks, for example four of: left hand, right hand, foot, rest state and tongue. Band-pass filtering at 8-32Hz is performed using a Butterworth filter. The band-pass filtering makes the subsequent frequency-band division sufficiently even while effectively removing low-frequency drift and power-frequency interference. The average re-reference improves the localization accuracy of the source imaging in the next step.
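For concreteness, a minimal preprocessing sketch in Python, assuming `eeg` is a (n_channels, n_samples) NumPy array sampled at `sfreq`; the filter order of 4 and the example sizes are illustrative assumptions, not values stated in the patent:

```python
# A minimal sketch of step (1), assuming a NumPy array of raw scalp EEG.
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(eeg: np.ndarray, sfreq: float) -> np.ndarray:
    # 8-32 Hz Butterworth band-pass: keeps the six 4-Hz sub-bands used later
    # and removes low-frequency drift and power-line interference.
    b, a = butter(4, [8.0, 32.0], btype="bandpass", fs=sfreq)
    filtered = filtfilt(b, a, eeg, axis=-1)
    # Common average reference: subtract the instantaneous mean over channels,
    # which improves the localization accuracy of the source-imaging step.
    return filtered - filtered.mean(axis=0, keepdims=True)

eeg = np.random.randn(22, 1000)          # stand-in for one trial at 250 Hz
clean = preprocess(eeg, sfreq=250.0)
```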
In the invention, the specific steps of step (2) are as follows:
(1) creating a three-layer head model as the signal conduction model by segmenting the ICBM152 magnetic resonance image;
(2) then using the boundary element method BEM to obtain a lead field matrix, which quantitatively describes the changes during signal volume propagation;
(3) finally, obtaining a source imaging mapping kernel by the dynamic statistical parametric mapping method dSPM, which directly converts the EEG measured by the scalp sensors into cortical EEG.
In step (2) above, when calculating the boundary element method BEM, the conductivities of the scalp, skull and brain are set to 0.3300 S/m, 0.0220 S/m and 0.3300 S/m, respectively. In step (3) above, the number of sources generated for the motor imagery EEG data set is four.
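For orientation, a minimal sketch of this forward/inverse chain using MNE-Python, assuming the fsaverage template bundled with MNE stands in for the ICBM152 anatomy and that `epochs` holds preprocessed motor-imagery epochs with an EEG montage set; `lambda2` is an illustrative regularization value, not a parameter stated in the patent:

```python
# A sketch of step (2) under the stated conductivities, using MNE-Python.
import os.path as op
import mne
from mne.datasets import fetch_fsaverage

fs_dir = fetch_fsaverage(verbose=False)       # template anatomy shipped with MNE
subjects_dir = op.dirname(fs_dir)
src = op.join(fs_dir, "bem", "fsaverage-ico-5-src.fif")

# Three-layer BEM with the conductivities stated above (brain, skull, scalp, S/m).
model = mne.make_bem_model("fsaverage", ico=4,
                           conductivity=(0.33, 0.022, 0.33),
                           subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)

# `epochs` is assumed: preprocessed motor-imagery epochs with a montage set.
fwd = mne.make_forward_solution(epochs.info, trans="fsaverage", src=src,
                                bem=bem, eeg=True, meg=False)
noise_cov = mne.compute_covariance(epochs, tmax=0.0)
inv = mne.minimum_norm.make_inverse_operator(epochs.info, fwd, noise_cov)
stcs = mne.minimum_norm.apply_inverse_epochs(epochs, inv, lambda2=1.0 / 9.0,
                                             method="dSPM")
```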
In step (2), the mapping relationship of the whole process can be expressed as formula (1):

$$L \cdot S_{source} = S_{sensor} \qquad (1)$$

where $S_{source}$ and $S_{sensor}$ denote the source dipole signals and the scalp EEG signals, respectively, and $L$ denotes the mapping matrix.
However, since the source dipoles usually have more elements than the scalp electrodes, this constitutes a "non-unique" underdetermined problem; that is, the same scalp electrode measurement may be produced by different source spatial patterns. Common practice is to impose constraints through a linear estimation method and find the most probable source current distribution $\hat{S}_{source}$. Equations (2)-(4) describe this process, where $E_{MNE}$ is not just a simple inverse matrix of $L$, but a multi-parameter expression mixing the regularization parameter $\lambda$ and the noise covariance $C$:

$$\hat{S}_{source} = E_{MNE} \cdot S_{sensor} \qquad (2)$$

$$E_{MNE} = G_{MNE} \cdot L \qquad (3)$$

$$L^{T}\left(L L^{T} + \lambda C\right)^{-1} = G_{MNE} \qquad (4)$$

Unlike minimum norm estimation (MNE), the dynamic statistical parametric mapping (dSPM) used in the invention normalizes the estimated source activity using the noise covariance matrix. The method limits the amplitude of the expected current, converting it into a dimensionless statistical test variable by dividing by the corresponding noise variance. This higher-level "depth weighting" of higher-amplitude source activity can significantly improve localization error; the corresponding weighting matrix is denoted $W_{dSPM}$. The corresponding formulas are given in (5) and (6):

$$G_{dSPM} = W_{dSPM}\, G_{MNE} \qquad (5)$$

$$W_{dSPM} = \operatorname{diag}\!\left(G_{MNE}\, C\, G_{MNE}^{T}\right)^{-1/2} \qquad (6)$$

where diag takes the elements on the diagonal of the matrix. $E_{MNE}$ in formula (2) can then be replaced by $G_{dSPM}$.
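A numerical sketch of the kernels in equations (4)-(6) with a toy lead field; all shapes, the identity noise covariance and the $\lambda$ value are illustrative assumptions:

```python
# A toy realization of the MNE/dSPM kernels; random data, not real EEG.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 22, 96
L = rng.standard_normal((n_sensors, n_sources))  # lead field, eq. (1)
C = np.eye(n_sensors)                            # noise covariance (assumed)
lam = 1.0 / 9.0                                  # regularization lambda (assumed)

# Eq. (4): the MNE inverse kernel mapping sensors to sources.
G_mne = L.T @ np.linalg.inv(L @ L.T + lam * C)

# Eq. (6): dSPM weights, 1/sqrt of each source's noise-normalized variance.
W_dspm = np.diag(1.0 / np.sqrt(np.diag(G_mne @ C @ G_mne.T)))

# Eq. (5): the dSPM kernel; applying it to scalp data realizes eq. (2).
G_dspm = W_dspm @ G_mne
S_sensor = rng.standard_normal((n_sensors, 250))  # 1 s of scalp EEG at 250 Hz
S_source_hat = G_dspm @ S_sensor                  # noise-normalized cortical estimate
```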
In the invention, in step (3), two regions in the Desikan-Killiany brain atlas are selected as the regions of interest: the precentral gyrus region and the postcentral gyrus region. These two areas cover Brodmann areas 4 and 6, which contain the primary and secondary motor areas and play an important role while the motor imagery task is being performed; restricting to them also avoids the rise in computational load due to the increased number of channels of the EEG signals after electrophysiological source imaging.
In the invention, step (3) adopts the FBCSP filter bank common spatial pattern algorithm, which includes 6 spatial filters corresponding to the 6 frequency bands 8-12Hz, 12-16Hz, 16-20Hz, 20-24Hz, 24-28Hz and 28-32Hz. On each spatial filter, the spatial filter parameters corresponding to the 4 largest eigenvalues are selected per source signal class, and finally all the spatial filters are spliced and combined. The resulting spatial filter satisfies $W \in \mathbb{R}^{m \times n}$, with $n = 96$ and $m$ the number of sources generated in the source imaging step.
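A minimal FBCSP sketch using mne.decoding.CSP, assuming `X` is an array of ROI source signals with shape (n_trials, n_sources, n_times) sampled at `sfreq` and `y` holds the four class labels; keeping 16 filters per band (4 per class, 96 in total) follows the description, while the remaining settings are illustrative assumptions:

```python
# A sketch of step (3): one CSP per sub-band, filters concatenated into W.
import numpy as np
from mne.decoding import CSP
from mne.filter import filter_data

bands = [(8, 12), (12, 16), (16, 20), (20, 24), (24, 28), (28, 32)]

band_filters = []
for lo, hi in bands:
    Xb = filter_data(X, sfreq, lo, hi, verbose=False)   # band-limit the sources
    csp = CSP(n_components=16, transform_into="csp_space")
    csp.fit(Xb, y)                                      # multiclass CSP fit
    band_filters.append(csp.filters_[:16])              # 16 filters per band

W = np.concatenate(band_filters, axis=0)  # 96 x n_sources filter bank
# At test time, per-band feature extraction is a single product: F_b = W_b @ S_b.
```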
In the invention, in step (4), the visual-Transformer-based classification model includes attention mechanism modules over several domains and is well suited to classifying EEG tasks with frequency-domain-spatial-domain-time-domain characteristics. The attention mechanism satisfies:

$$\operatorname{Attention}(Q, K, V) = \operatorname{Softmax}\!\left(\frac{Q K^{T}}{\sqrt{d}}\right) V$$

wherein Q, K and V are obtained from the input X through linear transformation; X is not used directly, which improves the fitting capability of the model, because the matrices $W_{Q}$, $W_{K}$ and $W_{V}$ producing Q, K and V are obtained through training and better reflect the feature distribution of the input X. Since Q and K are both linear transformations of X, the magnitude of the dot product $Q K^{T}$ reflects the correlation between different input channels. The Softmax operation normalizes its input and improves the stability of the model. Assuming the elements of Q and K have mean 0 and variance 1, the variance of $Q K^{T}$ is $d$; when $d$ becomes large, $\operatorname{Softmax}(Q K^{T})$ tends to be steep, so $Q K^{T}$ is divided by the scaling factor $\sqrt{d}$ to remove the variance introduced by the dot product, restoring a variance of 1 and keeping the model stable during training. The Softmax output is multiplied by V so that the result conforms to the feature distribution of the original input X. The channel data filtered by the spatial filter generated by FBCSP has size m×n; because this filter is a concatenation of filters generated on different frequency bands, Attention(Q, K, V) over it actually reflects attention results in the frequency domain and the spatial domain. Similarly, a corresponding attention mechanism is implemented in the time domain; the model proposed in the invention is therefore based on frequency-domain-spatial-domain-time-domain attention.
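A minimal PyTorch sketch of the scaled dot-product attention above; the batch, channel and embedding sizes are illustrative assumptions:

```python
# Scaled dot-product self-attention as in the formula above.
import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # W_Q, W_K, W_V are learned, so Q/K/V reflect the distribution of X.
        self.w_q = nn.Linear(dim, dim, bias=False)
        self.w_k = nn.Linear(dim, dim, bias=False)
        self.w_v = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        d = q.size(-1)
        # Divide by sqrt(d) so Softmax does not saturate when d is large.
        attn = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(d), dim=-1)
        return attn @ v

x = torch.randn(8, 96, 64)         # (batch, FBCSP channels, embedding dim)
out = SelfAttention(64)(x)         # same shape as x
```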
In the invention, in step (4), the steps of feature classification by the visual-Transformer classification model are as follows:
(1) mapping the input vector to a query Q, a key K and a value V through the key value data mapping module;
(2) obtaining a vector with frequency-domain-spatial-domain characteristics through the spatial domain and frequency domain self-attention mechanism module, where Nh1 is the number of heads;
(3) inputting the vector with frequency-domain-spatial-domain characteristics into the patch embedding module, which converts the two-dimensional data into a plurality of one-dimensional patches for embedding;
(4) outputting a vector with time-domain-frequency-domain-spatial-domain characteristics through the time domain self-attention mechanism module, where Nh2 is the number of heads;
(5) finally, inputting the vectors with time-domain-frequency-domain-spatial-domain characteristics into the linear layer Fc1 and the linear layer Fc2 in sequence for classification.
Preferably, in step (4), the spatial domain, frequency domain and time domain self-attention mechanism modules each use 5 heads; the sizes of the linear layer Fc1 and the linear layer Fc2 are 1900 and 4, respectively (see the sketch below).
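A structural sketch, in PyTorch, of the encoder-decoder classifier described by steps (1)-(5), assuming 5 heads and the Fc1/Fc2 sizes given above; the input length, patch length and embedding dimension are illustrative assumptions, not the patent's exact values:

```python
# A sketch of the frequency-spatial-time attention classifier.
import torch
import torch.nn as nn

class FreqSpaceTimeClassifier(nn.Module):
    def __init__(self, n_channels=96, n_times=250, n_heads=5, n_classes=4,
                 patch_len=25, embed_dim=100):
        super().__init__()
        # Encoder: tokens are the 96 FBCSP channels, so one attention map
        # spans the frequency and spatial domains at once (Nh1 = 5 heads).
        self.enc_attn = nn.MultiheadAttention(n_times, n_heads, batch_first=True)
        # Decoder: split each channel's series into 1-D patches, embed them,
        # then attend across patches, i.e. the time domain (Nh2 = 5 heads).
        self.n_patches = n_times // patch_len
        self.patch_embed = nn.Linear(patch_len, embed_dim)
        self.dec_attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.fc1 = nn.Linear(n_channels * self.n_patches * embed_dim, 1900)
        self.fc2 = nn.Linear(1900, n_classes)

    def forward(self, x):                         # x: (batch, 96, n_times)
        x, _ = self.enc_attn(x, x, x)             # frequency/spatial attention
        b, c, t = x.shape
        p = x.reshape(b * c, self.n_patches, -1)  # 1-D patches per channel
        p = self.patch_embed(p)
        p, _ = self.dec_attn(p, p, p)             # time-domain attention
        return self.fc2(torch.relu(self.fc1(p.reshape(b, -1))))

logits = FreqSpaceTimeClassifier()(torch.randn(8, 96, 250))  # shape (8, 4)
```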
In the invention, in step (4), during training of the visual-Transformer-based classification model, the training set and test set are split at a ratio of 4:1, the learning rate is set to 0.001, and the number of training epochs is set to 1000.
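A minimal training-loop sketch under the stated settings (4:1 split, lr = 0.001, 1000 epochs), reusing the classifier sketch above; the feature/label tensors and batch size are assumptions:

```python
# Training under the stated hyperparameters; `features` is a float tensor of
# shape (N, 96, 250) and `labels` a long tensor of class indices (assumed).
import torch
from torch.utils.data import TensorDataset, DataLoader, random_split

dataset = TensorDataset(features, labels)
n_train = int(0.8 * len(dataset))                     # 4:1 split
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = FreqSpaceTimeClassifier()
opt = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(1000):
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
```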
Compared with the prior art, the invention has the following beneficial effects: the method achieves state-of-the-art accuracy on the four-class motor imagery electroencephalogram task, with relatively low delay and high physiological interpretability. The decoding method provided by the invention is conducive to realizing a low-delay human-computer interaction system.
Drawings
Fig. 1 is a block diagram of the entire decoding system.
FIG. 2 is the feature extraction framework that computes features from the source signals based on the FBCSP algorithm.
Fig. 3 is a frequency domain-spatial domain-time domain self-attention mechanism classification network architecture for classification.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
In an embodiment, a motor imagery electroencephalogram decoding method achieving ultra-low delay through non-invasive neural imaging and spatial filter transformation is provided, with the following specific steps:
1. using the public BCIC IV IIa and High Gamma electroencephalogram data sets as the electroencephalogram signals to be analyzed;
2. in the preprocessing stage, band-pass filtering at 8-32Hz is first applied to the electroencephalogram, and the electrode reference is converted to a common average reference (CAR);
3. completing the mapping from scalp EEG to cortical EEG using the dynamic statistical parametric mapping (dSPM) method;
4. selecting two regions of interest in the Desikan-Killiany brain atlas for further analysis: the precentral and postcentral cortical regions;
5. in the feature extraction part, extracting the features of the electroencephalogram signals using the filter bank common spatial pattern (FBCSP) algorithm;
6. in the data feature classification part, building a classification framework based on the visual Transformer model using the PyTorch toolkit.
In step 1, the High Gamma and BCIC IV IIa data sets contain 14 and 9 subjects, respectively. Each subject in the High Gamma data set performed four motor imagery tasks (left hand, right hand, foot and rest), for a total of 640 trials per subject. Each subject in the BCIC IV IIa data set performed four motor imagery tasks (left hand, right hand, foot and tongue), for a total of 576 trials per subject.
In step 2, the data are first preprocessed simply: the source signal is filtered at 8-32Hz with a Butterworth filter, and average re-referencing is applied to improve signal quality.
In step 3, in the source signal generation section, a three-layer head model is first created by segmenting the ICBM152 magnetic resonance image. Next, on the basis of this head model, a lead field matrix is obtained using the boundary element method; this matrix quantitatively describes the changes during signal volume propagation. Finally, a source imaging mapping kernel is obtained using the dynamic statistical parametric mapping method (dSPM). The kernel is a constant coefficient matrix, so the EEG measured by the scalp sensors can be converted directly into cortical EEG by an ordinary matrix multiplication. The whole process is shown in fig. 1 (a).
In step 4, in the ROI selection phase, regions of interest, mainly the precentral and postcentral regions, are selected based on the Desikan-Killiany brain atlas, and only the source signals within the regions of interest are kept for decoding. ROI selection can also be viewed as a kind of matrix mapping, again implemented by matrix multiplication. The whole process is shown in fig. 1 (b).
In step 5, in the feature extraction stage, the source signals are divided into a training set and a test set at a ratio of 4:1. The position of this stage in the overall framework is shown in the left half of fig. 1 (a), where sEEG denotes the brain electrical signals after ROI selection; the specific implementation details are shown in fig. 2. A spatial filter bank containing 6 spatial filters is trained on the training set, each corresponding to one of the frequency bands divided after filtering (8-12Hz, 12-16Hz, 16-20Hz, 20-24Hz, 24-28Hz, 28-32Hz). On each sub-band, the 4 parameters with the largest eigenvalues are extracted for each task, so 16 (4 × 4) spatial filter parameters corresponding to the largest eigenvalues are selected per band. All the resulting spatial filters are then spliced and combined, so that the number of channels after spatial filtering becomes 96 (16 × 6). The final spatial filter satisfies $W \in \mathbb{R}^{m \times n}$, with $n = 96$ and $m$ the number of sources generated in step 3. Once the spatial filter is trained, feature extraction can be regarded as a single matrix multiplication, F = W × S, where F denotes the extracted features, W the spatial filter matrix, and S the source signal.
In step 6, in the feature classification stage, a decoding framework based on the visual Transformer model is used. Fig. 3 details the frequency-domain-spatial-domain-time-domain attention network architecture based on the Transformer architecture implemented here. In the figure, (a) represents the key value data mapping module, which maps an input vector X to Q (query), K (key) and V (value). (b) represents the self-attention mechanism: Q and K are input to obtain attention weights, and the result is then multiplied by V to obtain the attention value; attention reflects the importance of the features. (c) shows the patch embedding module, which converts input two-dimensional data into a plurality of one-dimensional patches for embedding. Nh1 and Nh2 are the numbers of heads, both 5. Fc1 and Fc2 are linear layers with sizes of 1900 and 4, respectively. Some matrix sizes are simplified for ease of illustration. During model training, the training set and test set are split at a ratio of 4:1, the learning rate is set to 0.001, and the number of epochs is 1000. Training is completed on an NVIDIA RTX 3090 (24G), and the network is built with the PyTorch toolkit. The features acquired in the feature extraction stage are fed into the model described above, which computes the probability of each of the four categories and returns the label with the highest probability as the prediction, realizing end-to-end data feature classification. The average accuracy is 82.1% ± 1.9% on the BCIC IV IIa data set and 85.8% ± 1.5% on the HGD data set. In the test phase, the method has extremely low decoding delay: a single trial takes only 0.02 s from obtaining the signal to giving the prediction.
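A rough timing sketch of the test-time pipeline, reusing the classifier sketch above; the kernels here are random stand-ins and all sizes (22 sensors, 96 sources, 250 samples) are illustrative assumptions, so the measured time is not the patent's 0.02 s figure:

```python
# Test time: two matrix products plus one forward pass.
import time
import numpy as np
import torch

G_dspm = np.random.randn(96, 22)     # source-imaging kernel (stand-in)
W = np.random.randn(96, 96)          # concatenated FBCSP filter bank (stand-in)
model = FreqSpaceTimeClassifier().eval()

s_sensor = np.random.randn(22, 250)  # one trial of band-passed scalp EEG
t0 = time.perf_counter()
s_source = G_dspm @ s_sensor         # scalp -> cortex, one matrix product
f = W @ s_source                     # feature extraction, one matrix product
with torch.no_grad():
    logits = model(torch.from_numpy(f).float().unsqueeze(0))
dt = time.perf_counter() - t0
print(f"predicted class {int(logits.argmax())} in {dt:.4f} s")
```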
In conclusion, using the above design method and data, the invention obtains a motor imagery electroencephalogram decoding framework that combines high-precision decoding of multiple motor imagery tasks with low latency. In particular, in the subsequent data measurement stage, once the training of the cortical EEG mapping matrix of the source imaging part, the ROI selection mapping matrix and the FBCSP spatial filter matrix is completed, the data computation reduces to matrix multiplications, greatly lowering the time and cost of EEG feature extraction and providing sufficient preconditions for the subsequent realization of a low-delay human-computer interaction system.
Although illustrative embodiments of the present invention have been described above to help those skilled in the art understand the invention, it should be understood that the invention is not limited to the scope of these embodiments. To a person of ordinary skill in the art, various changes are possible; all inventions and creations that make use of the inventive concept are protected, as long as they do not depart from the spirit and scope of the invention as defined and determined by the appended claims.

Claims (10)

1. A method of ultra-low-delay motor imagery electroencephalogram decoding, comprising the steps of:
(1) Data preprocessing: performing band-pass filtering and signal average re-referencing on the electroencephalogram motor imagery data set;
(2) Establishing a signal conduction model using a public head anatomy template, and completing the mapping from scalp electroencephalogram to the cortical source domain by the dynamic statistical parametric mapping (dSPM) method;
(3) Extracting the features of the electroencephalogram signals of the regions of interest using the filter bank common spatial pattern (FBCSP) algorithm;
(4) Carrying out motor imagery classification on the electroencephalogram characteristic data by using a classification model based on a visual Transformer; wherein:
the visual-Transformer-based classification model comprises an encoder and a decoder, wherein the encoder comprises a key value data mapping module and a spatial domain and frequency domain self-attention mechanism module, and the decoder comprises a patch embedding module, a key value data mapping module and a time domain self-attention mechanism module; the key value data mapping module is used for mapping an input vector X to a query Q, a key K and a value V; the self-attention mechanism module takes the query Q and the key K as input to obtain attention weights, then multiplies the result by the value V to obtain attention values; the patch embedding module is used for converting input two-dimensional data into a plurality of one-dimensional patches for embedding; the encoder takes the electroencephalogram signal features extracted in step (3) as input, the decoder takes the one-dimensional data of the patch embedding module as input, and the output of the model is the output of the decoder.
2. The method according to claim 1, wherein in step (1), the electroencephalogram motor imagery data set contains four motor imagery tasks, and band-pass filtering at 8-32Hz is performed using a Butterworth filter.
3. The method of claim 1, wherein the specific steps of step (2) are as follows:
(1) creating a three-layer head model as the signal conduction model by segmenting the ICBM152 magnetic resonance image;
(2) then using the boundary element method BEM to obtain a lead field matrix, which quantitatively describes the changes during signal volume propagation;
(3) finally, obtaining a source imaging mapping kernel by the dynamic statistical parametric mapping method dSPM, which directly converts the EEG measured by the scalp sensors into cortical EEG.
4. The method according to claim 3, wherein in step (2), when calculating the boundary element method BEM, the conductivities of the scalp, skull and brain are set to 0.3300 S/m, 0.0220 S/m and 0.3300 S/m, respectively.
5. The method of claim 3, wherein the number of sources generated for the motor imagery electroencephalogram data set is four.
6. The method of claim 1, wherein in step (3), two regions in the Desikan-Killiany brain atlas are selected as the regions of interest: the precentral gyrus region and the postcentral gyrus region.
7. The method according to claim 1, wherein in step (3), the FBCSP filter bank common spatial pattern algorithm is adopted, which includes 6 spatial filters corresponding to the 6 frequency bands 8-12Hz, 12-16Hz, 16-20Hz, 20-24Hz, 24-28Hz and 28-32Hz; on each spatial filter, the spatial filter parameters corresponding to the 4 largest eigenvalues are selected according to the source signal type, and finally all the spatial filters are spliced and combined.
8. The method of claim 1, wherein in step (4), the attention mechanism satisfies:

$$\operatorname{Attention}(Q, K, V) = \operatorname{Softmax}\!\left(\frac{Q K^{T}}{\sqrt{d}}\right) V$$

wherein Q, K and V are all obtained from the input X through linear transformation, and $\sqrt{d}$ is a scaling factor.
9. The method of claim 1, wherein in step (4), the steps of feature classification by the visual-Transformer-based classification model comprise:
(1) mapping the input vector to a query Q, a key K and a value V through the key value data mapping module;
(2) obtaining a vector with frequency-domain-spatial-domain characteristics through the spatial domain and frequency domain self-attention mechanism module;
(3) inputting the vector with frequency-domain-spatial-domain characteristics into the patch embedding module, which converts the two-dimensional data into a plurality of one-dimensional patches for embedding;
(4) outputting a vector with time-domain-frequency-domain-spatial-domain characteristics through the time domain self-attention mechanism module;
(5) finally, inputting the vectors with time-domain-frequency-domain-spatial-domain characteristics into the linear layer Fc1 and the linear layer Fc2 for classification.
10. The method of claim 1, wherein in step (4), during training of the visual-Transformer-based classification model, the ratio of the training set to the test set is 4:1, the learning rate is set to 0.001, and the number of training epochs is set to 1000.
CN202211542339.5A 2022-12-02 2022-12-02 Ultra-low-delay motor imagery electroencephalogram decoding method Pending CN115813409A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211542339.5A CN115813409A (en) 2022-12-02 Ultra-low-delay motor imagery electroencephalogram decoding method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211542339.5A CN115813409A (en) 2022-12-02 Ultra-low-delay motor imagery electroencephalogram decoding method

Publications (1)

Publication Number Publication Date
CN115813409A true CN115813409A (en) 2023-03-21

Family

ID=85543860

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211542339.5A Pending CN115813409A (en) Ultra-low-delay motor imagery electroencephalogram decoding method

Country Status (1)

Country Link
CN (1) CN115813409A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116595455A (en) * 2023-05-30 2023-08-15 江南大学 Motor imagery electroencephalogram signal classification method and system based on space-time frequency feature extraction
CN116595455B (en) * 2023-05-30 2023-11-10 江南大学 Motor imagery electroencephalogram signal classification method and system based on space-time frequency feature extraction

Similar Documents

Publication Publication Date Title
CN109165556B (en) Identity recognition method based on GRNN
Lahoud et al. Zero-learning fast medical image fusion
Miao et al. A spatial-frequency-temporal optimized feature sparse representation-based classification method for motor imagery EEG pattern recognition
US20040220782A1 (en) Signal interpretation engine
CN112244873A (en) Electroencephalogram time-space feature learning and emotion classification method based on hybrid neural network
CN110090017B (en) Electroencephalogram signal source positioning method based on LSTM
CN114533086B (en) Motor imagery brain electrolysis code method based on airspace characteristic time-frequency transformation
Cui et al. EEG source localization using spatio-temporal neural network
CN110522412B (en) Method for classifying electroencephalogram signals based on multi-scale brain function network
Zeng et al. GRP-DNet: A gray recurrence plot-based densely connected convolutional network for classification of epileptiform EEG
CN109199376A (en) The coding/decoding method of Mental imagery EEG signals based on the imaging of OA-WMNE brain source
CN112957014A (en) Pain detection and positioning method and system based on brain waves and neural network
CN111914925B (en) Patient behavior multi-modal perception and analysis system based on deep learning
Janapati et al. Towards a more theory-driven BCI using source reconstructed dynamics of EEG time-series
Yue et al. Exploring BCI control in smart environments: intention recognition via EEG representation enhancement learning
Wang et al. Multiband decomposition and spectral discriminative analysis for motor imagery BCI via deep neural network
CN115813409A (en) Ultra-low-delay motor imagery electroencephalogram decoding method
Jiang et al. Analytical comparison of two emotion classification models based on convolutional neural networks
Asadzadeh et al. Accurate emotion recognition utilizing extracted EEG sources as graph neural network nodes
Hansen et al. Spatio-temporal reconstruction of brain dynamics from EEG with a Markov prior
Li et al. Subject-based dipole selection for decoding motor imagery tasks
Dinh et al. Contextual minimum-norm estimates (CMNE): a deep learning method for source estimation in neuronal networks
Liu et al. WRA-MTSI: a robust extended source imaging algorithm based on multi-trial EEG
CN114428555B (en) Electroencephalogram movement intention recognition method and system based on cortex source signals
Rashid et al. Analyzing functional magnetic resonance brain images with opencv2

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination