CN115844425B - DRDS brain electrical signal identification method based on Transformer brain region time sequence analysis - Google Patents
DRDS brain electrical signal identification method based on Transformer brain region time sequence analysis
- Publication number: CN115844425B (application CN202211588317.2A)
- Authority: CN (China)
- Prior art keywords: time, EEG, features, brain region, time sequence
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a DRDS brain electrical signal identification method based on Transformer brain region time sequence analysis, and relates to the technical field of brain electrical signal identification. The invention comprises the following steps: preprocessing EEG data of a subject, and slicing the preprocessed EEG data to be used as input samples of a network; extracting features of the EEG signals by using time convolution and dimension transformation, and sending the extracted features to a brain region Transformer module containing a Transformer Encoder structure to extract spatial features; transposing the spatial features and sending them into a time sequence Transformer module containing a Transformer Encoder structure to extract global self-attention features and time sequence features; and constructing a space-time multi-scale convolution fusion module to obtain advanced EEG space-time features and complete the classification of EEG signals. The invention effectively utilizes the spatial connections between electroencephalogram electrodes, fully mines the time sequence information between contexts, and improves classification accuracy.
Description
The invention relates to the technical field of electroencephalogram signal identification, in particular to a DRDS electroencephalogram signal identification method based on Transformer brain region time sequence analysis.
Background
Stereoscopic vision is an important physiological index of visual function and a key to good motion control and accurate stereoscopic cognition; from a medical perspective it is reflected in stereoscopic visual acuity, i.e., the minimum parallax that triggers stereoscopic perception. In recent years, many stereoscopy-related studies have been conducted on the horizontal parallax of static stereograms or on stereoscopic depth motion with varying speeds, parallaxes, and the like, while few recognition studies have addressed depth dynamic random dot stereograms (Dynamic Random Dots Stereogram, DRDS). Related studies have shown that object motion in stereoscopic scenes is closely related to stereoscopic vision. Stereoscopic vision research can acquire various physiological electrical signals from subjects by means of precise acquisition equipment, and then design signal feature extraction methods for further analysis to obtain results with greater generality and objectivity. In the process of stereoscopic visual cognition, some researchers focus on physiological indexes that can objectively reflect a subject's intention, including: blood pressure, heart rate, electro-oculogram signals, electromyogram signals, electrocardiogram signals, brain activity, etc. In recent years, the main methods used to study brain activity are: positron emission tomography, magnetoencephalography, functional magnetic resonance imaging, electroencephalography (Electroencephalography, EEG), and the like. Among these, EEG, as a non-invasive technique with high time resolution, is widely used in the field of stereoscopic vision recognition. However, the electroencephalogram signal is very weak, carries redundant information, and is non-stationary and nonlinear.
Therefore, compared with other signals, the analysis and processing techniques applied to EEG signals are critical for obtaining accurate identification and classification results, and EEG research based on depth dynamic random dot stereograms is of great significance.
Conventional machine learning EEG classification methods generally consist of two stages: manual feature extraction and classification. For feature extraction, the main approaches are: extracting time-frequency EEG features with the short-time Fourier transform over non-overlapping Hanning windows; capturing power spectral density as a frequency feature with the Welch method; capturing optimal spatial features of the EEG with the filter bank common spatial pattern; or constructing a brain feature space from functional connections. After feature extraction, the obtained EEG features are sent to a linear discriminant analysis, random forest, or support vector machine classifier for classification.
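The Welch-based frequency features mentioned above can be sketched as follows. The sampling rate, window length, and array shapes here are illustrative assumptions, not values taken from the patent:

```python
import numpy as np
from scipy.signal import welch

def psd_features(eeg, fs=250, nperseg=128):
    """Per-channel power spectral density features via the Welch method.

    eeg: array of shape (n_channels, n_samples). fs and nperseg are
    illustrative assumptions, not parameters from the patent.
    """
    freqs, pxx = welch(eeg, fs=fs, nperseg=nperseg, axis=-1)
    return freqs, pxx  # pxx has shape (n_channels, nperseg // 2 + 1)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((60, 1000))  # 60 electrodes, 4 s at 250 Hz
freqs, pxx = psd_features(eeg)
print(pxx.shape)  # (60, 65)
```

In a traditional pipeline these per-channel PSD vectors would then be flattened and fed to an LDA, random forest, or SVM classifier.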
In recent years, deep learning has shown great potential in fields such as natural language processing, computer vision, and speech recognition, because deeper intrinsic feature representations can be obtained automatically from raw data. Deep learning has also been applied to EEG classification tasks with significant performance improvements: deep-learning-based methods can merge the traditional feature extraction and classification stages, automatically performing EEG classification in a data-driven manner through a neural network, and offer great advantages in both the generality and the accuracy of signal identification. Considering that EEG signals are continuous sequences in the time dimension, that electrodes are distributed at different spatial positions, and that extracting high-quality spatio-temporal features is the basis for correctly representing EEG signals, the invention provides a DRDS EEG signal identification method based on Transformer brain region time sequence analysis.
Disclosure of Invention
1. Technical problem to be solved by the invention
The invention aims to provide a deep learning algorithm that can more accurately identify DRDS electroencephalogram signals.
2. Technical solution
In order to achieve the above purpose, the present invention provides the following technical solutions:
A DRDS brain electrical signal identification method based on Transformer brain region time sequence analysis comprises the following steps:
S1, preprocessing EEG data of a subject, and slicing the preprocessed EEG data to be used as an input sample of a network;
S2, extracting features of the EEG signals by using time convolution and dimension transformation, and sending the extracted features to a brain region Transformer module containing a Transformer Encoder structure to extract spatial features;
S3, transposing the spatial features and sending them into a time sequence Transformer module containing a Transformer Encoder structure to extract global self-attention features and time sequence features;
S4, constructing a space-time multi-scale convolution fusion module comprising three space multi-scale convolution layers and three time multi-scale convolution layers to obtain advanced EEG space-time characteristics and finish classification of EEG signals.
Preferably, the S1 specifically includes the following:
Preprocessing first performs downsampling, filtering, baseline correction, and artifact removal on the acquired EEG signal; then, to expand the dataset, the signal is sliced with overlap, converting it in the time dimension into a series of 1 s samples, yielding 6912 samples per subject after slicing.
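The overlapped slicing described above can be sketched as follows. The sampling rate and the 50% overlap are assumptions for illustration; the text only specifies 1 s slices with overlap:

```python
import numpy as np

def slice_eeg(eeg, fs, win_s=1.0, overlap=0.5):
    """Cut a continuous recording of shape (n_channels, n_samples) into
    overlapping 1 s windows. fs and the 50% overlap are illustrative
    assumptions; the patent only states that overlapped 1 s slices are
    produced (6912 per subject)."""
    win = int(win_s * fs)                    # samples per window
    step = int(win * (1.0 - overlap))        # hop between window starts
    starts = range(0, eeg.shape[1] - win + 1, step)
    return np.stack([eeg[:, s:s + win] for s in starts])  # (n_slices, n_ch, win)

eeg = np.zeros((60, 1280))        # 60 electrodes, 10 s at an assumed 128 Hz
samples = slice_eeg(eeg, fs=128)
print(samples.shape)              # (19, 60, 128)
```

Each slice then becomes one input sample of the network.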
Preferably, the time convolution size mentioned in S2 is 1×7, and the step size is 1×2;
The brain region Transformer module is realized on the basis of dividing the EEG electrodes into brain regions; its workflow is specifically as follows:
S2.1, the brain region Transformer module divides the features into 6 different brain region sets from the perspective of EEG brain regions;
S2.2, sending the characteristics of each brain region into a corresponding brain region Transformer Encoder module, and extracting global dependence information between electrodes in each brain region;
S2.3, splicing global information extracted from 6 brain regions to obtain a set of electroencephalogram information among all electrodes of the whole brain region;
And S2.4, sending signals of all the electrodes to a brain region Transformer Encoder module, and extracting global importance information of each electrode in the whole brain region.
Through the above operations, the brain region Transformer module extracts the global importance information within each brain region and over the whole brain region.
Preferably, the S3 specifically includes the following:
The time sequence Transformer module is based on the time sequence Encoder module and adopts a Transformer Encoder structure to extract global self-attention features among P time slice sequences of length 1×N, thereby extracting time sequence features; the time sequence Encoder module consists essentially of the Transformer Encoder structure.
Preferably, the S4 specifically includes the following:
the spatio-temporal multi-scale convolution fusion module performs higher-level feature extraction on the information obtained by the Transformer Encoder structure, specifically comprising the following steps:
s4.1, sending the features extracted by the time sequence Transformer module into three spatial multi-scale convolution layers, and extracting EEG deep spatial information from local and global perspectives;
s4.2, splicing the three-scale information to form advanced spatial EEG characteristics;
S4.3, extracting and fusing advanced time information by using three time convolutions with different scales to obtain final advanced EEG space-time characteristics;
S4.4, to reduce the number of features and accelerate network training, adopting a 1×1 convolution and a 4×4 pooling operation to obtain the final fusion features;
S4.5, sending the fusion characteristics into two fully connected layers and one softmax layer to finish classification of EEG signals.
3. Advantageous effects
The technical scheme provided by the invention has the beneficial effects that:
Through the brain region Transformer module, the time sequence Transformer module, and the space-time multi-scale convolution module, the invention extracts the spatial relations between electrodes within different brain regions and over the whole brain region, and extracts the time sequence information of the EEG. A novel Transformer-based brain region time sequence analysis network is thereby provided for stereogram-evoked EEG classification, realizing accurate identification of electroencephalogram signals induced by stereograms. These characteristics make the brain region time sequence analysis network usable in technical practice and provide a conditional basis and theoretical support for stereogram identification and stereovision-related research, so that stereovision can be better applied in fields such as aviation and remote sensing measurement, industrial automation systems, and medicine.
Drawings
FIG. 1 is a block diagram of the brain region time sequence analysis network of the DRDS brain signal identification method based on Transformer brain region time sequence analysis;
FIG. 2 is a schematic diagram of a brain region Encoder according to example 2 of the present invention;
FIG. 3 is a schematic diagram of a timing Encoder module mentioned in embodiment 2 of the present invention;
fig. 4 is a structural division diagram of brain regions mentioned in example 2 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in further detail below.
Example 1:
referring to fig. 1, an embodiment of the present invention provides a DRDS electroencephalogram signal identification method based on Transformer brain region time sequence analysis; as shown in fig. 1, the method includes the following steps:
101: pretreatment of
The acquired EEG signals are preprocessed, including downsampling, filtering, baseline processing, artifact removal, slicing, and the like.
102: Extracting spatial features of EEG signals
The invention extracts local and global information of different brain regions through the brain region Transformer module and extracts the spatial features of the EEG signals.
103: Extracting timing features of EEG signals
The invention captures the time sequence information of the EEG signal through a time sequence Transformer module, exploiting the Transformer's ability to learn long-range dependencies, and extracts the time sequence features of the EEG signal.
104: Extracting advanced EEG spatiotemporal features
The invention carries out higher-level feature extraction on the EEG signal through a space-time multi-scale convolution fusion module: first, EEG deep spatial information is extracted from local and global perspectives through three spatial multi-scale convolution layers; then advanced temporal information is extracted and fused through three time convolutions of different scales to obtain the final advanced EEG space-time features, which are sent into two fully connected layers and one Softmax layer to realize three-class classification of EEG signals.
Example 2
Referring to figs. 2-4, the scheme of embodiment 1 is further described below with specific calculation formulas and examples:
201: pretreatment of
Preprocessing first performs downsampling, filtering, baseline correction, and artifact removal on the acquired EEG signal; then, to expand the dataset, the signal is sliced with overlap and converted in the time dimension into a series of 1 s samples, yielding 6912 samples per subject after slicing. The preprocessed EEG signal is used as the input to the network.
202: Extracting spatial features of EEG signals
The present invention divides the brain into 6 regions according to the structure of the human brain, as shown in fig. 4, then sends the features of each brain region to the corresponding brain region Transformer Encoder module and extracts global information between the electrodes in each brain region. After the information of each brain region is extracted, the information from the 6 brain regions is concatenated, at which point the set of electroencephalogram information across all electrodes of the whole brain region is obtained. Then the signals of all electrodes are sent to a brain region Transformer Encoder module, and the global importance information of each electrode over the whole brain region is extracted. Through these operations, the brain region Transformer module extracts the global importance information within each brain region and over the whole brain region. The main flow of the implementation is as follows.
Assume the input EEG signal is X = [x_1, …, x_N] ∈ R^{N×L}, x_i ∈ R^L, where N is the number of electrodes and L is the raw signal length. First, without damaging the spatio-temporal information of the signal, a time convolution of size 1×7 with stride 1×2, followed by a dimension transformation, converts the signal into the feature X_i with time sequence length P. The calculation formula is as follows:
X_i = reshape(conv_{1×7}(X)), X_i ∈ R^{N×P}    (1)
Then the signals of the N electrodes are divided into 6 brain regions and sent to the brain region Encoder modules. As shown in fig. 2, the brain region Encoder mainly adopts the Transformer Encoder structure; within this module, feature extraction inside each brain region does not change the feature dimensions, and the calculation for the i-th brain region is:
Z_i = TransformerEncoder(X_i), i = 1, 2, …, 6    (2)
Then the features of all brain regions are concatenated and sent to the Encoder module of the whole brain region to obtain the output feature Z of the brain region Transformer module. The calculation formula is as follows:
Z = TransformerEncoder(concat{Z_1, …, Z_6}), Z ∈ R^{N×P}    (3)
Through the brain region Transformer module, the network extracts spatial information within each brain region and over the whole brain region; the feature map extracted by this module is then sent to the time sequence Transformer module to extract EEG time sequence information.
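A minimal numerical sketch of the per-region then whole-brain encoding order described above. A single-head scaled dot-product attention stands in for the full Transformer Encoder (which additionally has learned projections, a feed-forward network, residuals, and layer norm), and the even 6-way electrode split is hypothetical; the patent's actual division is given in its fig. 4:

```python
import numpy as np

def self_attention(x):
    """Single-head scaled dot-product self-attention with identity
    projections -- a minimal stand-in for a Transformer Encoder layer."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)             # softmax over electrodes
    return w @ x                                   # same shape as x

N, P = 60, 128                                     # electrodes, time length
X = np.random.default_rng(1).standard_normal((N, P))
regions = np.array_split(np.arange(N), 6)          # hypothetical region split

Z_parts = [self_attention(X[idx]) for idx in regions]  # per-region encoders (2)
Z = self_attention(np.concatenate(Z_parts, axis=0))    # whole-brain encoder (3)
print(Z.shape)  # (60, 128)
```

The point of the sketch is the dataflow: attention runs first within each region's electrodes, then once more over all 60 electrodes, preserving the N×P feature dimensions throughout.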
203: Extracting timing features of EEG signals
The extracted spatial features are transposed and sent into a time sequence Transformer module containing a Transformer Encoder structure to extract global self-attention features and time sequence features. The time sequence Transformer module is based on the time sequence Encoder module and adopts a Transformer Encoder structure to extract global self-attention features among P time slice sequences of length 1×N, thereby extracting time sequence features. The time sequence Encoder module consists essentially of the Transformer Encoder structure, as shown in fig. 3.
After passing through the time sequence Transformer module, to ensure that the features in the network can participate in subsequent convolutional feature learning, dimension expansion and dimension transformation are performed on the feature map S to form the three-dimensional feature S*, implemented as follows:
S* = reshape(permute(S)), S* ∈ R^{1×N×P}    (4)
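The transpose-then-restore step in (4) can be illustrated with array shapes alone; N = 60 and P = 128 are example values consistent with the dimensions used below:

```python
import numpy as np

# Spatial output Z has shape (N, P): N electrodes by P time steps. The
# temporal module attends over the P time slices, so Z is first transposed
# to (P, N); afterwards the result is permuted back and expanded to the
# three-dimensional feature S* in R^{1 x N x P} for the convolution layers.
N, P = 60, 128
Z = np.zeros((N, P))
S = Z.T                                   # (P, N): P slices of length 1 x N
S_star = S.transpose(1, 0)[np.newaxis]    # reshape(permute(S)) from (4)
print(S_star.shape)  # (1, 60, 128)
```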
204: extracting advanced EEG spatiotemporal features
The method performs higher-level feature extraction on the features extracted by the Transformer Encoder structure through the space-time multi-scale convolution fusion module. To further extract EEG features, the feature S* is sent into three spatial multi-scale convolution layers, EEG deep spatial information is extracted from local and global perspectives, and the three-scale information is then concatenated to form the advanced spatial EEG features. The main implementation is as follows.
S_1 = conv_{30×1}(S*), S_1 ∈ R^{32×16×128}    (5)
S_2 = conv_{15×1}(S*), S_2 ∈ R^{32×16×128}    (6)
S_3 = conv_{3×1}(S*), S_3 ∈ R^{32×28×128}    (7)
T = concat{S_1, S_2, S_3}, T ∈ R^{32×60×128}    (8)
And then three different-scale time convolutions are used for extracting and fusing the advanced time information to obtain the final advanced EEG space-time characteristics, and the main implementation formula is as follows.
T_1 = conv_{1×7}(T), T_1 ∈ R^{32×60×61}    (9)
T_2 = conv_{1×31}(T), T_2 ∈ R^{32×60×49}    (10)
T_3 = conv_{1×127}(T), T_3 ∈ R^{32×60×18}    (11)
F = concat{T_1, T_2, T_3}, F ∈ R^{32×60×128}    (12)
Finally, in order to reduce the number of features and speed up network training, a final fusion feature F * is obtained by adopting 1×1 convolution and 4×4 pooling operation, and the calculation formula is as follows:
F* = avg_pool_{4×4}(conv_{1×1}(F)), F* ∈ R^{16×15×32}    (13)
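Formula (13)'s shape bookkeeping can be checked with a toy implementation; the weight values are random placeholders, and the non-overlapping pooling is an assumption (the patent does not state the pooling stride):

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution = per-position channel mixing.
    x: (C_in, H, W); w: (C_out, C_in)."""
    return np.tensordot(w, x, axes=([1], [0]))          # (C_out, H, W)

def avg_pool(x, k):
    """Non-overlapping k x k average pooling over the last two axes."""
    c, h, w = x.shape
    return x.reshape(c, h // k, k, w // k, k).mean(axis=(2, 4))

# Shapes follow formula (13): F in R^{32x60x128} -> F* in R^{16x15x32}.
F = np.random.default_rng(2).standard_normal((32, 60, 128))
W = np.random.default_rng(3).standard_normal((16, 32))  # 32 -> 16 channels
F_star = avg_pool(conv1x1(F, W), 4)
print(F_star.shape)  # (16, 15, 32)
```

The 1×1 convolution halves the channel count (32 → 16) while the 4×4 pooling divides each spatial axis by 4 (60 → 15, 128 → 32), matching F* ∈ R^{16×15×32}.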
205: application of technology
The embodiment of the invention realizes the identification of the DRDS electroencephalogram signals and obtains good classification effect. Stereoscopic vision is an important visual function physiological index, is a key of good motion control and accurate stereoscopic cognition, and plays an important role in the field of medical research. The DRDS electroencephalogram signal research provides a condition basis and theoretical support for stereogram recognition and stereovision related research, and lays a solid foundation for the application of stereovision in the medical field.
Example 3
Building on embodiments 1 and 2, the schemes in embodiments 1 and 2 are validated below with specific experiments:
The present invention uses two data sets, a stereogram identification data set a (Stereogram Recognition Dataset A, SRDA) and stereogram identification data set B (Stereogram Recognition Dataset B, SRDB), to test performance, the basic format of the data sets being shown in table 1.
Table 1 database format description
In the model proposed by the present invention, the cross-entropy loss L is used to evaluate the inconsistency between the true labels y_i and the predicted labels ŷ_i. The model loss function is shown in (14):
L = -(1/M) Σ_{i=1}^{M} y_i log(ŷ_i) + λ‖Θ‖_1    (14)
In the above equation, M is the number of samples, Θ is the set of all trainable parameters in the network, ‖·‖_1 represents the l_1 norm of a vector, and λ is a constant parameter. In the loss function, the regularization term λ‖Θ‖_1 is used to prevent overfitting. During training, an SGD optimizer performs model back propagation and optimization, with the learning rate set to 0.01, the batch size to 32, and λ to 0.00001. The best experimental result is saved after the 100 iterations of each training run. Finally, five-fold cross-validation is adopted to comprehensively evaluate the performance of the model.
The invention adopts three indexes to evaluate the performance of each electroencephalogram classification model on the EEG datasets SRDA and SRDB: Accuracy, F1-score, and Kappa. In the classification problem, TP (True Positive) is the number of positive samples correctly identified as positive, FP (False Positive) the number of negative samples incorrectly identified as positive, TN (True Negative) the number of negative samples correctly identified as negative, and FN (False Negative) the number of positive samples incorrectly identified as negative. The evaluation indexes can be calculated by the following formulas:
Accuracy = (TP + TN) / (TP + FP + TN + FN)
F1-score = 2 × Precision × Recall / (Precision + Recall), where Precision = TP / (TP + FP) and Recall = TP / (TP + FN)
Kappa = (p_o − p_e) / (1 − p_e)
In the index calculation formulas, p_o represents the observed classification accuracy and p_e the expected accuracy of random classification; for example, p_e is about 33.33% for the three-class problem.
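The three indexes can be computed from confusion counts as follows; the counts here are invented example values, not results from the experiments:

```python
def metrics(tp, fp, tn, fn, p_e=1 / 3):
    """Accuracy, F1-score, and Cohen's kappa from binary confusion counts.
    For the three-class task the chance level p_e is about 1/3."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    kappa = (acc - p_e) / (1 - p_e)    # p_o is the observed accuracy
    return acc, f1, kappa

acc, f1, kappa = metrics(tp=90, fp=5, tn=95, fn=10)
print(acc, round(f1, 3))  # 0.925 0.923
```

Kappa rescales the observed accuracy so that chance-level performance maps to 0 and perfect performance to 1.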
Meanwhile, to evaluate the performance of the proposed TER-TSAN model, two deep learning electroencephalogram identification methods, AttnSleep and TS-SEFFNet, are compared against it on the two electroencephalogram datasets SRDA and SRDB to verify the superior performance of the model. The overall performance results are shown in table 2.
Table 2 results of model overall performance comparisons
As can be seen from table 2, on the SRDA dataset the TER-TSAN model achieved 96.84% classification accuracy, exceeding all comparative methods, which suggests that TER-TSAN can extract more significant features. The main reason is that TER-TSAN can fully associate the spatial relations within brain regions and over the whole brain region while capturing the time sequence information of the EEG signal, and extract advanced multi-receptive-field space-time fusion features. In addition, the proposed model also achieved the highest performance on the other two indexes, with an F1-score of 0.961 and a Kappa value of 0.953, fully demonstrating the excellent classification effect of TER-TSAN.
Meanwhile, the performance of the model is further verified on the SRDB dataset. As shown in table 2, the proposed TER-TSAN algorithm achieves 96.33% classification accuracy, a 0.957 F1-score, and a 0.945 Kappa value, still superior to all comparison algorithms. Overall, the experimental results prove that the proposed TER-TSAN can fully extract the space-time features within brain regions and shows excellent stereogram recognition and classification performance.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise forms disclosed; any modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.
Claims (2)
1. The DRDS electroencephalogram signal identification method based on the Transformer brain region time sequence analysis is characterized by comprising the following steps of:
S1, preprocessing EEG data of a subject, and slicing the preprocessed EEG data to be used as an input sample of a network;
S2, extracting features of the EEG signals by using time convolution and dimension transformation, and sending the extracted features to a brain region Transformer module containing a Transformer Encoder structure to extract spatial features;
the workflow of the brain region Transformer module is specifically as follows:
S2.1, the brain region Transformer module divides the features into 6 different brain region sets from the perspective of EEG brain regions;
S2.2, sending the characteristics of each brain region into a corresponding brain region Transformer Encoder module, and extracting global dependence information between electrodes in each brain region;
S2.3, splicing global information extracted from 6 brain regions to obtain a set of electroencephalogram information among all electrodes of the whole brain region;
s2.4, sending signals of all electrodes to a brain region Transformer Encoder module, and extracting global importance information of each electrode in the whole brain region;
S3, transposing the spatial features and sending them into a time sequence Transformer module containing a Transformer Encoder structure to extract global self-attention features and time sequence features; specifically comprising the following:
the time sequence Transformer module is based on the time sequence Encoder module and adopts a Transformer Encoder structure to extract global self-attention features among P time slice sequences of length 1×N, thereby extracting time sequence features;
after passing through the time sequence Transformer module, the feature map S undergoes dimension expansion and dimension transformation to become the three-dimensional feature S*; the specific implementation formula is as follows:
S* = reshape(permute(S)), S* ∈ R^{1×N×P}
S4, constructing a space-time multi-scale convolution fusion module comprising three space multi-scale convolution layers and three time multi-scale convolution layers to obtain advanced EEG space-time characteristics and finish classification of EEG signals; the method specifically comprises the following steps:
the space-time multi-scale convolution fusion module carries out higher-level feature extraction on the information obtained by the Transformer encoder structure, and specifically comprises the following steps:
s4.1, sending the features extracted by the time sequence Transformer module into three spatial multi-scale convolution layers, and extracting EEG deep spatial information from local and global perspectives;
s4.2, concatenating the three-scale information to form the advanced spatial EEG features; the specific implementation formulas are as follows:
S_1 = conv_{30×1}(S*), S_1 ∈ R^{32×16×128}
S_2 = conv_{15×1}(S*), S_2 ∈ R^{32×16×128}
S_3 = conv_{3×1}(S*), S_3 ∈ R^{32×28×128}
T = concat{S_1, S_2, S_3}, T ∈ R^{32×60×128}
s4.3, extracting and fusing advanced temporal information with three time convolutions of different scales to obtain the final advanced EEG space-time features; the specific implementation formulas are as follows:
T_1 = conv_{1×7}(T), T_1 ∈ R^{32×60×61}
T_2 = conv_{1×31}(T), T_2 ∈ R^{32×60×49}
T_3 = conv_{1×127}(T), T_3 ∈ R^{32×60×18}
F = concat{T_1, T_2, T_3}, F ∈ R^{32×60×128}
S4.4, to reduce the number of features and accelerate network training, adopting a 1×1 convolution and a 4×4 pooling operation to obtain the final fusion features; the specific implementation formula is as follows:
F* = avg_pool_{4×4}(conv_{1×1}(F)), F* ∈ R^{16×15×32}
S4.5, sending the fusion characteristics into two fully connected layers and one softmax layer to finish classification of EEG signals.
2. The method for identifying DRDS brain electrical signals based on Transformer brain region time series analysis according to claim 1, wherein the signal slice mentioned in S1 specifically comprises:
the signals are overlapped and sliced along the time dimension into a series of 1 s samples; after slicing, each subject yields 6912 samples.
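The overlap-and-slice operation can be sketched as a sliding window over the recording. The sampling rate, overlap, and recording length that produce 6912 samples per subject are not given in the excerpt, so all sizes below are hypothetical:

```python
import numpy as np

fs = 1000                            # hypothetical sampling rate (Hz): 1 s = fs points
step = 250                           # hypothetical hop, i.e. 75% overlap between slices
eeg = np.random.randn(16, 30 * fs)   # hypothetical 16-channel, 30 s recording

# overlap-and-slice the recording into 1 s samples along the time axis
starts = range(0, eeg.shape[1] - fs + 1, step)
samples = np.stack([eeg[:, s:s + fs] for s in starts])
assert samples.shape == (len(starts), 16, fs)
```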
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211588317.2A CN115844425B (en) | 2022-12-12 | 2022-12-12 | DRDS brain electrical signal identification method based on Transformer brain region time sequence analysis |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115844425A (en) | 2023-03-28
CN115844425B (en) | 2024-05-17
Family
ID=85672032
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211588317.2A Active CN115844425B (en) | 2022-12-12 | 2022-12-12 | DRDS brain electrical signal identification method based on Transformer brain region time sequence analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115844425B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109259759A (en) * | 2018-08-19 | 2019-01-25 | 天津大学 | EEG-based evaluation method for the influence of horizontal motion on stereoscopic visual fatigue |
CN112215057A (en) * | 2020-08-24 | 2021-01-12 | 天津大学 | Electroencephalogram signal classification method based on three-dimensional depth motion |
CN112381008A (en) * | 2020-11-17 | 2021-02-19 | 天津大学 | Electroencephalogram emotion recognition method based on parallel sequence channel mapping network |
WO2021143403A1 (en) * | 2020-01-17 | 2021-07-22 | 上海优加利健康管理有限公司 | Processing method and apparatus for generating heartbeat tag sequence using heartbeat time sequence |
CN113907706A (en) * | 2021-08-29 | 2022-01-11 | 北京工业大学 | Electroencephalogram seizure prediction method based on multi-scale convolution and self-attention network |
CN114176607A (en) * | 2021-12-27 | 2022-03-15 | 杭州电子科技大学 | Electroencephalogram signal classification method based on visual Transformer |
CN114298216A (en) * | 2021-12-27 | 2022-04-08 | 杭州电子科技大学 | Electroencephalogram vision classification method based on time-frequency domain fusion Transformer |
CN114398991A (en) * | 2022-01-17 | 2022-04-26 | 合肥工业大学 | Electroencephalogram emotion recognition method based on Transformer structure search |
CN115222998A (en) * | 2022-09-15 | 2022-10-21 | 杭州电子科技大学 | Image classification method |
WO2022250408A1 (en) * | 2021-05-25 | 2022-12-01 | Samsung Electronics Co., Ltd. | Method and apparatus for video recognition |
CN115444419A (en) * | 2022-08-29 | 2022-12-09 | 南京邮电大学 | Domain-adaptive intelligent emotion recognition method and device based on electroencephalogram signals |
Non-Patent Citations (3)
Title |
---|
A Multi-scale Deformable Convolution Network Model for Text Recognition; Cheng Lang, et al; Thirteenth International Conference on Graphics and Image Processing; 2021-08-20; full text *
EEG based dynamic RDS recognition with frequency domain selection and bispectrum feature optimization; Shen Lili, et al; Journal of Neuroscience Methods; 2020-04-28; Vol. 337; full text *
Research on EEG signals of stereoscopic depth motion perception; Shen Lili, Geng Xiaoquan; Journal of University of Electronic Science and Technology of China; 2020-08-31; Vol. 49 (No. 04); full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112120694B (en) | Motor imagery electroencephalogram signal classification method based on neural network | |
CN109522894B (en) | Method for detecting dynamic covariation of fMRI brain network | |
CN111523601B (en) | Potential emotion recognition method based on knowledge guidance and generation of countermeasure learning | |
CN112381008B (en) | Electroencephalogram emotion recognition method based on parallel sequence channel mapping network | |
CN111202517B (en) | Sleep automatic staging method, system, medium and electronic equipment | |
CN113951900B (en) | Motor imagery intention recognition method based on multi-mode signals | |
CN112754431A (en) | Respiration and heartbeat monitoring system based on millimeter wave radar and lightweight neural network | |
CN111184509A (en) | Emotion-induced electroencephalogram signal classification method based on transfer entropy | |
CN111407243A (en) | Pulse signal pressure identification method based on deep learning | |
CN113180659B (en) | Electroencephalogram emotion recognition method based on three-dimensional feature and cavity full convolution network | |
CN112353397A (en) | Electrocardiogram signal identity recognition method | |
CN108470182B (en) | Brain-computer interface method for enhancing and identifying asymmetric electroencephalogram characteristics | |
CN113017627A (en) | Depression and bipolar disorder brain network analysis method based on two-channel phase synchronization feature fusion | |
CN116881762A (en) | Emotion recognition method based on dynamic brain network characteristics | |
Farokhah et al. | Simplified 2D CNN architecture with channel selection for emotion recognition using EEG spectrogram | |
CN115844425B (en) | DRDS brain electrical signal identification method based on Transformer brain region time sequence analysis | |
CN113974627A (en) | Emotion recognition method based on brain-computer generated confrontation | |
CN115414050A (en) | EEG brain network maximum clique detection method and system for realizing emotion recognition | |
CN117113015A (en) | Electroencephalogram signal identification method and device based on space-time deep learning | |
CN117407748A (en) | Electroencephalogram emotion recognition method based on graph convolution and attention fusion | |
CN115844424B (en) | Sleep spindle wave hierarchical identification method and system | |
Yun-Mei et al. | The abnormal detection of electroencephalogram with three-dimensional deep convolutional neural networks | |
Akrout et al. | Artificial and convolutional neural network of EEG-based motor imagery classification: A comparative study | |
CN115813409A (en) | Ultra-low-delay moving image electroencephalogram decoding method | |
CN112842342B (en) | Electrocardiogram and magnetic signal classification method combining Hilbert curve and integrated learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||