CN113191395B - Target detection method based on multi-level information fusion of double brains - Google Patents
- Publication number: CN113191395B
- Application number: CN202110373684.XA
- Authority
- CN
- China
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
Abstract
The invention discloses a target detection method based on multi-level information fusion of double brains. The method comprises four steps: data acquisition, data processing, model training and classification. It proposes a dual-subject, same-task RSVP paradigm in which P300 detection is performed on the EEG signals of two subjects, and, for this paradigm, a HyperscanNet neural network model based on multi-level information fusion for target detection. In the data-processing stage, feature-layer fusion and data-layer fusion are carried out by a super-scanning feature module and an original feature module respectively; the deep features extracted by the super-scanning feature module and the shallow features extracted by the original feature module are then input into a support vector machine to train the HyperscanNet model, after which the EEG data are classified for detection. The method realizes an innovation in both the paradigm and the detection method.
Description
Technical Field
The invention belongs to the field of brain-computer cooperative target detection, and relates to a target detection method based on multi-level information fusion of double brains.
Background
Brain-computer interface (BCI) research extends from the first discovery of electrical brain signals to the 1973 project at the University of California led by Jacques Vidal, who first defined the BCI system; a BCI based on EEG analyzes brain intent so that the brain can interact directly with the external environment. Image retrieval based on the RSVP paradigm is widely applied and is mainly accomplished through detection of ERPs. An ERP is an evoked potential in the EEG, comprising components such as P300, N170 and N200, which can be evoked by auditory or visual stimuli. Detecting the presence of an ERP among many EEG recordings with low signal-to-noise ratio is a difficult problem. EEG background noise is non-stationary and nonlinear, and the generated ERP waveform is correlated with the experimental paradigm; even changing specific parameters of the paradigm can affect the amplitude and latency of the ERP. Notably, in the classical RSVP paradigm ERP detection is single-trial, which is considerably more difficult than multi-trial ERP detection.
To improve the accuracy of target detection in RSVP, research has proceeded along two main lines.
One line seeks better methods to improve single-trial ERP detection accuracy. In 2006, Solis-Escalante et al. proposed a single-trial detection method based on empirical mode decomposition, decomposing the average event response from the P300 training set and achieving an AUC of about 0.55. In 2008, Krusienski et al. used stepwise linear discriminant analysis (SWLDA) to discriminate P300, reaching a character recognition rate of about 35%. In 2009, Bertrand Rivet et al. proposed the xDAWN algorithm, which improves the signal-to-noise ratio of EEG data by constructing a spatial filter and achieved 80% classification accuracy on speller characters. In 2011, Lucie Daubigney et al. filtered EEG data with a Kalman filter and fed it into a support vector machine (SVM) for classification, with P300 prediction accuracy exceeding 50%. In 2018, Lawhern et al. proposed EEGNet, a compact convolutional neural network architecture; trained on 1500 samples, it classified single-trial P300 with AUC values exceeding 0.9. In addition, many researchers have proposed methods such as Bayesian linear discriminant analysis (BLDA), genetic algorithms (GA) and recurrent neural networks (RNN) for single-trial classification.
The other line improves the experimental paradigm so that the same target appears multiple times, enabling target detection with multi-trial ERPs. In 2015, Cecotti et al. proposed a dual-RSVP paradigm that gave good results on magnetoencephalography (MEG) data. In 2016, Zhimin Lin et al. examined the dual-RSVP paradigm on EEG data, confirming that it is equally viable there, and further tried triple-RSVP, achieving better results than dual-RSVP. Most experimental paradigms involve a single subject; a few use two subjects, but performing different tasks.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a target detection method based on multi-level information fusion of double brains: it proposes a dual-subject, same-task RSVP paradigm in which the EEG signals of two subjects are used to detect P300, and, for this paradigm, a neural network based on multi-level information fusion for target detection, realizing an innovation in both the paradigm and the detection method.
A target detection method based on multi-level information fusion of double brains specifically comprises the following steps:
step one, data acquisition
EEG data are collected from two subjects receiving the same stimulus and used as target detection data.
Step two, data processing
A HyperscanNet neural network model is established, comprising a super-scanning feature module, an original feature module and a support vector machine. The target detection data acquired in step one are processed simultaneously by the super-scanning feature module and the original feature module of the HyperscanNet model, and the resulting features to be classified are input into the support vector machine.
Step 2.1, after filtering and channel selection, the target detection data input to the super-scanning feature module are segmented along the time dimension.
Let X_1, X_2 ∈ R^(C×T) be the two subjects' EEG data after filtering and channel selection, where C is the number of channels and T is the time length. With k the number of segments, X_1^i, X_2^i ∈ R^(C×(T/k)), i = 1, 2, ..., k, denote the i-th segment of each subject's data after cutting.
The k segments of X_1 and X_2 are passed through k corresponding LSTM modules for feature extraction, yielding preliminary feature matrices F_1, F_2 ∈ R^(C×T):
F_j = [LSTM_j^1(X_j^1), LSTM_j^2(X_j^2), ..., LSTM_j^k(X_j^k)], j = 1, 2,
where LSTM_j^i is the i-th LSTM module applied to the corresponding segment of X_j.
The preliminary feature matrices F_1 and F_2 are then passed through a one-dimensional convolution that fuses the channel dimension from C down to 1, realizing spatial fusion and giving the single-brain features Brain_1, Brain_2 ∈ R^(1×T). The two single-brain features are then spatially fused in the same way to obtain the fused feature mBrain ∈ R^(1×T):
mBrain = Conv1D([Brain_1; Brain_2]).
Finally, the two single-brain features and the fused feature are tiled together through a cross-layer connection as the final feature F_out output by the super-scanning feature module:
F_out = [Brain_1, Brain_2, mBrain].
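As a rough illustration of the segmentation in step 2.1, the sketch below splits a C×T matrix into k time segments and applies a per-segment feature extractor before re-concatenating the results into the C×T preliminary feature matrix. A plain per-channel running mean stands in for the LSTM modules (the patent does not specify their hyperparameters), and all names and shapes are illustrative:

```python
# Sketch of the time segmentation in step 2.1 (illustrative only).
# X is a C x T matrix stored as a list of channel rows; k segments of length T // k.

def segment_time(X, k):
    """Split each channel row into k equal time segments: returns k blocks of shape C x (T/k)."""
    T = len(X[0])
    seg = T // k
    return [[row[i * seg:(i + 1) * seg] for row in X] for i in range(k)]

def toy_extractor(block):
    """Stand-in for one LSTM module: per-channel running mean over the segment."""
    out = []
    for row in block:
        acc, feats = 0.0, []
        for t, v in enumerate(row, start=1):
            acc += v
            feats.append(acc / t)
        out.append(feats)
    return out

def preliminary_features(X, k):
    """Extract features per segment, then concatenate back into a C x T matrix (F_1 or F_2)."""
    blocks = [toy_extractor(b) for b in segment_time(X, k)]
    C = len(X)
    return [sum((blocks[i][c] for i in range(k)), []) for c in range(C)]

X1 = [[1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0]]  # C=2, T=4
F1 = preliminary_features(X1, k=2)                 # two segments processed independently
```

Because each segment is processed by its own extractor, the running mean resets at the segment boundary, which mirrors the patent's point that time windows are kept mutually independent.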
Step 2.2, the EEG data input to the original feature module are filtered, channel-selected and downsampled, then directly normalized and tiled (flattened) as the module's feature output.
Preferably, the downsampled data rate is 250 Hz.
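A minimal sketch of the original feature module in step 2.2: downsampling 1000 Hz data to 250 Hz, z-score normalization, and tiling the channels into one feature vector. The stride-based decimation (no anti-aliasing) and the z-score are assumptions; the patent only specifies the 250 Hz target rate:

```python
import math

def original_features(X, src_hz=1000, dst_hz=250):
    """Downsample a C x T matrix by stride, z-score normalize, and flatten."""
    stride = src_hz // dst_hz                   # 4 when going 1000 Hz -> 250 Hz
    down = [row[::stride] for row in X]         # naive decimation (assumed; no anti-aliasing)
    flat = [v for row in down for v in row]     # tile all channels into one vector
    mean = sum(flat) / len(flat)
    std = math.sqrt(sum((v - mean) ** 2 for v in flat) / len(flat))
    return [(v - mean) / std for v in flat] if std else [0.0] * len(flat)

feats = original_features([[float(t) for t in range(8)],
                           [float(7 - t) for t in range(8)]])
```

The output is the "data-layer" representation: almost raw samples, so the original waveform information the super-scanning module abstracts away is retained for the SVM.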
Step three, model training
First, the super-scanning feature module of the HyperscanNet neural network model is trained; then the feature data output by the super-scanning feature module and the original feature module are input into the support vector machine for training.
Step 3.1, the super-scanning feature module is trained by back-propagation through the network using a fully connected layer and a softmax layer. After several training iterations, the fully connected and softmax layers are discarded, and the feature-layer fusion features output by the super-scanning feature module, together with the data-layer fusion data output by the original feature module, are input into the support vector machine.
Step 3.2, upon receiving the data from step 3.1, the support vector machine is trained by solving its quadratic programming problem, which completes the training of the HyperscanNet neural network model.
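Step 3.2 trains the SVM by solving its quadratic program. As a rough stand-in (the patent does not name a solver), the sketch below fits a linear SVM on toy two-dimensional "fused features" by sub-gradient descent on the regularized hinge loss instead of the QP; the data and all parameters are illustrative:

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, seed=0):
    """Primal hinge-loss sub-gradient descent: an approximation of the SVM QP solution."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1:                       # inside the margin: hinge sub-gradient step
                w = [wj + lr * (y[i] * xj - lam * wj) for wj, xj in zip(w, X[i])]
                b += lr * y[i]
            else:                                # correct side: regularization shrink only
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy separable "fused features": targets near (2, 2), non-targets near (-2, -2).
X = [[2.0, 2.2], [1.8, 2.5], [2.4, 1.9], [-2.0, -2.1], [-1.7, -2.4], [-2.3, -1.8]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```

In the patent's pipeline the inputs would be the concatenated outputs of the two feature modules rather than these toy points.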
Step four, data classification
The extracted features to be classified are classified using the HyperscanNet neural network model trained in step three, and a detection result is output indicating whether the subjects received a target stimulus.
The invention has the following beneficial effects:
1. The two modules realize dual-brain fusion at both the feature layer and the data layer, so that some original features are retained and classification accuracy is improved.
2. During feature-layer fusion, different LSTM modules extract features from different time windows independently, so that the windows can be processed separately and windows that contribute relatively little to P300 prediction do not interfere with windows that contribute more.
3. In the data acquisition stage, EEG data from two subjects are collected simultaneously, avoiding missed targets caused by interference in single-brain detection.
Drawings
FIG. 1 is a schematic diagram of a classification process;
FIG. 2 is a diagram of a HyperscanNet neural network model.
Detailed Description
The invention is further explained below with reference to the drawings.
a target detection method based on multi-level information fusion of double brains specifically comprises the following steps:
step one, data acquisition
Fig. 1 is a schematic diagram of the classification process of this embodiment. Two subjects simultaneously received visual stimuli from the same screen, and EEG data from 64 electrodes placed according to the international 10-20 system were recorded by a Neuroscan SynAmps system at a sampling rate of 1000 Hz. Eight subjects were divided into four groups and four experiments were performed. Each experiment had 20 sessions; each session contained 50 stimulus presentations; each presentation lasted 200 ms and was followed by a 600 ms rest, preventing the attentional-blink phenomenon from interfering with the formation of P300. Of the 20 × 50 = 1000 stimulus presentations, 100 contained targets and 900 did not.
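The acquisition parameters above (1000 Hz sampling, stimulus onsets known from the presentation schedule) imply that each trial can be cut from the continuous recording as a fixed-length epoch. A minimal sketch, where the 800 ms window (200 ms stimulus plus 600 ms rest) and the synthetic onsets are assumptions for illustration:

```python
SAMPLING_HZ = 1000
EPOCH_MS = 800   # assumed per-stimulus analysis window: 200 ms stimulus + 600 ms rest

def extract_epochs(recording, onsets_ms, epoch_ms=EPOCH_MS, hz=SAMPLING_HZ):
    """Slice one channel of a continuous recording into per-stimulus epochs."""
    n = epoch_ms * hz // 1000
    epochs = []
    for onset in onsets_ms:
        start = onset * hz // 1000
        if start + n <= len(recording):   # drop epochs that run off the recording
            epochs.append(recording[start:start + n])
    return epochs

recording = [0.0] * 5000                  # 5 s of one synthetic channel at 1000 Hz
onsets = [0, 800, 1600, 2400, 3200]       # one stimulus every 800 ms
epochs = extract_epochs(recording, onsets)
```

Each resulting epoch would then be labeled target or non-target according to the presented image before entering the two feature modules.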
Step two, data processing
And establishing a HyperscanNet neural network model, wherein the HyperscanNet neural network model comprises a super scanning characteristic module, an original characteristic module and a support vector machine.
The target detection data acquired in step one are processed by the super-scanning feature module and the original feature module of the HyperscanNet neural network model respectively, as shown in Fig. 2.
Step 2.1, after filtering and channel selection, the target detection data input to the super-scanning feature module are segmented along the time dimension.
Let X_1, X_2 ∈ R^(C×T) be the two subjects' EEG data after filtering and channel selection, where C is the number of channels and T is the time length. With k the number of segments, X_1^i, X_2^i ∈ R^(C×(T/k)), i = 1, 2, ..., k, denote the i-th segment of each subject's data after cutting.
The k segments of X_1 and X_2 are passed through k corresponding LSTM modules for feature extraction, yielding preliminary feature matrices F_1, F_2 ∈ R^(C×T):
F_j = [LSTM_j^1(X_j^1), LSTM_j^2(X_j^2), ..., LSTM_j^k(X_j^k)], j = 1, 2,
where LSTM_j^i is the i-th LSTM module applied to the corresponding segment of X_j.
Segmenting the data into time windows before feature extraction keeps the information in different windows mutually independent. Applying a separate feature extractor to each window prevents windows that contribute relatively little to P300 prediction from interfering with windows that contribute more and degrading accuracy.
The preliminary feature matrices F_1 and F_2 are then passed through a one-dimensional convolution that fuses the channel dimension from C down to 1, realizing spatial fusion and giving the single-brain features Brain_1, Brain_2 ∈ R^(1×T). The two single-brain features are then spatially fused in the same way to obtain the fused feature mBrain ∈ R^(1×T):
mBrain = Conv1D([Brain_1; Brain_2]).
Finally, the two single-brain features and the fused feature are tiled together through a cross-layer connection as the final feature F_out output by the super-scanning feature module:
F_out = [Brain_1, Brain_2, mBrain].
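The spatial fusion and cross-layer tiling described above can be sketched as follows. The one-dimensional convolution is reduced to a fixed weighted sum over channels (in the model these weights are learned), and all shapes and weights are illustrative:

```python
def spatial_fuse(F, weights):
    """1x1 convolution across channels: weighted sum of the C rows -> one 1 x T feature."""
    T = len(F[0])
    return [sum(w * F[c][t] for c, w in enumerate(weights)) for t in range(T)]

def super_scan_output(F1, F2, w1, w2, wm):
    """Single-brain fusion of each feature matrix, dual-brain fusion, then cross-layer tiling."""
    brain1 = spatial_fuse(F1, w1)                 # Brain_1 in R^(1 x T)
    brain2 = spatial_fuse(F2, w2)                 # Brain_2 in R^(1 x T)
    mbrain = spatial_fuse([brain1, brain2], wm)   # mBrain in R^(1 x T)
    return brain1 + brain2 + mbrain               # F_out: concatenation of all three

F1 = [[1.0, 2.0], [3.0, 4.0]]                     # C=2, T=2 toy preliminary features
F2 = [[2.0, 0.0], [0.0, 2.0]]
out = super_scan_output(F1, F2, w1=[0.5, 0.5], w2=[0.5, 0.5], wm=[1.0, 1.0])
```

The cross-layer connection means the SVM later sees the individual single-brain features alongside the fused dual-brain feature, so the fusion does not discard the per-subject information.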
Step 2.2, the target detection data input to the original feature module are filtered, channel-selected and downsampled to 250 Hz, then directly normalized and tiled as the feature output.
The two subjects' EEG signals thus undergo feature extraction in two modules: on one hand, the super-scanning feature module completes dual-brain fusion at the feature layer; on the other hand, the original feature module completes fusion at the data layer through simple operations such as downsampling and normalization, retaining some original features.
Step three, model training
First, the super-scanning feature module of the HyperscanNet neural network model is trained; then the feature data output by the super-scanning feature module and the original feature module are input into the support vector machine for training.
Step 3.1, the super-scanning feature module is trained by back-propagation through the network using a fully connected layer and a softmax layer. After several training iterations, the fully connected and softmax layers are discarded, and the feature-layer fusion features output by the super-scanning feature module, together with the data-layer fusion data output by the original feature module, are input into the support vector machine.
Step 3.2, upon receiving the data from step 3.1, the support vector machine is trained by solving its quadratic programming problem, which completes the training of the HyperscanNet neural network model.
Step four, data classification
The HyperscanNet neural network model trained in step three classifies the two subjects' extracted EEG features and outputs the result as target or non-target.
In the data set acquired in step one, the ratio of targets to non-targets is 1:9, a severe class imbalance. At this ratio it is difficult to find as many targets as possible while ensuring that predicted targets are true targets. Model evaluation therefore focuses on the recall and precision of the target class. The AUC conveys relatively coarse information from which target recall or precision cannot be obtained directly or indirectly, so AUC is not used as an evaluation index in this invention.
The evaluation index adopted in this example is the F1 score of the detection result:
F1 = 2 · precision · recall / (precision + recall).
The F1 score is the harmonic mean of precision and recall and balances the two well: if either precision or recall is too low, the F1 score drops sharply, thus imposing requirements on both.
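The F1 score above can be computed directly from prediction counts; the counts in the example call are illustrative:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall, computed from raw counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# E.g. 80 of 100 targets found, with 20 false alarms:
score = f1_score(tp=80, fp=20, fn=20)
```

With the 1:9 imbalance above, a classifier that predicts "non-target" everywhere gets tp = 0 and hence F1 = 0, which is exactly why F1 is preferred to accuracy or AUC here.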
The following table compares the detection results of this embodiment with those of other network models:
TABLE 1
In the table, "sub" denotes the subject pair, "model" denotes the model, and the values 1, 0.8 and 0.6 denote the time length (in seconds) of the data segments.
Note that when S3 and S4 were fused, the dual-brain result was not as good as the single-brain one; in the other pairings, however, the dual brain generally outperformed the single brain. As the segment length decreases, the change in dual-brain and single-brain detection ability depends strongly on the subjects: in pairs S1+S2 and S5+S6 the influence of segment length is relatively small, while it is larger in the remaining two pairs, possibly due to differences between the subjects. Notably, in the three better-fusing groups, the dual brain using 0.6 s of data achieved an F1 score comparable to, or even surpassing, that of the single brain using 1 s.
Claims (4)
1. A target detection method based on multi-level information fusion of double brains, characterized by comprising the following steps:
step one, data acquisition
EEG data from two subjects receiving the same stimulus are collected as target detection data;
step two, data processing
processing the target detection data acquired in step one simultaneously through a super-scanning feature module and an original feature module of a HyperscanNet neural network model;
the super-scanning feature module performs filtering and channel selection on the target detection data, then segments the data along the time dimension and extracts features from the segments through a corresponding number of LSTM modules to obtain primary features; the channel number of the primary features is then reduced to 1 to obtain two single-brain features, the two single-brain features are fused, and the two single-brain features and the fused feature are tiled together through a cross-layer connection as the final feature output by the super-scanning feature module;
the original feature module performs filtering, channel selection and downsampling on the target detection data, and directly normalizes and tiles the data as its feature output;
step three, model training and data classification
the specific method for training the model is as follows:
the super-scanning feature module is trained by network back-propagation using a fully connected layer and a softmax layer; after several training iterations, the feature-layer fusion features output by the super-scanning feature module and the data-layer fusion data output by the original feature module are input into a support vector machine; the support vector machine is trained by solving its quadratic programming problem, which completes the training of the HyperscanNet neural network model;
the features extracted by the super-scanning feature module and the original feature module are merged and input into the support vector machine as the features to be classified; the support vector machine outputs the detection result.
2. The target detection method based on multi-level information fusion of double brains according to claim 1, wherein: in the data acquisition of step one, the subjects' EEG data are recorded from 64 electrodes distributed according to the international 10-20 system using a Neuroscan SynAmps system.
3. The target detection method based on multi-level information fusion of double brains according to claim 1, wherein: one-dimensional convolution is used in the super-scanning feature module to realize spatial fusion of the primary features, obtaining the single-brain features.
4. The target detection method based on multi-level information fusion of double brains according to claim 1, wherein: in step two, the data rate after downsampling by the original feature module is 250 Hz.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110373684.XA CN113191395B (en) | 2021-04-07 | 2021-04-07 | Target detection method based on multi-level information fusion of double brains |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110373684.XA CN113191395B (en) | 2021-04-07 | 2021-04-07 | Target detection method based on multi-level information fusion of double brains |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113191395A CN113191395A (en) | 2021-07-30 |
CN113191395B true CN113191395B (en) | 2024-02-09 |
Family
ID=76974923
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110373684.XA Active CN113191395B (en) | 2021-04-07 | 2021-04-07 | Target detection method based on multi-level information fusion of double brains |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113191395B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113425312B (en) * | 2021-07-30 | 2023-03-21 | 清华大学 | Electroencephalogram data processing method and device |
CN114403903A (en) * | 2022-01-14 | 2022-04-29 | 杭州电子科技大学 | Cross-subject RSVP-oriented multi-feature low-dimensional subspace ERP detection method |
CN115337026B (en) * | 2022-10-19 | 2023-03-10 | 之江实验室 | Convolutional neural network-based EEG signal feature retrieval method and device |
CN115421597B (en) * | 2022-11-04 | 2023-01-13 | 清华大学 | Brain-computer interface control method and system based on double-brain coupling characteristics |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108304917A (en) * | 2018-01-17 | 2018-07-20 | 华南理工大学 | A kind of P300 signal detecting methods based on LSTM networks |
CN110222643A (en) * | 2019-06-06 | 2019-09-10 | 西安交通大学 | A kind of Steady State Visual Evoked Potential Modulation recognition method based on convolutional neural networks |
CN112244873A (en) * | 2020-09-29 | 2021-01-22 | 陕西科技大学 | Electroencephalogram time-space feature learning and emotion classification method based on hybrid neural network |
CN112528819A (en) * | 2020-12-05 | 2021-03-19 | 西安电子科技大学 | P300 electroencephalogram signal classification method based on convolutional neural network |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108304917A (en) * | 2018-01-17 | 2018-07-20 | 华南理工大学 | A kind of P300 signal detecting methods based on LSTM networks |
CN110222643A (en) * | 2019-06-06 | 2019-09-10 | 西安交通大学 | A kind of Steady State Visual Evoked Potential Modulation recognition method based on convolutional neural networks |
CN112244873A (en) * | 2020-09-29 | 2021-01-22 | 陕西科技大学 | Electroencephalogram time-space feature learning and emotion classification method based on hybrid neural network |
CN112528819A (en) * | 2020-12-05 | 2021-03-19 | 西安电子科技大学 | P300 electroencephalogram signal classification method based on convolutional neural network |
Also Published As
Publication number | Publication date |
---|---|
CN113191395A (en) | 2021-07-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113191395B (en) | Target detection method based on multi-level information fusion of double brains | |
Esfahani et al. | Classification of primitive shapes using brain–computer interfaces | |
CN114533086B (en) | Motor imagery brain electrolysis code method based on airspace characteristic time-frequency transformation | |
Carlson et al. | An introduction to time-resolved decoding analysis for M/EEG | |
CN109480834A (en) | A kind of Method of EEG signals classification based on quick multiple dimension empirical mode decomposition | |
CN111091074A (en) | Motor imagery electroencephalogram signal classification method based on optimal region common space mode | |
Haider et al. | Performance enhancement in P300 ERP single trial by machine learning adaptive denoising mechanism | |
Caramia et al. | Optimizing spatial filter pairs for EEG classification based on phase-synchronization | |
Mousa et al. | A novel brain computer interface based on principle component analysis | |
Gu et al. | AOAR: an automatic ocular artifact removal approach for multi-channel electroencephalogram data based on non-negative matrix factorization and empirical mode decomposition | |
Qian et al. | Decision-level fusion of EEG and pupil features for single-trial visual detection analysis | |
Ouyang et al. | Handling EEG artifacts and searching individually optimal experimental parameter in real time: a system development and demonstration | |
Haloi et al. | Selection of appropriate statistical features of EEG signals for detection of Parkinson’s disease | |
Shi et al. | Categorizing objects from MEG signals using EEGNet | |
Ahmed et al. | Effective hybrid method for the detection and rejection of electrooculogram (EOG) and power line noise artefacts from electroencephalogram (EEG) mixtures | |
Cong | Blind source separation | |
CN112861629B (en) | Multi-window distinguishing typical pattern matching method and brain-computer interface application | |
Iaquinta et al. | EEG multipurpose eye blink detector using convolutional neural network | |
CN113408444B (en) | Event-related potential signal classification method based on CNN-SVM | |
CN114519367A (en) | Motor imagery electroencephalogram frequency characteristic analysis method and system based on sample learning | |
CN110516711B (en) | Training set quality evaluation method of MI-BCI system and optimization method of single training sample | |
Wang et al. | Residual learning attention cnn for motion intention recognition based on eeg data | |
CN114403903A (en) | Cross-subject RSVP-oriented multi-feature low-dimensional subspace ERP detection method | |
Chandel et al. | Computer Based Detection of Alcoholism using EEG Signals | |
Hasson-Meir et al. | Inference of Brain Mental States from Spatio-temporal Analysis of EEG Single Trials. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||