CN112932504B - Dipole imaging and identifying method - Google Patents

Dipole imaging and identifying method

Info

Publication number: CN112932504B (application CN202110058762.7A)
Authority: CN (China)
Prior art keywords: dipole, dimensional, imaging, TOI, characteristic data
Legal status: Active
Application number: CN202110058762.7A
Other languages: Chinese (zh)
Other versions: CN112932504A
Inventors: Li Ming-ai (李明爱), Liu Bin (刘斌), Liu You-jun (刘有军), Sun Yan-jun (孙炎珺)
Current Assignee: Beijing University of Technology
Original Assignee: Beijing University of Technology
Application filed by Beijing University of Technology
Priority to CN202110058762.7A
Publication of CN112932504A (application publication)
Application granted
Publication of CN112932504B (granted publication)
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235: Details of waveform analysis
    • A61B5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Fuzzy Systems (AREA)
  • Evolutionary Computation (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a dipole imaging and identification method. First, the standardized low-resolution brain electromagnetic tomography (sLORETA) algorithm is used to inverse-transform the band-pass-filtered scalp EEG signals onto the cerebral cortex. The four motor imagery tasks are then divided into pairwise classification tasks: the dipole amplitude difference between each pair of tasks is computed, a common period with obvious differences is selected as the time of interest (TOI), the regions activated by each task within the TOI are merged into a region of interest (ROI), and the coordinates and amplitudes of the dipoles in the ROI are extracted. Next, for each discrete time point, the dipole coordinates are translated, amplified, and rounded, and each dipole amplitude is assigned to its corresponding coordinate point to construct a two-dimensional dipole imaging map; the maps are stacked along the time dimension into a two-dimensional image sequence. Finally, a sliding-time-window method is used for data augmentation to obtain three-dimensional dipole feature data, which are input into a three-dimensional convolutional neural network (3DCNN) for classification.

Description

Dipole imaging and identifying method
Technical Field
The invention belongs to the technical field of motor imagery electroencephalogram (MI-EEG) identification and processing based on brain source space.
Background
With the increasing number of stroke patients, how to perform efficient rehabilitation training in daily life has become a focus of attention. Because motor imagery electroencephalography (MI-EEG) offers high temporal resolution, low cost, and convenient acquisition, brain-computer interface (BCI) systems based on MI-EEG are widely applied. The key to BCI technology is improving the decoding accuracy of MI-EEG signals.
MI-EEG signals are easily contaminated during acquisition by noise from the sensors and from conduction through the skull, which distorts the signal and reduces the signal-to-noise ratio (SNR). At the same time, the spatial resolution of the measured MI-EEG signal is low because of the volume conduction effect and the limited number of scalp electrodes. Both factors make MI-EEG decoding at the scalp level difficult and reduce recognition accuracy. EEG source imaging (ESI) technology maps EEG signals from the scalp into a high-dimensional brain source space, alleviating problems such as noise interference and insufficient spatial resolution introduced during volume conduction. Dipoles carry abundant time-domain and space-domain information, and how to fully exploit this information to improve MI-EEG decoding accuracy has become a current research hotspot.
In recent years, deep learning (DL) has achieved better performance than traditional machine learning (ML) methods in many fields owing to its powerful learning ability. DL methods can automatically extract and identify features, and some researchers have successfully applied them to MI-EEG recognition. In particular, decoding MI-EEG with the rich spatial and temporal information of dipoles via deep learning methods has good research value.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention combines ESI technology with a convolutional neural network (CNN) and provides an MI-EEG decoding method based on dipole imaging and a 3DCNN. Specifically: first, the standardized low-resolution brain electromagnetic tomography (sLORETA) algorithm is used to inverse-transform the band-pass-filtered scalp EEG signals onto the cerebral cortex. Next, the four motor imagery tasks are divided into pairwise classification tasks; the dipole amplitude difference between each pair of tasks is computed, a common period with obvious differences is selected as the time of interest (TOI), the regions activated by each task within the TOI are merged into a region of interest (ROI), and the coordinates and amplitudes of the dipoles in the ROI are extracted. Then, for each discrete time point, the dipole coordinates are translated, amplified, and rounded, and each dipole amplitude is assigned to its corresponding coordinate point to construct a two-dimensional dipole imaging map; the maps are stacked along the time dimension into a two-dimensional image sequence. Finally, a sliding-time-window method is used for data augmentation to obtain three-dimensional dipole feature data, which are input into a three-dimensional convolutional neural network (3DCNN) for classification. The specific design is as follows:
(1) Divide the four motor imagery tasks into pairwise classification tasks, compute the dipole amplitude difference between each pair of tasks, select a common period with obvious differences as the time of interest (TOI), merge the regions activated by each task within the TOI into a region of interest (ROI), and extract the coordinates and amplitudes of the dipoles in the ROI. This avoids information redundancy while improving computational efficiency.
(2) For each discrete time point, translate, amplify, and round the dipole coordinates, and assign each dipole amplitude to its corresponding coordinate point to obtain a two-dimensional dipole imaging map. Stack the dipole imaging maps along the time dimension into a two-dimensional image sequence.
(3) Augment the two-dimensional image sequence with a sliding-time-window method to obtain three-dimensional dipole feature data, and input them into the 3DCNN for classification. Finally, the fully connected layer and the softmax layer output the probability of each EEG signal class.
The method comprises the following specific steps:
step1 MI-EEG Signal Pre-processing
Step1.1 Assume X_{m,i} ∈ R^{N_c×N_s} is the scalp EEG signal acquired in the i-th trial of class m, with m ∈ {1,2,3,4} and i ∈ {1,2,...,N_m}, where N_m is the number of acquisition trials, N_c is the number of leads (channels), and N_s is the number of sampling points.
Step1.2 Based on neurophysiological knowledge, band-pass filter the EEG signal X_{m,i} to 8-30 Hz with a band-pass filter and down-sample at 1000 Hz to obtain the preprocessed MI-EEG signal X'_{m,i}.
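The preprocessing in Step1.2 can be sketched in a few lines of numpy. The patent does not state the filter design, so an ideal FFT band-pass mask stands in here; the function name and the toy signal are illustrative only.

```python
import numpy as np

def bandpass_fft(x, fs, f_lo=8.0, f_hi=30.0):
    """Brick-wall FFT band-pass of a (channels, samples) EEG array.

    A simplification: the patent only states an 8-30 Hz band-pass,
    not the filter family, so an ideal spectral mask stands in here.
    """
    n = x.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)      # frequency of each rFFT bin
    spec = np.fft.rfft(x, axis=-1)
    mask = (freqs >= f_lo) & (freqs <= f_hi)    # keep only the 8-30 Hz band
    spec[..., ~mask] = 0.0
    return np.fft.irfft(spec, n=n, axis=-1)

# toy check: a 10 Hz component survives, a 50 Hz component is removed
fs = 250.0
t = np.arange(1000) / fs
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)
y = bandpass_fft(x[None, :], fs)[0]
```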
Step2 Construction of the dipole imaging map
Step2.1 Based on the sLORETA algorithm, perform dipole source estimation on the preprocessed MI-EEG signal X'_{m,i} to obtain the dipole source estimation sequence S_{m,i} ∈ R^{N_d×N_s}, where N_d is the number of cortical dipoles.
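Step2.1 names sLORETA but gives no formulas; the following is a minimal sketch of a standardized minimum-norm inverse in the sLORETA spirit, with an illustrative regularization parameter `lam` and a random toy lead field. It is not the authors' implementation.

```python
import numpy as np

def sloreta(G, X, lam=1e-2):
    """Minimal sLORETA-style inverse (a sketch, not the authors' code).

    G : (n_channels, n_dipoles) lead field matrix
    X : (n_channels, n_samples) preprocessed EEG
    Returns standardized dipole time series of shape (n_dipoles, n_samples).
    """
    n_c = G.shape[0]
    K = G @ G.T + lam * np.eye(n_c)          # regularized channel covariance
    W = G.T @ np.linalg.inv(K)               # minimum-norm inverse operator
    J = W @ X                                # raw current estimates
    # sLORETA standardization: scale by sqrt of resolution-matrix diagonal
    R = W @ G                                # resolution matrix
    s = np.sqrt(np.clip(np.diag(R), 1e-12, None))
    return J / s[:, None]

rng = np.random.default_rng(0)
G = rng.standard_normal((22, 200))   # toy lead field: 22 channels, 200 dipoles
X = rng.standard_normal((22, 50))
S = sloreta(G, X)
```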
Step2.2 Joint selection of TOI and ROI
Let the average amplitude of the N_d dipoles in the i-th trial of class m be

    s̄_{m,i}(t) = (1/N_d) Σ_{j=1}^{N_d} |s_{m,i,j}(t)|   (1)

and let s̄_m(t) = (1/N_m) Σ_{i=1}^{N_m} s̄_{m,i}(t) denote the trial-averaged response of class m. The differences between the average dipole responses of classes 1 and 2 and of classes 3 and 4 are then

    D_{1,2}(t) = s̄_1(t) - s̄_2(t),   D_{3,4}(t) = s̄_3(t) - s̄_4(t)   (2)

Finally, by analyzing D_{1,2}(t) and D_{3,4}(t), a common time interval in which the dipole response amplitudes of the paired motor imagery tasks differ markedly is selected as the time of interest (TOI); the number of time points in the TOI is denoted N_T. The dipoles activated by the m-th motor imagery task within the TOI form the ROI of that class, denoted R_m, so the ROI of the four motor imagery tasks is R = R_1 ∪ R_2 ∪ R_3 ∪ R_4, where ∪ denotes the union.
Step2.3 dipole coordinate transformation
Extract the coordinates and amplitudes of the dipoles within the TOI and the ROI, and transform the coordinates as follows.
First, translate each dipole coordinate (x, y):

    (x_1, y_1) = (x + δ_x + ε, y + δ_y + ε)   (3)

so that x_1 > 0 and y_1 > 0, where ε > 0 and δ_x, δ_y are the translation amounts of x and y, respectively.
Then amplify the translated coordinates (x_1, y_1) by a positive integer n and round with the round function:

    (x_2, y_2) = (x_1, y_1) × n   (4)
    (x_3, y_3) = round(x_2, y_2)   (5)

For each discrete time point, assign the dipole amplitude to the corresponding coordinate position (x_3, y_3), yielding one dipole imaging map of size N_a × N_b.
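A minimal sketch of the translate, amplify, round, and assign recipe above; `delta`, `eps`, `n_scale`, the grid shape, and the toy coordinates are illustrative parameters, not the authors' values.

```python
import numpy as np

def dipole_image(xy, amp, delta, eps, n_scale, shape):
    """Build one 2-D dipole imaging map from ROI dipole coords and amplitudes.

    Follows the patent's translate -> amplify -> round -> assign recipe.
    """
    x1y1 = xy + delta + eps                # translation so coords are positive
    x2y2 = x1y1 * n_scale                  # amplify by a positive integer n
    x3y3 = np.round(x2y2).astype(int)      # round to integer grid indices
    img = np.zeros(shape)
    for (ix, iy), a in zip(x3y3, amp):
        img[ix, iy] = a                    # assign amplitude to grid point
    return img

xy = np.array([[-0.8, 0.2], [0.1, -0.3], [0.5, 0.9]])   # toy dipole coords
amp = np.array([1.0, 2.0, 3.0])
img = dipole_image(xy, amp, delta=1.0, eps=0.01, n_scale=10, shape=(38, 45))
```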
Step2.4 Construction of the two-dimensional dipole imaging map sequence
Stack the two-dimensional dipole imaging maps of the N_T time points within the TOI along the time dimension to form a two-dimensional dipole imaging map sequence.
Step3 utilizes 3DCNN to identify three-dimensional dipole characteristic data
Step3.1 Data augmentation to obtain three-dimensional dipole feature data
Augment the two-dimensional dipole imaging maps with a sliding-time-window method. Set the window length to N_w, so that the two-dimensional dipole images inside the sliding window form three-dimensional dipole feature data of size N_a × N_b × N_w; set the sliding step to N_st, expanding the three-dimensional dipole feature data to S times the original amount.
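The sliding-window augmentation of Step3.1 can be sketched as follows; the function name is illustrative, and the window and step values match the embodiment described later (window 5, step 3, N_T = 20, giving S = (20 - 5) // 3 + 1 = 6 samples).

```python
import numpy as np

def sliding_windows(seq, n_w, n_st):
    """Cut a 2-D image sequence into 3-D samples with a sliding time window.

    seq : (N_a, N_b, N_T) stacked dipole imaging maps
    Returns an array of shape (S, N_a, N_b, n_w), S = (N_T - n_w) // n_st + 1.
    """
    n_t = seq.shape[-1]
    starts = range(0, n_t - n_w + 1, n_st)           # window start indices
    return np.stack([seq[..., s:s + n_w] for s in starts])

seq = np.zeros((38, 45, 20))                  # N_T = 20, as in the embodiment
out = sliding_windows(seq, n_w=5, n_st=3)     # window 5, step 3 -> 6 samples
```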
Step3.2 design 3DCNN
The model consists of two 3D convolutional layers, one 3D pooling layer, two batch normalization (BN) layers, one Flatten layer, two Dropout layers, and two Dense layers. Each convolutional layer uses the same 5 × 5 × 5 kernel, and the max-pooling layer uses a 3 × 3 × 3 pooling window. ReLU and Softmax are chosen as the activation functions. The BN and Dropout layers mainly speed up network training and reduce overfitting. The network structure is shown in the following table:
TABLE 1 deep convolutional network architecture
[The layer-by-layer parameters of Table 1 appear only as images in the original publication.]
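The 5 × 5 × 5 convolution and 3 × 3 × 3 max pooling named above are the model's two core operations. The following plain-numpy, single-channel sketch shows them in isolation (it is not the actual Keras network; the 'same' padding and the averaging kernel are illustrative choices):

```python
import numpy as np

def conv3d_same(x, k):
    """Naive 'same'-padded single-channel 3-D convolution (teaching sketch)."""
    pad = [(s // 2, s // 2) for s in k.shape]
    xp = np.pad(x, pad)                       # zero-pad so output keeps shape
    out = np.zeros_like(x, dtype=float)
    kd, kh, kw = k.shape
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            for l in range(x.shape[2]):
                out[i, j, l] = np.sum(xp[i:i + kd, j:j + kh, l:l + kw] * k)
    return out

def maxpool3d(x, p=3):
    """Non-overlapping 3-D max pooling with window p (edges truncated)."""
    d, h, w = (s // p for s in x.shape)
    out = np.zeros((d, h, w))
    for i in range(d):
        for j in range(h):
            for l in range(w):
                out[i, j, l] = x[i*p:(i+1)*p, j*p:(j+1)*p, l*p:(l+1)*p].max()
    return out

x = np.random.default_rng(2).standard_normal((38, 45, 5))   # one 3-D sample
y = conv3d_same(x, np.ones((5, 5, 5)) / 125.0)              # 5x5x5 kernel
z = maxpool3d(y, p=3)                                       # 3x3x3 pooling
```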
Step3.3 Identification of three-dimensional dipole feature data
Input the augmented three-dimensional dipole feature data into the 3DCNN model proposed in Step3.2 for identification and classification. 90% of the data samples are used to train the network and 10% are used for testing.
The invention has the following advantages:
(1) To address the low spatial resolution and low signal-to-noise ratio of MI-EEG signals at the scalp, the sLORETA algorithm maps the scalp EEG signals into a high-dimensional brain source space, which helps overcome the influence of the volume conduction effect during signal propagation. The TOI and ROI are jointly selected in the brain source domain, the dipole coordinates are transformed, and the dipole amplitudes are assigned to the corresponding coordinates to construct dipole imaging maps. The temporal and spatial information of the dipoles is thus comprehensively utilized, fusing the time-domain and space-domain information of MI-EEG in the brain source domain.
(2) The invention uses a 3DCNN model instead of a traditional feature extraction and classification algorithm. The 3DCNN can fully learn the features contained in the dipole imaging map sequence and effectively improve the decoding accuracy of MI-EEG signals.
Drawings
FIG. 1 is a technical flow chart of the present invention;
FIG. 2.1 is a timing chart of an electroencephalogram acquisition experiment;
FIG. 2.2 is a diagram of an electroencephalogram acquisition electrode distribution;
FIG. 3.1 is a graph of the dipole amplitude average response difference of subject S1 for the left-hand and right-hand motor imagery tasks;
FIG. 3.2 is a graph of the dipole amplitude average response difference of subject S1 for the foot and tongue motor imagery tasks;
FIGS. 3.3(a)-(d) show the dipole activation maps of subject S1 for the imagined left hand, right hand, foot, and tongue tasks, respectively;
FIG. 3.4 shows the ROI selection result of subject S1;
FIGS. 3.5(a)-(d) show the dipole imaging maps of subject S1 for the imagined left hand, right hand, foot, and tongue tasks, respectively;
fig. 4 is a structural diagram of 3DCNN according to the present invention.
Detailed Description
The specific experiment of the invention is carried out in MATLAB R2018b and Python Keras environment under Windows 10 (64-bit) operating system.
The Data set 2a of BCI Competition IV is used in the present invention. The data set contains 9 subjects, each wearing an international 10-20 standard 22-lead electrode cap to complete the acquisition experiment, with a sampling frequency of 250 Hz and band-pass filtering of 0.5-100 Hz. The scalp electrode distribution is shown in FIG. 2.2.
The acquisition timing diagram is shown in FIG. 2.1; each trial lasts 7.5 s. During 0-2 s (rest state) a '+' cursor appears on the screen, and a short beep sounds at t = 0 s; during 2-3.5 s (cue period) an arrow pointing left, right, up, or down appears on the screen, indicating the left-hand, right-hand, foot, or tongue motor imagery task, respectively; during 3-6 s (motor imagery period) the subject performs motor imagery according to the cue arrow; during 6-7.5 s (rest period) the screen goes black and the subject rests; the next trial then begins. The Data set 2a contains 576 trials per subject (144 for each of the four motor imagery tasks) and 1875 sampling points per trial.
Step1 MI-EEG Signal preprocessing
Step1.1 According to the task class labels (left hand m = 1, right hand m = 2, foot m = 3, tongue m = 4), extract each single motor imagery trial X_{m,i} ∈ R^{22×1875} of the subject, where i ∈ {1,2,3,...,144}, giving 576 sets of MI-EEG data in total.
Step1.2 Apply 8-30 Hz band-pass filtering and 1000 Hz down-sampling to each of the 144 trials of each motor imagery class, obtaining the preprocessed MI-EEG signals, denoted X'_{m,i} ∈ R^{22×1875}.
Step2 Construction of the dipole imaging map
Step2.1 Select the ICBM152 standard head model and compute the lead field matrix G ∈ R^{15002×22} using the boundary element method (BEM).
Step2.2 Based on the sLORETA algorithm, solve the dipole source estimation for the preprocessed MI-EEG signal X'_{m,i}, obtaining the time-series estimates of the 15002 cortical dipoles, S_{m,i} ∈ R^{15002×1875}.
Step2.3 TOI and ROI Joint selection
FIGS. 3.1 and 3.2 show the dipole mean response amplitude differences of subject S1 for the left-hand/right-hand and foot/tongue motor imagery tasks, respectively. As shown, the common period in which the paired motor imagery tasks differ most is concentrated mainly in 500-600 ms. According to the dipole activation maps of the four motor imagery tasks in FIG. 3.3, within 540-559 ms the four motor imagery tasks of S1 exhibit clearly separable activation areas, so N_T = 20. To show more clearly how the dipole activation evolves over time, the activation map is displayed every 5 ms. The activated regions are located mainly in the parietal and occipital areas of the cortex, corresponding to the SP_L, SP_R, IP_L, IP_R, LO_L, and LO_R regions of the Desikan-Killiany parcellation.
Step2.4 construction of dipole imaging map
Extract the coordinates and amplitudes of the dipoles within the TOI and ROI; after coordinate transformation, interpolate the dipole amplitudes onto the corresponding coordinates to obtain a 38 × 45 dipole imaging map. The two-dimensional dipole imaging maps are stacked along the time dimension into a two-dimensional image sequence of size 38 × 45 × 20.
Step3 classification based on 3DCNN
Augmentation of Step3.1 data to obtain three-dimensional dipole characteristic data
Augment the two-dimensional dipole imaging maps with a sliding-time-window method. Set the window length to 5 ms, so that the two-dimensional dipole imaging maps inside the sliding window form one 38 × 45 × 5 three-dimensional dipole feature sample; set the sliding step to 3 ms, expanding the three-dimensional dipole feature data to 6 times the original amount.
Step3.2 identification of three-dimensional image sequences
Input the three-dimensional dipole feature data into the 3DCNN model for classification. 90% of the data samples are used for training the network and 10% for testing. During network training, Batch_Size is set to 64 and the initial learning rate to 0.0001; the Adam optimizer performs first-order gradient optimization of the stochastic objective function. The loss stabilizes after 40 epochs, and the test set results for each subject are shown in the following table:
TABLE 2 Classification results of the individual subjects
Subject               S1     S2     S3     S4     S5     S6     S7     S8     S9     Mean
Recognition rate (%)  92.59  93.81  92.59  93.11  94.29  92.09  91.92  90.91  91.36  92.52

Claims (1)

1. An MI-EEG decoding method based on dipole imaging and 3DCNN, characterized in that:
step1 MI-EEG signal preprocessing;
step1.1 hypothesis
Figure FDA0003701707500000011
For the scalp electroencephalogram collected from the ith experiment of the mth category, m ∈ {1,2,3,4}, i ═ 1,2,3, …, N m }; wherein N is m Representing the times of collecting experiments; n is a radical of c Representing the number of lead connections; n is a radical of s Representing the number of sampling points;
step1.2 use band-pass filter to filter EEG signal X according to neurophysiological knowledge m,i Filtering to 8-30Hz, andthe signals are down-sampled by 1000Hz, and the obtained signals are recorded as
Figure FDA0003701707500000012
Step2 Construction of the dipole imaging map;
Step2.1 Based on the sLORETA algorithm, perform dipole source estimation on the preprocessed MI-EEG signal X'_{m,i} to obtain the dipole source estimation sequence S_{m,i} ∈ R^{N_d×N_s}, where N_d is the number of cortical dipoles;
combined selection of Step2.2 TOI and ROI
Suppose the m-th class i-th experiment N d The average amplitude of each dipole is:
Figure FDA0003701707500000014
then the difference values of the dipole average responses of the two types 1 and 2 and the two types 3 and 4 of motor imagery task are respectively:
Figure FDA0003701707500000015
Figure FDA0003701707500000016
finally by analysis
Figure FDA0003701707500000017
And
Figure FDA0003701707500000018
selecting a common time interval with obvious difference of dipole response amplitudes of two types of motor imagery tasks as interesting time TOI, and recording time in the TOIThe number of dots is N T (ii) a The dipoles activated by the mth motor imagery task in the TOI form the ROI of the type and are marked as R m Then the ROI of the four types of motor imagery tasks is expressed as R ∈ R 1 ∪R 2 ∪R 3 ∪R 4 The result is that U represents a union set;
step2.3 dipole coordinate transformation
Extract the coordinates and amplitudes of the dipoles within the TOI and the ROI, and transform the coordinates as follows:
translate the dipole coordinates (x, y):

    (x_1, y_1) = (x + δ_x + ε, y + δ_y + ε)   (4)

so that x_1 > 0 and y_1 > 0, where ε > 0 and δ_x, δ_y are the translation amounts of x and y, respectively;
then amplify the translated coordinates (x_1, y_1) by n and round with the round function:

    (x_2, y_2) = (x_1, y_1) × n   (5)
    (x_3, y_3) = round(x_2, y_2)   (6)

for each discrete time point, interpolate the dipole amplitude to its corresponding coordinate position (x_3, y_3) to obtain one two-dimensional dipole imaging map of size N_a × N_b;
construction of two-dimensional dipole imaging graph sequence by Step2.4
Will be N within TOI T Stacking the two-dimensional dipole imaging graphs of each time point according to the time dimension to form a two-dimensional dipole imaging graph sequence;
step3 utilizes 3DCNN to identify three-dimensional dipole characteristic data
Step3.1 Data augmentation to obtain three-dimensional dipole feature data
Augment the two-dimensional dipole imaging maps with a sliding-time-window method; set the window length to N_w, so that the two-dimensional dipole images inside the sliding window form three-dimensional dipole feature data of size N_a × N_b × N_w; set the sliding step to N_st, expanding the three-dimensional dipole feature data to S times the original amount;
Step3.2 The designed 3DCNN model consists of two 3D convolutional layers, one 3D pooling layer, two batch normalization (BN) layers, one Flatten layer, two Dropout layers, and two Dense layers;
Step3.3 Identification of three-dimensional dipole feature data
Input the three-dimensional dipole feature data into the 3DCNN model designed in Step3.2 for identification and classification.
CN202110058762.7A 2021-01-16 2021-01-16 Dipole imaging and identifying method Active CN112932504B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110058762.7A CN112932504B (en) 2021-01-16 2021-01-16 Dipole imaging and identifying method


Publications (2)

Publication Number Publication Date
CN112932504A CN112932504A (en) 2021-06-11
CN112932504B true CN112932504B (en) 2022-08-02

Family

ID=76235379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110058762.7A Active CN112932504B (en) 2021-01-16 2021-01-16 Dipole imaging and identifying method

Country Status (1)

Country Link
CN (1) CN112932504B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114098762A (en) * 2021-11-26 2022-03-01 江苏科技大学 Electric model of de novo cortical electroencephalogram (BCG)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105468143A (en) * 2015-11-17 2016-04-06 天津大学 Feedback system based on motor imagery brain-computer interface
CN109199376A (en) * 2018-08-21 2019-01-15 北京工业大学 The coding/decoding method of Mental imagery EEG signals based on the imaging of OA-WMNE brain source
CN109726751A (en) * 2018-12-21 2019-05-07 北京工业大学 Method based on depth convolutional neural networks identification brain Electrical imaging figure
CN109965869A (en) * 2018-12-16 2019-07-05 北京工业大学 MI-EEG recognition methods based on brain source domain space
CN110584660A (en) * 2019-09-05 2019-12-20 北京工业大学 Electrode selection method based on brain source imaging and correlation analysis
CN111460892A (en) * 2020-03-02 2020-07-28 五邑大学 Electroencephalogram mode classification model training method, classification method and system
CN112120694A (en) * 2020-08-19 2020-12-25 中国地质大学(武汉) Motor imagery electroencephalogram signal classification method based on neural network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9125581B2 (en) * 2012-05-23 2015-09-08 Seiko Epson Corporation Continuous modeling for dipole localization from 2D MCG images with unknown depth
KR101962276B1 (en) * 2017-09-07 2019-03-26 고려대학교 산학협력단 Brain-computer interface apparatus and brain-computer interfacing method for manipulating robot arm apparatus

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105468143A (en) * 2015-11-17 2016-04-06 天津大学 Feedback system based on motor imagery brain-computer interface
CN109199376A (en) * 2018-08-21 2019-01-15 北京工业大学 The coding/decoding method of Mental imagery EEG signals based on the imaging of OA-WMNE brain source
CN109965869A (en) * 2018-12-16 2019-07-05 北京工业大学 MI-EEG recognition methods based on brain source domain space
CN109726751A (en) * 2018-12-21 2019-05-07 北京工业大学 Method based on depth convolutional neural networks identification brain Electrical imaging figure
CN110584660A (en) * 2019-09-05 2019-12-20 北京工业大学 Electrode selection method based on brain source imaging and correlation analysis
CN111460892A (en) * 2020-03-02 2020-07-28 五邑大学 Electroencephalogram mode classification model training method, classification method and system
CN112120694A (en) * 2020-08-19 2020-12-25 中国地质大学(武汉) Motor imagery electroencephalogram signal classification method based on neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"A novel approach of decoding EEG four-class motor imagery tasks via scout ESI and CNN"; Yimin Hou et al.; Journal of Neural Engineering; 2020-02-05; full text *
"A novel decoding method for motor imagery tasks with 4D data representation and 3D convolutional neural networks"; Ming-ai Li et al.; Journal of Neural Engineering; 2021-04-26; full text *

Also Published As

Publication number Publication date
CN112932504A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
CN110069958B (en) Electroencephalogram signal rapid identification method of dense deep convolutional neural network
CN110288018B (en) WiFi identity recognition method fused with deep learning model
CN112120694B (en) Motor imagery electroencephalogram signal classification method based on neural network
CN111062250B (en) Multi-subject motor imagery electroencephalogram signal identification method based on deep feature learning
CN112381008B (en) Electroencephalogram emotion recognition method based on parallel sequence channel mapping network
CN109965869B (en) MI-EEG identification method based on brain source domain space
CN102940490A (en) Method for extracting motor imagery electroencephalogram signal feature based on non-linear dynamics
CN114089834A (en) Electroencephalogram identification method based on time-channel cascade Transformer network
CN113017645B (en) P300 signal detection method based on void convolutional neural network
CN111273767A (en) Hearing-aid brain computer interface system based on deep migration learning
CN112932504B (en) Dipole imaging and identifying method
CN114781441B (en) EEG motor imagery classification method and multi-space convolution neural network model
CN112932503B (en) Motor imagery task decoding method based on 4D data expression and 3DCNN
CN117520891A (en) Motor imagery electroencephalogram signal classification method and system
CN116421200A (en) Brain electricity emotion analysis method of multi-task mixed model based on parallel training
CN114428555B (en) Electroencephalogram movement intention recognition method and system based on cortex source signals
CN110432899B (en) Electroencephalogram signal identification method based on depth stacking support matrix machine
CN115374831A (en) Dynamic and static combination velocity imagery classification method for multi-modal registration and space-time feature attention
Huang et al. Cross-subject MEG decoding using 3D convolutional neural networks
CN114580464A (en) Human heart rate variability and respiratory rate measurement method based on variational modal decomposition and constraint independent component analysis
CN106354990B (en) Method for detecting consistency of EEG and fMRI
CN112016415B (en) Motor imagery classification method combining ensemble learning and independent component analysis
CN114266276B (en) Motor imagery electroencephalogram signal classification method based on channel attention and multi-scale time domain convolution
Wang et al. Multi-channel LFP recording data compression scheme using Cooperative PCA and Kalman Filter
CN116226622A (en) Biomedical signal processing and analyzing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant