CN110321856B - Time-frequency multi-scale divergence CSP brain-computer interface method and device - Google Patents


Info

Publication number
CN110321856B
CN110321856B (application CN201910609453.7A)
Authority
CN
China
Prior art keywords
frequency
time
electroencephalogram
divergence
csp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910609453.7A
Other languages
Chinese (zh)
Other versions
CN110321856A (en)
Inventor
周卫东
刘国洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201910609453.7A priority Critical patent/CN110321856B/en
Publication of CN110321856A publication Critical patent/CN110321856A/en
Application granted granted Critical
Publication of CN110321856B publication Critical patent/CN110321856B/en
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/24: Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316: Modalities, i.e. specific diagnostic methods
    • A61B 5/369: Electroencephalography [EEG]
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235: Details of waveform analysis
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411: Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00: Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/08: Feature extraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00: Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12: Classification; Matching

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Psychiatry (AREA)
  • Public Health (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Physiology (AREA)
  • Signal Processing (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Fuzzy Systems (AREA)
  • Psychology (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention relates to a brain-computer interface method and device based on time-frequency multi-scale divergence CSP (Common Spatial Pattern). Electroencephalogram (EEG) signals are first acquired through an EEG amplifier and A/D (analog-to-digital) conversion, then sent to a computer for processing, where time-frequency multi-scale segmentation of the EEG signals and selection of time-frequency EEG segments are carried out. A time-frequency multi-scale classifier is generated from one-to-one divergence CSP filters and one-to-many SVMs (support vector machines), and this classifier completes the classification of the EEG signals. By means of time-domain and frequency-domain multi-scale segmentation, time-frequency EEG segment selection and divergence CSP, the invention improves both the running speed and the recognition accuracy of the motor imagery brain-computer interface.

Description

Brain-computer interface method and device for time-frequency multi-scale divergence CSP
Technical Field
The invention relates to a brain-computer interface method and device of time-frequency multi-scale divergence CSP, and belongs to the technical field of brain-computer interfaces.
Background
At present, many patients have lost the basic ability to communicate with the outside world through language or limb movement owing to severe motor disorders such as stroke or amyotrophic lateral sclerosis. This seriously affects the patients' quality of life and also places a heavy burden on their families and on society. With the development of biomedical engineering and growing attention to rehabilitation medicine, Brain-Computer Interface (BCI) technology has become one of the research hot spots of recent years. A brain-computer interface is a communication system that does not depend on the brain's conventional peripheral nerve and muscle pathways, and instead directly establishes information exchange and control between the brain and electronic equipment such as a computer. In the field of rehabilitation medicine, brain-computer interface technology can help patients with limb disabilities or brain injuries to control external equipment, such as wheelchairs, prosthetic limbs and household appliances.
Different motor imagery modes activate different areas of the cerebral cortex. Unilateral limb movement, or imagined movement, activates the primary sensorimotor cortex, producing Event-Related Desynchronization (ERD) on the contralateral side of the brain and Event-Related Synchronization (ERS) on the ipsilateral side. ERD means that rhythmic activity at a certain frequency shows a decrease in amplitude when a cortical region is active; ERS means that activity at a certain frequency shows an increase in amplitude when, at a given moment, the activity does not significantly engage the relevant cortical region. Motor imagery thus causes either an amplitude decrease (event-related desynchronization, ERD) or an amplitude increase (event-related synchronization, ERS) of the mu (μ) rhythm at 8-12 Hz and the beta rhythm at 13-28 Hz.
At present there are many feature extraction methods for motor imagery, such as Common Spatial Pattern (CSP) filters, frequency-band power, autoregressive coefficients and Riemannian geometry features. The method currently performing best for motor imagery classification is the Filter Bank CSP (FBCSP) algorithm proposed by Cuntai Guan et al. in 2008; however, that algorithm does not perform multi-scale segmentation of the EEG segments in the time and frequency domains, and its accuracy and generalization are limited.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a brain-computer interface method based on time-frequency multi-scale divergence CSP. The method performs multi-scale segmentation of the collected EEG signals in the time and frequency domains, then applies one-to-one divergence-based CSP (divergence CSP) feature extraction to each scale segment, selects time-frequency EEG segments from the obtained features using a single-layer neural network, applies one-to-one divergence CSP feature extraction to the selected time-frequency EEG segments, and generates a time-frequency multi-scale classifier with one-to-many SVMs to classify the EEG signals, thereby obtaining the EEG state detection result.
The invention also provides a device for executing the time-frequency multi-scale divergence CSP brain-computer interface method.
Summary of The Invention
A time-frequency multi-scale divergence CSP brain-computer interface method detects the EEG state on a hardware platform formed by an EEG amplifier, an A/D acquisition unit and a computer. First, EEG signals are collected through the EEG amplifier and A/D conversion; then the collected raw EEG signals are sent to a computer for processing, where multi-scale segmentation in the time and frequency domains and time-frequency EEG segment selection are carried out. One-to-one divergence CSP feature extraction and one-to-many SVM training are applied to the selected time-frequency EEG segments to generate a time-frequency multi-scale classifier; finally, classification of the EEG signals is completed by this classifier and a control command is issued.
Interpretation of terms:
a two-class SVM refers to a support vector machine classifier that performs binary classification.
Detailed Description
The technical scheme of the invention is as follows:
a time-frequency multi-scale divergence CSP brain-computer interface method comprises the following steps:
1) An EEG amplifier and an A/D converter are used to collect the EEG signals generated while an experimenter imagines K classes of movement, and the signals are stored in a computer. The sampling frequency is Fs, the EEG length is L, each class is collected N times, and M = N × K samples are collected in total. The EEG signal of the experimenter imagining the k-th class of movement is S_k, with corresponding class label k, k = 1, ..., K. The K imagined movement classes are, for example, left-hand, right-hand, toe and tongue movement;
2) The EEG signal S_k is segmented at N_t scales in the time domain. At scale n, S_k is divided, with each segment overlapping the previous one by 50%, into ceil(2L/T_n) - 1 segments, each of length T_n = round(L/2^(n-1)), where n = 1, ..., N_t. If a remainder shorter than T_n is left after this segmentation, one additional segment of length T_n containing the remainder is taken as a segment at that scale. The EEG signal S_k can thus be divided in the time domain into

N_st = Σ_{n=1}^{N_t} (ceil(2L/T_n) - 1)

EEG segments; round(·) is the rounding function and ceil(·) is the round-up function;
3) Each EEG segment obtained in step 2) is divided at N_f scales in the frequency domain within the band [F_min, F_max], where 0 < F_min < F_max < Fs, F_min being the lower band limit and F_max the upper band limit. At scale m, the EEG segment is divided, with each band overlapping the previous one by 50%, into floor(2(F_max - F_min)/(D_m - 1)) - 1 bands of bandwidth D_m, m = 1, ..., N_f. If a residual band is left within [F_min, F_max] after this division, the residual band is not used. Each EEG segment obtained in step 2) is thus divided into

N_sf = Σ_{m=1}^{N_f} (floor(2(F_max - F_min)/(D_m - 1)) - 1)

time-frequency EEG segments of bandwidth D_m, where N_f = floor(log2(F_max - F_min + 1) - 1), D_m = 1 + 2^(m+1), and floor(·) is the round-down function;
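The two segmentation rules can be made concrete with a short sketch (illustrative Python, not part of the claims; the segment-count formulas are reconstructed from the worked example in Example 1, which uses L = 1000 samples and the 4-38 Hz band):

```python
import math

def time_scale_counts(L, n_scales):
    """Number of 50%-overlapping segments of length T_n = round(L / 2**(n-1))
    at each time scale n (one extra segment absorbs any remainder)."""
    counts = []
    for n in range(1, n_scales + 1):
        T_n = round(L / 2 ** (n - 1))
        counts.append(math.ceil(2 * L / T_n) - 1)
    return counts

def freq_scale_counts(f_min, f_max):
    """Number of 50%-overlapping bands of width 2**(m+1) Hz (D_m - 1 in the
    text's notation) at each frequency scale m within [f_min, f_max];
    any leftover band is discarded."""
    n_f = math.floor(math.log2(f_max - f_min + 1) - 1)
    counts = []
    for m in range(1, n_f + 1):
        width = 2 ** (m + 1)
        counts.append(math.floor(2 * (f_max - f_min) / width) - 1)
    return counts

t_counts = time_scale_counts(L=1000, n_scales=3)   # per-scale segment counts
f_counts = freq_scale_counts(f_min=4, f_max=38)    # per-scale band counts
```

With the Example 1 parameters this yields N_st = 11 time segments, N_sf = 27 bands, and E = N_st × N_sf = 297 time-frequency EEG segments, matching the numbers quoted later in the text.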
4) Wrapper-based time-frequency EEG segment selection is applied to the time-frequency EEG segments obtained in steps 2) and 3), as follows:
a) Let the total number of time-frequency EEG segments be E = N_st × N_sf. For each time-frequency EEG segment P_s, the corresponding set of N_p samples is used for one-to-one divergence CSP training, yielding V divergence CSP filters W_s of size d × Ch, where V is the number of class pairs, Ch is the number of EEG channels, and d is the number of rows retained in the divergence CSP algorithm. Each time-frequency EEG segment P_s is projected onto the V divergence CSP filters for one-to-one divergence CSP filtering, giving the filtered time-frequency EEG segments Z_ws;
b) Logarithmic feature extraction is applied to each row of Z_ws, giving a feature-value vector F_ws whose i-th entry is F_ws,i = log(var(Z_ws,i)), where Z_ws,i is the i-th row of Z_ws and var(·) is the variance function. The V feature-value vectors F_ws corresponding to each time-frequency EEG segment are concatenated into a total feature vector of length Q = V × d, the number of features in the total feature vector;
c) For each time-frequency EEG segment, the set of N_p total feature vectors is used as a training set and the total feature vectors of the remaining M - N_p samples as a validation set; E single-layer artificial neural networks with K-class outputs are trained and tested on their corresponding validation sets, giving E accuracies. The E accuracies are concatenated into an accuracy vector A = [A_1, ..., A_E], where A_j is the j-th accuracy;
d) The accuracy vector A is sorted in descending order, and the time-frequency EEG segments corresponding to the first G accuracy values are taken as the selected time-frequency EEG segments;
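Step d) reduces to ranking and truncation; a minimal sketch (illustrative Python with hypothetical accuracy values, not part of the claims):

```python
def select_top_segments(accuracies, G):
    """Wrapper selection: rank per-segment validation accuracies in
    descending order and keep the indices of the G best segments."""
    ranked = sorted(range(len(accuracies)),
                    key=lambda j: accuracies[j], reverse=True)
    return ranked[:G]

# e.g. validation accuracies of 5 hypothetical time-frequency segments
chosen = select_top_segments([0.61, 0.85, 0.72, 0.90, 0.55], G=2)
```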
5) For each of the G time-frequency EEG segments P_a obtained in step 4), the corresponding set of M samples is used for one-to-one divergence CSP training, yielding V divergence CSP filter matrices. Each time-frequency EEG segment P_a is projected onto the V divergence CSP filters for one-to-one divergence CSP filtering, giving the filtered time-frequency EEG segments Z_wa. Logarithmic feature extraction is applied to each row of Z_wa, giving a feature-value vector F_wa whose i-th entry is F_wa,i = log(var(Z_wa,i)), where Z_wa,i is the i-th row of Z_wa and var(·) is the variance function. The feature vectors obtained from the V divergence CSP filterings of the G time-frequency EEG segments of the M samples are concatenated into a total feature set, where H = V × d × G is the number of features per sample;
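The log-variance feature extraction used in steps 4) and 5) can be sketched as follows (illustrative NumPy, not part of the claims; `Z` stands for one filtered segment of shape d × T):

```python
import numpy as np

def log_variance_features(Z):
    """Log-variance feature of each row of a CSP-filtered segment Z (d x T):
    F_i = log(var(Z_i))."""
    return np.log(np.var(Z, axis=1))

def total_feature_vector(filtered_segments):
    """Concatenate the feature vectors of all V filtered segments of one
    time-frequency EEG segment into a total feature vector of length V * d."""
    return np.concatenate([log_variance_features(Z) for Z in filtered_segments])
```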
6) The total feature set obtained in step 5) and the corresponding class labels are fed into a one-to-many SVM for training; that is, the samples of each class versus the remaining K - 1 classes are fed into a two-class SVM for training, giving K SVM classifiers;
7) The V × G divergence CSP filters obtained in step 5) and the K SVM models obtained in step 6) form the time-frequency multi-scale classifier; this classifier first applies one-to-one divergence CSP filtering to the input time-frequency EEG segments, extracts the logarithmic features, and then applies one-to-many SVM classification to the obtained logarithmic features;
8) The G time-frequency EEG segments obtained from the acquired EEG signal are fed into the time-frequency multi-scale classifier for classification, giving the class label p of the EEG signal, p = 1, ..., K;
9) The class label p is converted into the corresponding control command to control the external equipment.
According to the present invention, preferably, the band division in step 3) means: band-pass filtering the EEG segment with a J-th-order Butterworth filter.
Further preferably, J = 7.
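A minimal sketch of this band-pass step, assuming SciPy (the patent names no library; the second-order-sections form is our choice here to keep the 7th-order design numerically stable):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(segment, f_lo, f_hi, fs=250.0, order=7):
    """Band-pass an EEG segment (..., samples) with a J-th-order Butterworth
    filter (J = 7 as preferred in the text), applied forward-backward
    for zero phase."""
    sos = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, segment, axis=-1)
```

For example, filtering a mixture of 10 Hz and 50 Hz sinusoids through the 8-12 Hz band keeps the 10 Hz component and suppresses the 50 Hz component.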
Preferably, in step 4), the one-to-one divergence CSP training comprises:
(1) taking from the sample set the data of a certain time-frequency EEG segment for one pair of classes, X_1* and X_2*, where N* is the number of training-set samples and T_n is the length of the EEG segment at the n-th time-domain scale, n = 1, ..., N_t;
(2) computing the covariance matrices Σ_1 and Σ_2 of X_1* and X_2*, and computing the whitening matrix P = (Σ_1 + Σ_2)^(-1/2);
(3) randomly initializing the rotation matrix R;
(4) computing the whitened and rotated covariance matrices Σ~_c = RPΣ_c(RP)^T, c = 1, 2;
(5) selecting a maximum number of iterations N_e; with n_e denoting the iteration counter, starting the following loop from n_e = 1:
e) Compute the gradient matrix of the objective function J(R);
The objective function J(R) is represented by formula (I):

J(R) = -(1 - λ) D_KL( I_d Σ~_1 I_d^T ‖ I_d Σ~_2 I_d^T ) + λ Σ_c Σ_i D_KL( I_d RPΣ_c^(i)(RP)^T I_d^T ‖ I_d Σ~_c I_d^T )   (I)

in formula (I), D_KL is the KL-divergence function; Σ~_c = RPΣ_c(RP)^T is the whitened and rotated covariance matrix of class c, and Σ_c^(i) is the covariance matrix of the i-th sample of class c; I_d is the truncation matrix that retains the first d rows; 0 ≤ λ ≤ 1 is a penalty coefficient; c is the class index, and i is the sample index;
f) Let H be the gradient of the objective function at the rotation-matrix update U = I, I being the identity matrix; search t over (0, 1] according to a decreasing rule so that J(e^(tH)R) ≤ J(R);
g) Let U = e^(tH); update the rotation matrix R to UR, and update the rotated covariance matrices Σ~_c to UΣ~_cU^T;
h) If n_e = N_e, end the iteration, exit the loop and go to step (6); otherwise increment n_e by 1 and return to step e);
(6) perform eigenvalue decomposition of the matrix (I_d RP)Σ_1(I_d RP)^T to obtain the corresponding eigenvector matrix B;
(7) sort the eigenvector matrix by eigenvalue in descending order, and project the sorted eigenvector matrix B~ onto the rotation matrix RP to obtain the a-th divergence CSP filter matrix W_a = B~^T I_d RP;
(8) if a < V, jump back to step (1) to process the next class pair; otherwise return the one-to-one divergence CSP filters L_p.
Further preferably, d = 4 and λ = 0.1.
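For reference, the classical closed-form CSP solution, which the iterative divergence optimization above generalizes, can be sketched as follows (illustrative NumPy for the two-class case; this is the standard whitening-plus-eigendecomposition CSP, not the patented divergence iteration):

```python
import numpy as np

def csp_filters(S1, S2, d=4):
    """Classical two-class CSP: whiten the composite covariance S1 + S2,
    eigendecompose the whitened class-1 covariance, and keep d filters
    taken alternately from both ends of the eigenvalue spectrum."""
    # Whitening matrix P = (S1 + S2)^(-1/2) via symmetric eigendecomposition
    evals, evecs = np.linalg.eigh(S1 + S2)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # Eigendecompose the whitened class-1 covariance
    lam, B = np.linalg.eigh(P @ S1 @ P.T)
    order = np.argsort(lam)[::-1]          # sort eigenvalues descending
    B = B[:, order]
    W = B.T @ P                            # full filter matrix, rows = filters
    # keep d rows, alternating most-discriminative filters for each class
    idx = [0, W.shape[0] - 1, 1, W.shape[0] - 2][:d]
    return W[idx]
```

By construction the full filter matrix jointly diagonalizes both class covariances, so the variance of a projected trial directly discriminates the two classes.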
According to a preferred embodiment of the present invention, in the single-layer artificial neural network of step 4), the input layer contains Q neurons and the output layer contains K neurons, where Q = V × d is the number of features in the total feature vector; the input layer and output layer are fully connected; the output layer produces probability values with a softmax function. The cross-entropy function is used as the loss function for model training; with θ denoting all trainable parameters of the model, the optimization objective CE(θ) is given by formula (II):

CE(θ) = -(1/N*) Σ_{i=1}^{N*} Σ_{k=1}^{K} 1{y^(i) = k} · log( exp(z_k^(i)) / Σ_{j=1}^{K} exp(z_j^(i)) )   (II)

in formula (II), z_j^(i) is the weighted sum of sample i at the j-th neuron and z_k^(i) the weighted sum of sample i at the k-th neuron, both depending on θ; y^(i) is the true label of the i-th sample; 1{·} is the indicator function; and N* is the number of samples fed to the model. The weights of the single-layer artificial neural network are updated by stochastic gradient descent, with the number of iterations set to E_p.
Further preferably, E_p = 20.
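Formula (II) is the standard softmax cross-entropy; a minimal NumPy sketch (illustrative, using 0-based labels):

```python
import numpy as np

def cross_entropy(Z, y):
    """Softmax cross-entropy CE(theta) of formula (II): Z is the (N* x K)
    matrix of weighted sums z_j^(i); y holds the true labels (0-based)."""
    Z = Z - Z.max(axis=1, keepdims=True)   # shift for numerical stability
    log_softmax = Z - np.log(np.exp(Z).sum(axis=1, keepdims=True))
    # pick the log-probability of each sample's true class, average, negate
    return -log_softmax[np.arange(len(y)), y].mean()
```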
According to a preferred embodiment of the present invention, the one-to-many SVM classification in step 7) comprises: the input features are fed into the K two-class SVM models, and the input is assigned to the class with the largest classification function value.
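The one-to-many decision rule reduces to an argmax over the K classification-function values; a minimal sketch (illustrative Python with hypothetical decision values):

```python
def one_vs_rest_predict(decision_values):
    """One-to-many SVM decision rule: given the K classification-function
    values for one sample, return the index of the class with the
    largest value."""
    return max(range(len(decision_values)), key=lambda k: decision_values[k])

label = one_vs_rest_predict([-0.3, 1.2, 0.4, -1.1])  # class 1 has the largest value
```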
A device for carrying out the above brain-computer interface method comprises an EEG amplifier, an A/D converter and a computer connected in sequence by circuitry. The computer contains an EEG detection module for detecting the EEG state. The EEG signals are collected by the EEG amplifier and the A/D converter and transmitted to the computer; the EEG detection module applies multi-scale time-frequency segmentation, time-frequency EEG segment selection, one-to-one divergence CSP and one-to-many SVM to the EEG signals, forming a time-frequency multi-scale classifier that classifies the EEG signals; the resulting sample prediction label is converted into a control command for the external equipment.
The invention has the beneficial effects that:
the electroencephalogram signal is subjected to multi-scale segmentation and wrapped time-frequency electroencephalogram section selection in a time domain and a frequency domain according to a certain rule, the selected time-frequency electroencephalogram section is subjected to one-to-one divergence CSP and one-to-many SVM, a multi-scale time-frequency classifier is formed to classify the electroencephalogram signal, and therefore an electroencephalogram signal category label and a control signal are obtained to control external equipment. The invention improves the identification accuracy by utilizing a multi-scale division method in the time domain and the frequency domain; through the selection of the time-frequency brain electric segment, redundant time-frequency brain electric segments are screened out, and the running speed of the motor imagery brain-computer interface is improved.
Drawings
FIG. 1 is a block diagram of the structure of the brain-computer interface device of the present invention;
FIG. 2 is a schematic flow chart of a brain-computer interface method of the time-frequency multi-scale divergence CSP of the present invention;
FIG. 3 is a schematic diagram of a visualized time-frequency image obtained by combining and adding the accuracy vectors according to the corresponding time domain range and frequency domain range;
FIG. 4 is a diagram illustrating the average accuracy achieved by using different numbers of time-frequency brain electrical segments on the motor imagery brain electrical signals of 12 volunteers.
Detailed Description
The invention will be further described with reference to the drawings and examples, but the invention is not limited thereto;
example 1
As shown in fig. 1-4;
according to the invention, the EEG signals are collected through electrodes, amplified by the EEG amplifier, passed through the A/D converter, and then input into the computer, which classifies the EEG signals and generates control commands for external equipment;
a time-frequency multi-scale divergence CSP brain-computer interface method is disclosed, a flow chart of which is shown in figure 2, and comprises the following steps:
1) An EEG amplifier and an A/D converter are used to collect EEG signals of K = 4 classes, generated while an experimenter imagines 4 movements (left hand, right hand, toes and tongue), and the signals are stored in a computer. The sampling frequency is Fs = 250 Hz, the EEG length is L = 1000, each class is collected N = 90 times, and M = N × K = 360 samples are collected in total. The EEG signal of the experimenter imagining the k-th class of movement is S_k, with corresponding label k, k = 1, ..., 4;
2) The EEG signal S_k is segmented at N_t = 3 scales in the time domain. At scale n, S_k is divided, with each segment overlapping the previous one by 50%, into ceil(2L/T_n) - 1 segments of length T_n = round(L/2^(n-1)), n = 1, ..., N_t. If a remainder shorter than T_n is left after this segmentation, one additional segment of length T_n containing the remainder is taken as a segment at that scale. The EEG signal S_k can thus be divided in the time domain into N_st = 11 EEG segments; round(·) is the rounding function and ceil(·) is the round-up function. Concretely, the 1000-point EEG signal is divided at scale 1 into 1 EEG segment of length 1000, points 1-1000; at scale 2 into 3 EEG segments of length 500, points 1-500, 251-750 and 501-1000; and at scale 3 into 7 EEG segments of length 250, points 1-250, 126-375, 251-500, 376-625, 501-750, 626-875 and 751-1000;
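The listed segment boundaries follow directly from the 50%-overlap rule; a short sketch (illustrative Python, using 1-based inclusive indices as in the text) reproduces them:

```python
import math

def time_segments(L, scale):
    """Start/end sample indices (1-based, inclusive) of the 50%-overlapping
    segments of length T_n = round(L / 2**(scale-1)) at a given scale."""
    T = round(L / 2 ** (scale - 1))
    n_seg = math.ceil(2 * L / T) - 1
    return [(1 + k * T // 2, k * T // 2 + T) for k in range(n_seg)]
```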
3) Each EEG segment obtained in step 2) is divided at N_f scales in the frequency domain within the band [F_min = 4 Hz, F_max = 38 Hz]. At scale m, the EEG segment is divided, with each band overlapping the previous one by 50%, into floor(2(F_max - F_min)/(D_m - 1)) - 1 bands of bandwidth D_m, where m = 1, ..., N_f. If a residual band is left within [F_min = 4 Hz, F_max = 38 Hz] after this division, the residual band is not used. Each EEG segment obtained in step 2) can thus be divided into N_sf = 27 time-frequency EEG segments of bandwidth D_m, where N_f = floor(log2(F_max - F_min + 1) - 1) = 4, D_m = 1 + 2^(m+1), and floor(·) is the round-down function. Concretely, the 4-38 Hz band is divided at scale 1 into 16 bands of 4 Hz bandwidth: 4-8 Hz, 6-10 Hz, 8-12 Hz, 10-14 Hz, 12-16 Hz, 14-18 Hz, 16-20 Hz, 18-22 Hz, 20-24 Hz, 22-26 Hz, 24-28 Hz, 26-30 Hz, 28-32 Hz, 30-34 Hz, 32-36 Hz and 34-38 Hz; at scale 2 into 7 bands of 8 Hz bandwidth: 4-12 Hz, 8-16 Hz, 12-20 Hz, 16-24 Hz, 20-28 Hz, 24-32 Hz and 28-36 Hz; at scale 3 into 3 bands of 16 Hz bandwidth: 4-20 Hz, 12-28 Hz and 20-36 Hz; and at scale 4 into 1 band of 32 Hz bandwidth: 4-36 Hz.
The band division in step 3) means: band-pass filtering the EEG segment with a J-th-order Butterworth filter, where J = 7;
4) Wrapper-based time-frequency EEG segment selection is applied to the time-frequency EEG segments obtained in steps 2) and 3), as follows:
a) Let the total number of time-frequency EEG segments be E = N_st × N_sf = 297. For each time-frequency EEG segment P_s, the corresponding set of N_p = 180 samples is used for one-to-one divergence CSP training, yielding V = 6 divergence CSP filters W_s, where Ch = 60 is the number of channels and d = 4 is the number of retained rows set in the divergence CSP algorithm. Each time-frequency EEG segment P_s is projected onto the V divergence CSP filters for one-to-one divergence CSP filtering, giving the filtered time-frequency EEG segments Z_ws;
b) Logarithmic feature extraction is applied to each row of Z_ws, giving a feature-value vector F_ws whose i-th entry is F_ws,i = log(var(Z_ws,i)), where Z_ws,i is the i-th row of Z_ws and var(·) is the variance function. The V feature vectors F_ws corresponding to each time-frequency EEG segment are concatenated into a total feature vector of length Q = V × d;
c) For each time-frequency EEG segment, the set of N_p total feature vectors is used as a training set and the total feature vectors of the remaining M - N_p samples as a validation set; E single-layer artificial neural networks with K-class outputs are trained and tested on their corresponding validation sets, giving E accuracies, which are concatenated into an accuracy vector A, where A_j is the j-th accuracy. A visualized time-frequency image, obtained by combining and summing the accuracy vector over the corresponding time-domain and frequency-domain ranges, is shown in FIG. 3;
d) The accuracy vector A is sorted in descending order, and the time-frequency EEG segments corresponding to the first G = 150 accuracy values are taken as the selected time-frequency EEG segments;
5) For each of the G time-frequency EEG segments obtained in step 4), the corresponding set of M samples of the segment P_a is used for one-to-one divergence CSP training to obtain V divergence CSP filter matrices W_a. Each time-frequency EEG segment P_a is projected onto the V divergence CSP filters for one-to-one divergence CSP filtering, giving the filtered time-frequency EEG segments Z_wa. Logarithmic feature extraction is performed on each row of Z_wa to obtain the feature-value vector F_wa = [f_1, …, f_d], where f_i = log(var(z_i)) is the i-th feature value of F_wa, z_i is the i-th row of Z_wa, and var(·) is the variance function. The feature vectors obtained by the V one-to-one divergence CSPs for the G time-frequency EEG segments of the M samples are concatenated into a total feature set, where H = V × d × G is the number of features per sample;
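The filtering-plus-log-variance pipeline of steps a)–b) and 5) can be sketched as follows. This is a minimal illustration, not the patented implementation: the filter matrices here are random stand-ins for trained divergence CSP filters, and the function name is an assumption.

```python
import numpy as np

def log_variance_features(segment, W):
    """Project a (Ch x T) EEG segment through a (d x Ch) spatial filter W
    and take log(var(.)) of each filtered row, as in steps a)-b)."""
    Z = W @ segment                    # filtered segment, shape (d, T)
    return np.log(np.var(Z, axis=1))   # one log-variance feature per row

rng = np.random.default_rng(1)
Ch, T, d, V = 60, 250, 4, 6
segment = rng.standard_normal((Ch, T))
# Random stand-ins for the V trained divergence CSP filter matrices.
filters = [rng.standard_normal((d, Ch)) for _ in range(V)]

# Concatenate the V feature vectors into the total feature vector, Q = V * d.
F = np.concatenate([log_variance_features(segment, W) for W in filters])
assert F.shape == (V * d,)
```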
6) The total feature set obtained in step 5) and the corresponding class labels are fed into a one-to-many SVM for training; that is, the samples of each class versus the remaining 3 classes are fed into a binary SVM for training, giving 4 SVM classifiers;
7) The V × G divergence CSP filters obtained in step 5) and the 4 SVM models obtained in step 6) form a time-frequency multi-scale classifier; the classifier first performs one-to-one divergence CSP filtering on the input time-frequency EEG segments, extracts logarithmic features, and then performs one-to-many SVM classification on the obtained logarithmic features;
8) The G time-frequency EEG segments obtained from the acquired EEG signal are fed into the time-frequency multi-scale classifier for classification, giving the class label p (p = 1, …, 4) of the EEG signal;
9) The class label p is converted into the corresponding control command to control the external device. When p = 1, the EEG state is judged to be the EEG signal of imagining the left hand and is converted into control command 1; when p = 2, the EEG signal of imagining the right hand, converted into control command 2; when p = 3, the EEG signal of imagining the toes, converted into control command 3; and when p = 4, the EEG signal of imagining the tongue, converted into control command 4.
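The one-to-many decision rule used in steps 6)–8) reduces to an argmax over the K binary SVM decision values. A minimal sketch (the decision values below are made-up numbers, not outputs of trained SVMs):

```python
import numpy as np

def ovr_predict(decision_values):
    """One-to-many rule: each of the K binary SVMs scores the feature
    vector, and the class with the largest decision value wins.
    decision_values: (n_samples, K) array of SVM decision-function outputs."""
    return np.argmax(decision_values, axis=1) + 1   # class labels 1..K

# Two feature vectors scored by K = 4 binary SVMs (illustrative values).
scores = np.array([[0.2, -1.0, 1.3, -0.4],
                   [1.1,  0.3, -0.2, 0.9]])
labels = ovr_predict(scores)
assert labels.tolist() == [3, 1]
```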
Example 2
The brain-computer interface method of the time-frequency multi-scale divergence CSP according to the embodiment 1 is characterized in that:
in the step 4), the one-to-one divergence CSP training is realized by the following steps:
(1) From the set of data samples of a certain time-frequency EEG segment, take the two classes that have not yet been used, X_1* and X_2*, where N* is the number of training-set samples and T_n (n = 1, …, N_t) is the time-domain length of the EEG segment at the n-th scale;
(2) Compute the covariance matrices Σ_1 and Σ_2 of X_1* and X_2*, and compute the whitening matrix P = (Σ_1 + Σ_2)^(−1/2);
(3) Randomly initialize the rotation matrix R;
(4) Compute the whitened and rotated covariance matrices Σ̃_c = R P Σ_c P^T R^T, c = 1, 2;
(5) Select a maximum number of iterations N_e; with n_e denoting the iteration count, start the following loop from n_e = 1:
a) Compute the gradient matrix ∇J(R) of the objective function J(R);
The objective function J (R) is represented by formula (I):
J(R) = −(1 − λ) D_KL(I_d Σ̃_1 I_d^T ‖ I_d Σ̃_2 I_d^T) + λ Σ_c Σ_i D_KL(I_d Σ̃_c^(i) I_d^T ‖ I_d Σ̃_c I_d^T)    (I)
in formula (I), D_KL is the KL divergence function; I_d is the truncation matrix formed by the first d rows of the identity matrix; 0 ≤ λ ≤ 1 is the penalty coefficient; c is the class index and i is the sample index, Σ̃_c^(i) denoting the whitened and rotated covariance matrix of the i-th sample of class c;
b) Let H be the gradient of the objective function at the update rotation matrix U = I, where I is the identity matrix. Search t in (0, 1] according to a decreasing rule so that J(e^{tH} R) ≤ J(R);
c) Let U = e^{tH}, update the rotation matrix R to UR, and update the rotated covariance matrices Σ̃_c to U Σ̃_c U^T;
d) If n_e = N_e, end the iteration, exit the loop, and go to step (6); otherwise increase n_e by 1 and repeat the loop;
(6) Perform eigenvalue decomposition on the matrix (I_d R P) Σ_1 (I_d R P)^T to obtain the corresponding eigenvector matrix B;
(7) Sort the eigenvectors by eigenvalue in descending order, and project the sorted eigenvector matrix B̂ onto the rotation matrix RP to obtain the a-th divergence CSP filter matrix W_a = B̂^T I_d R P;
(8) If a < V, jump to step (1) and continue execution; otherwise, return the one-to-one divergence CSP filters L_p.
d=4,λ=0.1。
In the single-layer artificial neural network of step 4), the input layer contains Q neurons and the output layer contains K neurons, where Q = V × d is the number of features in the total feature vector; the input layer and the output layer are fully connected; the output layer outputs probability values through a softmax function. With the cross-entropy function as the loss function for model training and θ denoting all trainable parameters of the model, the optimization target CE(θ) is given by formula (II):
CE(θ) = −(1/N*) Σ_{i=1}^{N*} Σ_{j=1}^{K} 1{y^(i) = j} log( e^{z_j^(i)} / Σ_{k=1}^{K} e^{z_k^(i)} )    (II)
in formula (II), z_j^(i) is the weighted sum of sample i for the j-th neuron and z_k^(i) is the weighted sum of sample i for the k-th neuron, both depending on θ; y^(i) is the true label of the i-th sample; 1{·} is the indicator function; and N* is the number of samples fed into the model. The weights of the neural network are updated by stochastic gradient descent; the number of iterations is set to E_p = 20.
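A numerically stable evaluation of the softmax cross-entropy of formula (II) can be sketched as follows (illustrative function name and made-up logits; z_j^(i) would in practice be the weighted sums produced by the single-layer network):

```python
import numpy as np

def softmax_cross_entropy(Z, y):
    """CE(theta) of formula (II): Z is an (N, K) array of weighted sums
    z_j^(i); y holds true labels in 1..K. The indicator 1{y^(i)=j}
    selects the true-class log-softmax term for each sample."""
    Z = Z - Z.max(axis=1, keepdims=True)                      # stability shift
    log_softmax = Z - np.log(np.exp(Z).sum(axis=1, keepdims=True))
    N = Z.shape[0]
    return -log_softmax[np.arange(N), y - 1].mean()

Z = np.array([[2.0, 0.5, -1.0, 0.0],
              [0.1, 3.0,  0.2, -0.5]])
y = np.array([1, 2])
loss = softmax_cross_entropy(Z, y)
assert loss > 0.0

# Sanity check: uniform logits over K = 4 classes give CE = log(4).
assert np.isclose(softmax_cross_entropy(np.zeros((3, 4)), np.array([1, 2, 3])),
                  np.log(4.0))
```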
The one-to-many SVM classification in step 7) is as follows: the input features are fed into the K binary SVM models and classified into the class with the largest classification-function value.
Example 3
A device for performing the brain-computer interface method of Embodiment 1 or 2 comprises an EEG amplifier, an A/D converter, and a computer connected in circuit. An EEG detection module for detecting the EEG state is provided in the computer. The EEG signal is collected by the EEG amplifier, passed through A/D conversion, and transmitted to the computer; the EEG detection module applies multi-scale time-frequency segmentation, time-frequency EEG segment selection, one-to-one divergence CSP, and one-to-many SVM to the EEG signal, forming a time-frequency multi-scale classifier that classifies the signal, obtains its class label, and converts it into control commands for the left, right, forward, and backward movement of a wheelchair.
Testing EEG samples from 12 subjects with this method, the four-class average recognition accuracy reached 79.86% when 150 time-frequency EEG segments were selected. The influence of the number of selected time-frequency EEG segments on the 12-subject four-class average recognition accuracy is shown in Fig. 4.

Claims (9)

1. A brain-computer interface method of time-frequency multi-scale divergence CSP is characterized by comprising the following steps:
1) EEG signals generated while an experimenter imagines K classes of movement are collected with an EEG amplifier and an A/D converter and stored in a computer; the sampling frequency is Fs, the EEG length is L, each class is collected N times, and M = N × K samples are collected in total; the EEG signal S_k collected while the experimenter imagines the k-th class of movement carries the class label k; k = 1, …, K;
2) The EEG signal S_k is divided in the time domain at N_t scales; at scale n, S_k is divided, with each segment overlapping the previous one by 50%, into segments of length T_n, n = 1, …, N_t; if a remainder is left after the final segmentation, one additional segment of length T_n containing the remainder is taken as a segment at that scale; the EEG signal S_k can thus be divided into N_st time-frequency EEG segments in the time domain, N_st being given by a formula rendered as an image in the original, in which round() is a rounding function and ceil() is a rounding-up function;
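The 50%-overlap time-domain segmentation of step 2), including the extra remainder-covering segment, can be sketched as follows (illustrative function name; a synthetic signal stands in for the EEG):

```python
import numpy as np

def segment_with_overlap(signal, T_n):
    """Split a 1-D signal into windows of length T_n with 50% overlap;
    if samples remain at the end, add one final window of length T_n
    containing the remainder, as described in step 2)."""
    L = len(signal)
    step = T_n // 2
    starts = list(range(0, L - T_n + 1, step))
    if starts[-1] + T_n < L:          # leftover samples at the tail
        starts.append(L - T_n)        # one extra window covering the remainder
    return [signal[s:s + T_n] for s in starts]

x = np.arange(1000)                   # stand-in for an EEG signal of length L = 1000
segs = segment_with_overlap(x, T_n=300)
assert all(len(s) == 300 for s in segs)
assert segs[-1][-1] == x[-1]          # the tail of the signal is covered
```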
3) For each EEG segment obtained in step 2), division at N_f scales is performed within the frequency band [F_min, F_max] in the frequency domain; 0 < F_min < F_max < Fs, F_min being the lower and F_max the upper frequency limit of the band; at scale m, the EEG segment is divided, with each frequency band overlapping the previous one by 50%, into bands of bandwidth D_m, m = 1, …, N_f; if a residual band remains within [F_min, F_max] after the division, it is not used; each EEG segment obtained in step 2) is thus divided into N_sf time-frequency EEG segments of bandwidth D_m; N_f = floor(log2(F_max − F_min + 1) − 1) and D_m = 1 + 2^(m+1), with N_sf given by a formula rendered as an image in the original, where floor() is a rounding-down function;
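A worked example of the frequency-scale formulas of step 3), under an assumed band of 4–40 Hz (the patent does not fix F_min and F_max):

```python
import math

# N_f = floor(log2(F_max - F_min + 1) - 1) frequency scales, each with
# bandwidth D_m = 1 + 2^(m+1) and 50% overlap between adjacent bands.
F_min, F_max = 4, 40                    # assumed band limits in Hz

N_f = math.floor(math.log2(F_max - F_min + 1) - 1)
widths = [1 + 2 ** (m + 1) for m in range(1, N_f + 1)]

assert N_f == 4                         # floor(log2(37) - 1) = floor(4.21) = 4
assert widths == [5, 9, 17, 33]         # bandwidths in Hz for m = 1..4
```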
4) Wrapper-based time-frequency EEG segment selection is performed on the time-frequency EEG segments obtained in steps 2) and 3), comprising:
a) Let the total number of time-frequency EEG segments be E = N_st × N_sf; for each time-frequency EEG segment P_s, the corresponding set of N_p samples is used for one-to-one divergence CSP training to obtain V divergence CSP filters W_s, where Ch is the number of EEG channels and d is the number of rows retained in the divergence CSP algorithm; each time-frequency EEG segment P_s is projected onto the V divergence CSP filters for one-to-one divergence CSP filtering, giving the filtered time-frequency EEG segments Z_ws, w = 1, …, V;
b) Logarithmic feature extraction is performed on each row of Z_ws to obtain the feature-value vector F_ws = [f_1, …, f_d], where f_i = log(var(z_i)) is the i-th feature value of F_ws, z_i is the i-th row of Z_ws, and var(·) is the variance function; the V feature-value vectors F_ws of each time-frequency EEG segment are concatenated into a total feature vector, Q = V × d being the number of features it contains;
c) For each time-frequency EEG segment, the set of N_p samples of total feature vectors is used as a training set and the remaining M − N_p samples as a validation set; E single-layer artificial neural networks with K-class outputs are trained and tested on the corresponding validation sets, giving E accuracies, which are concatenated into the accuracy vector A = [A_1, …, A_E], A_j being the j-th accuracy;
d) The accuracy vector A is sorted in descending order, and the time-frequency EEG segments corresponding to the first G accuracy values are taken as the selected time-frequency EEG segments;
5) For each of the G time-frequency EEG segments obtained in step 4), the corresponding set of M samples of the segment P_a is used for one-to-one divergence CSP training to obtain V divergence CSP filter matrices W_a; each time-frequency EEG segment P_a is projected onto the V divergence CSP filters for one-to-one divergence CSP filtering, giving the filtered time-frequency EEG segments Z_wa; logarithmic feature extraction is performed on each row of Z_wa to obtain the feature-value vector F_wa = [f_1, …, f_d], where f_i = log(var(z_i)) is the i-th feature value of F_wa, z_i is the i-th row of Z_wa, and var(·) is the variance function; the feature vectors obtained by the V one-to-one divergence CSPs for the G time-frequency EEG segments of the M samples are concatenated into a total feature set, H = V × d × G being the number of features per sample;
6) The total feature set obtained in step 5) and the corresponding class labels are fed into a one-to-many SVM for training; that is, the samples of each class versus the remaining K − 1 classes are fed into a binary SVM for training, giving K SVM classifiers;
7) The V × G divergence CSP filters obtained in step 5) and the K SVM models obtained in step 6) form a time-frequency multi-scale classifier; the classifier first performs one-to-one divergence CSP filtering on the input time-frequency EEG segments, extracts logarithmic features, and then performs one-to-many SVM classification on the obtained logarithmic features;
8) The G time-frequency EEG segments obtained from the acquired EEG signal are fed into the time-frequency multi-scale classifier for classification, giving the class label p of the EEG signal, p = 1, …, K;
9) The class label p is converted into the corresponding control command to control the external device.
2. The method for brain-computer interface of time-frequency multi-scale divergence CSP according to claim 1, wherein the frequency-band division of step 3) is performed by band-pass filtering the EEG segment with a J-order Butterworth filter.
3. The method for brain-computer interface of time-frequency multi-scale divergence (CSP) according to claim 2, wherein J =7.
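The band-pass filtering of claims 2–3 can be illustrated with a J = 7 order Butterworth design in SciPy (a sketch, assuming `scipy.signal` is available; the sampling rate, band limits, and test signal are made up):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(segment, f_lo, f_hi, fs, order=7):
    """Band-pass a (channels x samples) EEG segment with a J = 7 order
    Butterworth filter, as in claims 2-3 (zero-phase via sosfiltfilt)."""
    sos = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, segment, axis=1)

fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
# One channel: a 10 Hz target rhythm plus 50 Hz interference.
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)
y = bandpass(x[np.newaxis, :], 8.0, 13.0, fs)

# The 50 Hz component is strongly attenuated by the 8-13 Hz passband.
assert y.shape == (1, x.size)
assert np.std(y) < np.std(x)
```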
4. The brain-computer interface method of the time-frequency multi-scale divergence CSP according to claim 1, wherein in the step 4), the one-to-one divergence CSP training is realized by the steps of:
(1) From the set of data samples of a certain time-frequency EEG segment, take the two classes that have not yet been used, X_1* and X_2*, where N* is the number of training-set samples and T_n (n = 1, …, N_t) is the time-domain length of the EEG segment at the n-th scale;
(2) Compute the covariance matrices Σ_1 and Σ_2 of X_1* and X_2*, and compute the whitening matrix P = (Σ_1 + Σ_2)^(−1/2);
(3) Randomly initialize the rotation matrix R;
(4) Compute the whitened and rotated covariance matrices Σ̃_c = R P Σ_c P^T R^T, c = 1, 2;
(5) Select a maximum number of iterations N_e; with n_e denoting the iteration count, start the following loop from n_e = 1:
e) Compute the gradient matrix ∇J(R) of the objective function J(R);
The objective function J (R) is represented by formula (I):
J(R) = −(1 − λ) D_KL(I_d Σ̃_1 I_d^T ‖ I_d Σ̃_2 I_d^T) + λ Σ_c Σ_i D_KL(I_d Σ̃_c^(i) I_d^T ‖ I_d Σ̃_c I_d^T)    (I)
in formula (I), D_KL is the KL divergence function; I_d is the truncation matrix formed by the first d rows of the identity matrix; 0 ≤ λ ≤ 1 is the penalty coefficient; c is the class index and i is the sample index, Σ̃_c^(i) denoting the whitened and rotated covariance matrix of the i-th sample of class c;
f) Let H be the gradient of the objective function at the update rotation matrix U = I, I being the identity matrix; search t in (0, 1] according to a decreasing rule so that J(e^{tH} R) ≤ J(R);
g) Let U = e^{tH}, update the rotation matrix R to UR, and update the rotated covariance matrices Σ̃_c to U Σ̃_c U^T;
h) If n_e = N_e, end the iteration, exit the loop, and go to step (6); otherwise increase n_e by 1 and return to step e);
(6) Perform eigenvalue decomposition on the matrix (I_d R P) Σ_1 (I_d R P)^T to obtain the corresponding eigenvector matrix B;
(7) Sort the eigenvectors by eigenvalue in descending order, and project the sorted eigenvector matrix B̂ onto the rotation matrix RP to obtain the a-th divergence CSP filter matrix W_a = B̂^T I_d R P;
(8) If a < V, jump to step (1) and continue execution; otherwise, return the one-to-one divergence CSP filters L_p.
5. The method for brain-computer interface of time-frequency multi-scale divergence CSP according to claim 4, wherein d = 4 and λ = 0.1.
6. The method for brain-computer interface of time-frequency multi-scale divergence CSP according to claim 1, wherein in the single-layer artificial neural network of step 4), the input layer contains Q neurons and the output layer contains K neurons, Q = V × d being the number of features in the total feature vector; the input layer and the output layer are fully connected; the output layer outputs probability values through a softmax function; with the cross-entropy function as the loss function for model training and θ denoting all trainable parameters of the model, the optimization target CE(θ) is given by formula (II):
CE(θ) = −(1/N*) Σ_{i=1}^{N*} Σ_{j=1}^{K} 1{y^(i) = j} log( e^{z_j^(i)} / Σ_{k=1}^{K} e^{z_k^(i)} )    (II)
in formula (II), z_j^(i) is the weighted sum of sample i for the j-th neuron and z_k^(i) is the weighted sum of sample i for the k-th neuron; y^(i) is the true label of the i-th sample; 1{·} is the indicator function; N* is the number of samples fed into the model; the weights of the single-layer artificial neural network are updated by stochastic gradient descent; the number of iterations is set to E_p.
7. The method for brain-computer interface of time-frequency multi-scale divergence CSP according to claim 6, wherein E_p = 20.
8. The method for brain-computer interface of time-frequency multi-scale divergence CSP according to claim 1, wherein the one-to-many SVM classification of step 7) is: the input features are fed into the K binary SVM models and classified into the class with the largest classification-function value.
9. A device for performing brain-computer interface by using the brain-computer interface method of the time-frequency multi-scale divergence CSP is characterized by comprising an electroencephalogram amplifier, an A/D converter and a computer which are sequentially connected by a circuit, wherein the computer is internally provided with an electroencephalogram detection module for detecting the electroencephalogram state, the electroencephalogram signal is collected by the electroencephalogram amplifier and the A/D converter and then transmitted to the computer, the electroencephalogram signal is subjected to multi-scale time-frequency segmentation, time-frequency electroencephalogram segment selection, one-to-one divergence CSP and one-to-many SVM by using the electroencephalogram detection module, a time-frequency multi-scale classifier is formed for classifying the electroencephalogram signal, and a sample prediction label is obtained and converted into a control command for external equipment.
CN201910609453.7A 2019-07-08 2019-07-08 Time-frequency multi-scale divergence CSP brain-computer interface method and device Active CN110321856B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910609453.7A CN110321856B (en) 2019-07-08 2019-07-08 Time-frequency multi-scale divergence CSP brain-computer interface method and device

Publications (2)

Publication Number Publication Date
CN110321856A CN110321856A (en) 2019-10-11
CN110321856B true CN110321856B (en) 2023-01-10

Family

ID=68123076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910609453.7A Active CN110321856B (en) 2019-07-08 2019-07-08 Time-frequency multi-scale divergence CSP brain-computer interface method and device

Country Status (1)

Country Link
CN (1) CN110321856B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111000557B (en) * 2019-12-06 2022-04-15 天津大学 Noninvasive electroencephalogram signal analysis system applied to decompression skull operation
CN113967022B (en) * 2021-11-16 2023-10-31 常州大学 Individual self-adaption-based motor imagery electroencephalogram characteristic characterization method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012153965A2 (en) * 2011-05-09 2012-11-15 광주과학기술원 Brain-computer interface device and classification method therefor
CN103735262A (en) * 2013-09-22 2014-04-23 杭州电子科技大学 Dual-tree complex wavelet and common spatial pattern combined electroencephalogram characteristic extraction method
CN105654063A (en) * 2016-01-08 2016-06-08 东南大学 Motor imagery EEG pattern recognition method based on time-frequency parameter optimization of artificial bee colony
CN109472194A (en) * 2018-09-26 2019-03-15 重庆邮电大学 A kind of Mental imagery EEG signals characteristic recognition method based on CBLSTM algorithm model
CN109858537A (en) * 2019-01-22 2019-06-07 南京邮电大学 EEG feature extraction method of the improved EEMD in conjunction with CSP

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Toward optimal feature and time segment selection by divergence method for EEG signals classification; Jie Wang et al.; Computers in Biology and Medicine; 2018-06-01; Vol. 97; pp. 161-170 *
Motor imagery EEG classification algorithm based on probabilistic collaborative representation; Cui Lixia et al.; Shandong Science; 2018-04-15; Vol. 31, No. 02; pp. 105-112 *
