CN112733727A - Electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion - Google Patents
Electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion
- Publication number
- CN112733727A (application CN202110037508.9A)
- Authority
- CN
- China
- Prior art keywords
- qda
- rda
- feature
- matrix
- decision
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F2218/00: Aspects of pattern recognition specially adapted for signal processing; G06F2218/08: Feature extraction
- G06F18/00: Pattern recognition; G06F18/20: Analysing; G06F18/25: Fusion techniques
- G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data; G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06F2218/00: Aspects of pattern recognition specially adapted for signal processing; G06F2218/12: Classification; Matching
- G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data; G06V40/10: Human or animal bodies; G06V40/15: Biometric patterns based on physiological signals, e.g. heartbeat, blood flow
Abstract
The invention provides an electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion. An electroencephalogram signal data set X = (X1, X2, …, Xn) is acquired through a brain wave induction helmet, where n is a positive integer; the signal data set X = (X1, X2, …, Xn) is classified with regularized discriminant analysis RDA and quadratic discriminant analysis QDA to obtain the correlation coefficient matrices ρ_RDA and ρ_QDA. Meanwhile, a feature decision fusion device comprising a feature extraction unit, a projection classification unit and a decision selection unit is constructed to perform feature integration and decision selection on the decisions and coefficients of RDA and QDA, so that better classification accuracy is obtained. By constructing the feature decision fusion device, the invention integrates the two algorithms, selects the decision that is more likely to be correct, and achieves better classification accuracy on motor imagery data.
Description
Technical Field
The invention belongs to the field of electroencephalogram dynamic analysis, and particularly relates to an electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion.
Background
MI (motor imagery) is an EEG phenomenon: when a person performs motor imagery, specific functional areas of the brain are activated and the corresponding EEG signals exhibit stable and regular characteristic changes, which is the physiological basis for using motor imagery EEG as the input signal of a BCI system. To decode the subject's intention from MI signals, various methods have been proposed to identify and classify them, such as linear discriminant analysis (LDA), the Gaussian classifier and the probabilistic neural network. In the LDA method, a new sample is classified by projecting it onto the same straight line as the training samples and determining its class from the position of the projected point. Because the analytic solution based on the generalized eigenvalue problem can be computed directly, LDA avoids the local-minimum problem frequently encountered when training general nonlinear algorithms and does not require the output classes of the patterns to be artificially encoded; its advantage is particularly obvious when handling unbalanced pattern classes. Compared with neural-network methods, LDA requires no parameter tuning, so there are no issues of learning parameters, optimizing weights or selecting neuron activation functions; it is also insensitive to normalization or randomization of the patterns, which is a weakness of many gradient-descent-based algorithms. LDA is a classic linear learning method, first proposed by Fisher in 1936 for the two-class problem, and is therefore also called Fisher linear discriminant analysis. The idea of linear discrimination is simple: given a training sample set, project the samples onto a straight line such that the projections of samples of the same class are as close as possible while the projections of samples of different classes are as far apart as possible; a new sample is then projected onto the same line and its class is determined from the position of its projection. QDA is a variant of LDA in which a separate covariance matrix is estimated for each class of observations; it is particularly useful when the classes are known in advance to exhibit different covariances. Its disadvantage is that it cannot be used as a dimensionality-reduction technique, and because it estimates a covariance matrix for each class it has considerably more parameters than LDA. RDA is a compromise between LDA and QDA and, being a regularization technique, is better suited to situations with many potentially relevant features. Because these methods produce different decisions, fusing several of them and integrating their decisions is a feasible way to improve the overall classification accuracy.
Disclosure of Invention
The invention aims to provide an electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion.
The electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion of the invention comprises the following steps:
S1, acquiring an electroencephalogram signal data set X = (X1, X2, …, Xn) through a brain wave induction helmet;
S2, classifying the signal data set X = (X1, X2, …, Xn) using regularized discriminant analysis RDA; the resulting correlation coefficient matrix ρn is defined as ρ_RDA;
S3, classifying the signal data set X = (X1, X2, …, Xn) using quadratic discriminant analysis QDA; the resulting correlation coefficient matrix ρn is defined as ρ_QDA;
S4, constructing a feature decision fusion device to perform feature integration and decision selection on the decisions and coefficients of RDA and QDA, thereby obtaining the electroencephalogram consciousness dynamic classification; the specific steps are as follows:
S41, constructing a feature decision fusion device comprising a feature extraction unit, a projection classification unit and a decision selection unit;
S42, performing feature extraction on the correlation coefficients of RDA and QDA through the feature extraction unit to generate a feature vector F;
the correlation coefficient ρ is computed as

ρ = E[wX^T·X·Y^T·wY] / √( E[wX^T·X·X^T·wX] · E[wY^T·Y·Y^T·wY] )

wherein wX is the projection in the x direction and wX^T its transpose, wY is the projection in the y direction and wY^T its transpose, X is the matrix on the abscissa and X^T its transpose, Y is the matrix on the ordinate and Y^T its transpose, and E[wX^T·X·Y^T·wY], E[wX^T·X·X^T·wX] and E[wY^T·Y·Y^T·wY] denote the expectations of the corresponding quadratic forms;
the maximum correlation coefficient and the second-largest correlation coefficient of RDA and of QDA are obtained respectively;
a feature vector F is generated from the obtained maximum and second-largest correlation coefficients of the RDA and QDA algorithms:

F = [ρ_QDA(1), ρ_QDA(2), ρ_RDA(1), ρ_RDA(2)]

wherein ρ_QDA(1) is the maximum correlation coefficient of QDA, ρ_QDA(2) is the second-largest correlation coefficient of QDA, ρ_RDA(1) is the maximum correlation coefficient of RDA, and ρ_RDA(2) is the second-largest correlation coefficient of RDA;
S43, dividing the feature vector F into the two categories RDA-false and QDA-false through the projection classification unit;
the projection classification unit uses a linear SVM classifier to perform the projection within the range of the soft-margin objective function: the feature vector F is projected to a scalar value, and the scalar values projected onto the plane form individual points, expressed as

f(F) = Σ_{j=1}^{N} aj·yj·K(vj, F) + b

wherein vj (j = 1, 2, …, N) are the support vectors, used to determine the maximum-margin plane of the classifier, aj are adjustable parameters, yj is the class label of the j-th support vector, F is the feature vector, b is the bias, and K(vj, F) = vj^T·F is a linear kernel function;
the soft-margin objective function is

min (1/2)·||W||² + C·Σ_j δj,  subject to  yj·(W^T·vj + b) ≥ 1 − δj,  δj ≥ 0

wherein δj is a slack variable indicating whether sample vj lies within the margin and how much adjustment it requires, and C is an adjustment coefficient controlling the trade-off between margin width and misclassification;
the soft-margin objective function is solved to maximize the margin 2/||W||, which serves as the boundary line, and the features in the feature vector F are divided into the two categories RDA-false and QDA-false according to the positions of the projected scalar values;
S44, selecting the decision output of the RDA or QDA algorithm according to the classification result through the decision selection unit;
the decision selection unit works as follows:
if an RDA-false result is obtained, the module outputs the QDA decision; otherwise, the module outputs the RDA decision, thereby obtaining an electroencephalogram consciousness dynamic classification with high accuracy.
Preferably, in step S1, an Emotiv brain wave induction helmet is adopted to collect the electroencephalogram signals.
Preferably, in step S2, ρ_RDA is solved as follows:
the data set X = (X1, X2, …, Xn) is assigned to one of K classes; in the training data the class of each sample is known, so the prior probability and the mean of class k are respectively

π_k = ω_k / ω,   μ_k = (1/ω_k)·Σ_{Xn ∈ class k} Xn

wherein ω is the total number of samples, ω_k is the number of samples of class k, Xn is a sample point, and μ_k is the mean of class k;
regularized discriminant analysis RDA reduces the influence of multicollinearity by modifying the singular covariance values; the sample covariance estimate of each class is

Σ̂_k = (1/ω_k)·Σ_{Xn ∈ class k} (Xn − μ_k)(Xn − μ_k)^T

the covariance matrix is regularized with the parameter λ and further adjusted by introducing a shrinkage parameter γ:

Σ_k(λ, γ) = (1 − γ)·[(1 − λ)·Σ̂_k + λ·Σ̂] + (γ/p)·tr[(1 − λ)·Σ̂_k + λ·Σ̂]·I

wherein λ is the regularization parameter, 0 ≤ λ ≤ 1, Σ̂ is the pooled covariance estimate over all classes, p is the dimension of the variables, I is the identity matrix, and γ is the shrinkage parameter.
The optimization target is J (W):
the above formula is SkAnd a generalized Rayleigh quotient of S, wherein Sk=ωk∑k,This is the goal of maximizing QDA, the maximum value of J being the matrixIs the maximum eigenvalue ofThe feature vector corresponding to the maximum feature value of (1). Solved to obtainI.e. the determined optimal projection direction, WTIs transposedAnd projecting, namely projecting the samples in the training set to the w direction to obtain:
y = w^T·X (6)
Preferably, in step S3, ρ_QDA is solved as follows:
let the sample data set X = (X1, X2, …, Xn) obey a multivariate Gaussian distribution, with μi the mean vector of class i;
the covariance matrix Σj of each class is calculated as

Σj = (1/ωj)·Σ_{Xn ∈ class j} (Xn − μj)(Xn − μj)^T

wherein j = 1, 2;
the within-class divergence matrix Sw is then obtained as

Sw = Σ1 + Σ2

and the between-class divergence matrix Sb is defined as

Sb = (μ1 − μ2)(μ1 − μ2)^T (12)

the optimization target is J(w):

J(w) = (w^T·Sb·w) / (w^T·Sw·w)

the above formula is the generalized Rayleigh quotient of Sw and Sb, which is the objective maximized by QDA; the maximum value of J is the largest eigenvalue of the matrix Sw^{-1}·Sb, attained at the eigenvector corresponding to that largest eigenvalue. Solving gives w = Sw^{-1}(μ1 − μ2), i.e. the determined optimal projection direction; projecting the samples of the training set onto the direction w gives:
y = w^T·X (14)
The invention has the following beneficial effects:
1. the decision schemes of two different methods are integrated, which overcomes problems such as the low accuracy and poor adaptability of a single decision;
2. fusing the decisions of two algorithms is an effective way to improve the overall performance; based on this idea, two LDA-based algorithms are integrated, thereby improving the classification accuracy of the electroencephalogram signals.
Drawings
FIG. 1 is a schematic diagram of the electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion of the invention;
FIG. 2 is a schematic diagram of the testing and training of the decision fusion of the invention;
FIG. 3 is the general technical roadmap of the invention.
Detailed Description
Embodiments of the present invention will be described below with reference to FIGS. 1 to 3.
The invention relates to an electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion; the general flow chart is shown in FIG. 3, and the steps are as follows:
S1, acquiring an electroencephalogram signal data set X = (X1, X2, …, Xn) through a brain wave induction helmet, wherein n is a positive integer;
A dynamic task model is implemented in a virtual environment: the subject indirectly controls a ball by applying force to a bowl, and the ball may escape from the bowl. The test is carried out in a room with good sound insulation. The experimental equipment is an Emotiv helmet that collects 14-channel (AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8 and AF4) electroencephalogram signals from the subject; the electrodes are placed according to the international 10-20 standard, and the sampling frequency is 128 Hz. The test data are transmitted to the computer through a USB interface. A total of 10 healthy participants (6 men, 4 women) were enrolled in the trial; the exclusion criteria were visual, neurological or psychiatric illness or any current medication, and all subjects read and signed an informed consent form.
During the test, the subject has to complete a boundary avoidance task, a demanding sensorimotor task: the subject must steer a virtual bowl carrying a ball within a specified range. If the bowl leaves the frame on the right side and the ball never spills out of the bowl during the whole process, the task succeeds; if the bowl leaves the frame on the left side, or the ball spills out of the bowl during the process, the task fails. The initial position of the bowl and ball is on the left. According to the characteristics of the task, the following two groups of data are extracted for analysis and processing. Suppose a force is applied to the bowl in the leftward direction for a period of time: before the force is applied the bowl moves to the right and the ball moves to the left relative to the bowl; due to inertia, the bowl does not move to the left until some time has passed. The data of the first 1 s and of the last 1 s of this period are therefore extracted, and vice versa for a rightward force. The whole experiment comprises 120 trials, 60 for each of the left and right hands, so that one set of 120 observed measurements X = (X1, X2, …, X120) is obtained for each subject.
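For illustration, a minimal sketch of how the acquired recordings could be organised into the data set X = (X1, X2, …, Xn) is given below. Only the channel list, the 128 Hz sampling rate and the 1 s windows follow the description above; the array layout and helper names are assumptions, not part of the patent.

```python
import numpy as np

CHANNELS = ["AF3", "F7", "F3", "FC5", "T7", "P7", "O1",
            "O2", "P8", "T8", "FC6", "F4", "F8", "AF4"]
FS = 128          # sampling frequency in Hz
WIN = 1 * FS      # one 1 s window = 128 samples

def extract_trial(raw, onset, before=True):
    """Cut the 1 s segment immediately before (or after) a force-direction change.

    raw   : ndarray of shape (14, n_samples), one continuous Emotiv recording
    onset : sample index at which the applied force changes direction
    """
    start = onset - WIN if before else onset
    return raw[:, start:start + WIN]          # shape (14, 128)

# Per subject: X has shape (120, 14, 128) for the 120 trials,
# and a label vector y holds the classes (e.g. 0 = left hand, 1 = right hand).
```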
S2, classifying the signal data set X = (X1, X2, …, Xn) using regularized discriminant analysis RDA; the resulting correlation coefficient matrix ρn is defined as ρ_RDA;
Both LDA and QDA are boundary discrimination methods that aim to find the boundaries separating groups or classes of samples. The boundary divides the space into regions, each assigned to a different group or class, and its form depends on the type of classifier: LDA yields a linear boundary, in which a straight line or hyperplane divides the variable space into regions, whereas QDA yields a quadratic boundary, in which a quadric surface divides the variable space into regions. LDA assumes a single covariance matrix shared by all classes, while QDA assumes a different covariance matrix for each class. QDA can therefore distinguish classes with significantly different class-specific covariance matrices, forming a separate covariance model for each class, each class group being modelled as a multivariate normal distribution. RDA is a compromise between LDA and QDA and, being a regularization technique, is better suited to situations with many potentially relevant features. The data set X = (X1, X2, …, Xn) is assigned to one of K classes. In the training data the class of each sample is known, so the prior probability and the mean of class k are

π_k = ω_k / ω,   μ_k = (1/ω_k)·Σ_{Xn ∈ class k} Xn

wherein ω is the total number of samples, ω_k is the number of samples of class k, Xn is a sample point, and μ_k is the mean of class k.
Regularized discriminant analysis RDA reduces the influence of multicollinearity by modifying the singular covariance values. The sample covariance estimate of each class is

Σ̂_k = (1/ω_k)·Σ_{Xn ∈ class k} (Xn − μ_k)(Xn − μ_k)^T

The covariance matrix is regularized with the parameter λ and further adjusted by introducing a shrinkage parameter γ:

Σ_k(λ, γ) = (1 − γ)·[(1 − λ)·Σ̂_k + λ·Σ̂] + (γ/p)·tr[(1 − λ)·Σ̂_k + λ·Σ̂]·I

wherein λ is the regularization parameter, 0 ≤ λ ≤ 1, Σ̂ is the pooled covariance estimate over all classes, p is the dimension of the variables, I is the identity matrix, and γ is the shrinkage parameter.
The optimization target is J(w):

J(w) = (w^T·S·w) / (w^T·S_k·w)

The above formula is the generalized Rayleigh quotient of S_k and S, wherein S_k = ω_k·Σ_k and S is the between-class scatter matrix; this is the objective to be maximized. The maximum value of J is the largest eigenvalue of the matrix S_k^{-1}·S, attained at the eigenvector corresponding to that largest eigenvalue. Solving gives w = S_k^{-1}(μ1 − μ2), i.e. the determined optimal projection direction, with w^T its transpose. Projecting the samples of the training set onto the direction w gives:
y = w^T·X (6)
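A minimal numerical sketch of this RDA step is given below, assuming the shrinkage form of the regularised covariance reconstructed above; the function names, parameter names and default λ, γ values are illustrative assumptions, not the patent's reference implementation.

```python
import numpy as np

def rda_covariance(Xk, X_all, lam=0.5, gamma=0.1):
    """Regularised covariance of one class.

    Xk    : samples of class k, shape (n_k, p)
    X_all : all training samples, shape (n, p)
    """
    p = Xk.shape[1]
    sigma_k = np.cov(Xk, rowvar=False, bias=True)      # per-class covariance
    sigma   = np.cov(X_all, rowvar=False, bias=True)   # pooled covariance
    mixed   = (1 - lam) * sigma_k + lam * sigma         # λ-regularisation
    return (1 - gamma) * mixed + (gamma / p) * np.trace(mixed) * np.eye(p)

def rda_projection(X1, X2, lam=0.5, gamma=0.1):
    """Projection direction w = Sk^{-1}(μ1 - μ2), with Sk = ω_k·Σ_k as above."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    Sk = len(X1) * rda_covariance(X1, np.vstack([X1, X2]), lam, gamma)
    return np.linalg.solve(Sk, mu1 - mu2)               # optimal direction w
```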
S3, classifying the signal data set X = (X1, X2, …, Xn) using quadratic discriminant analysis QDA; the resulting correlation coefficient matrix ρn is defined as ρ_QDA;
Quadratic discriminant analysis QDA aims to find a transformation of the input features that best distinguishes the classes in the data set. The signal data set X = (X1, X2, …, Xn) is classified using QDA, and the resulting correlation coefficient matrix ρn is defined as ρ_QDA.
Let the sample data set X = (X1, X2, …, Xn) obey a multivariate Gaussian distribution, with μi the mean vector of class i.
The covariance matrix Σj of each class is calculated as

Σj = (1/ωj)·Σ_{Xn ∈ class j} (Xn − μj)(Xn − μj)^T

wherein j = 1, 2.
The within-class divergence matrix Sw is then obtained as

Sw = Σ1 + Σ2

and the between-class divergence matrix Sb is defined as

Sb = (μ1 − μ2)(μ1 − μ2)^T (12)

The optimization target is J(w):

J(w) = (w^T·Sb·w) / (w^T·Sw·w)

The above formula is the generalized Rayleigh quotient of Sw and Sb, which is the objective maximized by QDA. The maximum value of J is the largest eigenvalue of the matrix Sw^{-1}·Sb, attained at the eigenvector corresponding to that largest eigenvalue. Solving gives w = Sw^{-1}(μ1 − μ2), i.e. the determined optimal projection direction. Projecting the samples of the training set onto the direction w gives
y = w^T·X (14)
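The Fisher-style projection used in this step can be sketched as follows; the scatter matrices follow the definitions above, while the function names are illustrative.

```python
import numpy as np

def fisher_direction(X1, X2):
    """X1, X2 : samples of the two classes, each of shape (n_i, p)."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = np.cov(X1, rowvar=False, bias=True)
    S2 = np.cov(X2, rowvar=False, bias=True)
    Sw = S1 + S2                                   # within-class divergence matrix
    return np.linalg.solve(Sw, mu1 - mu2)          # w = Sw^{-1}(μ1 - μ2)

def project(w, X):
    """y = w^T X for every sample (rows of X)."""
    return X @ w
```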
S4, constructing a feature decision fusion device to perform feature integration and decision selection on the decisions and coefficients of RDA and QDA, thereby obtaining the electroencephalogram consciousness dynamic classification; the specific steps are as follows:
S41, constructing a feature decision fusion device comprising a feature extraction unit, a projection classification unit and a decision selection unit;
S42, performing feature extraction on the correlation coefficients of RDA and QDA through the feature extraction unit to generate a feature vector F.
The larger the value of E[wX^T·X·Y^T·wY], the larger ρ; the maximum correlation coefficient and the second-largest correlation coefficient of RDA and of QDA are obtained respectively according to the following expression:

ρ = E[wX^T·X·Y^T·wY] / √( E[wX^T·X·X^T·wX] · E[wY^T·Y·Y^T·wY] )

wherein wX is the projection in the x direction and wX^T its transpose, wY is the projection in the y direction and wY^T its transpose, X is the matrix on the abscissa and X^T its transpose, Y is the matrix on the ordinate and Y^T its transpose, and E[wX^T·X·Y^T·wY], E[wX^T·X·X^T·wX] and E[wY^T·Y·Y^T·wY] denote the expectations of the corresponding quadratic forms.
A feature vector F is generated from the obtained maximum and second-largest correlation coefficients of the RDA and QDA algorithms.
After the correlation coefficients of the two methods are extracted, a matrix of the following form is obtained:

F = [ρ_QDA(1), ρ_QDA(2), ρ_RDA(1), ρ_RDA(2)]

wherein ρ_QDA(1) is the maximum correlation coefficient of QDA, ρ_QDA(2) is the second-largest correlation coefficient of QDA, ρ_RDA(1) is the maximum correlation coefficient of RDA, and ρ_RDA(2) is the second-largest correlation coefficient of RDA.
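A sketch of the feature extraction unit is given below: the largest and second-largest correlation coefficients of QDA and RDA are stacked into the fusion feature vector F. The ordering inside F and the function name are assumptions for illustration.

```python
import numpy as np

def fusion_feature(rho_qda, rho_rda):
    """rho_qda, rho_rda : 1-D arrays of correlation coefficients from QDA and RDA."""
    q = np.sort(rho_qda)[::-1][:2]      # maximum and second-largest of QDA
    r = np.sort(rho_rda)[::-1][:2]      # maximum and second-largest of RDA
    return np.concatenate([q, r])       # F = [ρ_QDA(1), ρ_QDA(2), ρ_RDA(1), ρ_RDA(2)]
```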
S43, dividing the feature vector F into the two categories RDA-false and QDA-false through the projection classification unit.
According to the classification results of the RDA and QDA algorithms, all training trials can be divided into four classes: both-true, both-false, RDA-false and QDA-false. In a both-true trial, both algorithms make the same, correct decision. In a both-false trial, the decisions of both methods are inconsistent with the subject's intention, so a decision fusion method that selects one of the two erroneous decisions will also give an erroneous result. Therefore only the RDA-false and QDA-false trials (in which exactly one of the RDA and QDA decisions is correct) are selected to train the decision fusion method.
The projection classification unit uses a linear SVM classifier to perform the projection within the range of the soft-margin objective function: the feature vector F is projected to a scalar value, and the scalar values projected onto the plane form individual points, expressed as

f(F) = Σ_{j=1}^{N} aj·yj·K(vj, F) + b

wherein vj (j = 1, 2, …, N) are the support vectors, used to determine the maximum-margin plane of the classifier, aj > 0 are adjustable parameters, yj is the class label of the j-th support vector, F is the feature vector, b is the bias, and K(vj, F) = vj^T·F is a linear kernel function.
The soft-margin objective function is

min (1/2)·||W||² + C·Σ_j δj,  subject to  yj·(W^T·vj + b) ≥ 1 − δj,  δj ≥ 0

wherein δj is a slack variable indicating whether sample vj lies within the margin and how much adjustment it requires, and C is an adjustment coefficient controlling the trade-off between margin width and misclassification.
The soft-margin objective function is solved to maximize the margin 2/||W||, which serves as the boundary line; the features in the feature vector F are divided into the two categories RDA-false and QDA-false according to the positions of the projected scalar values, with one side of the boundary line belonging to the RDA-false class and the other side to the QDA-false class.
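A sketch of the projection classification unit is given below, assuming a scikit-learn soft-margin linear SVM trained only on RDA-false and QDA-false trials; the library choice and names are illustrative, and C plays the role of the trade-off coefficient above.

```python
import numpy as np
from sklearn.svm import SVC

def train_projection_classifier(F_train, labels, C=1.0):
    """F_train: (n_trials, 4) fusion features; labels: 0 = RDA-false, 1 = QDA-false."""
    clf = SVC(kernel="linear", C=C)    # soft-margin linear SVM
    clf.fit(F_train, labels)
    return clf

def project_and_classify(clf, F):
    """Project one fusion feature vector F to a scalar and assign a category."""
    F = np.asarray(F).reshape(1, -1)
    score = clf.decision_function(F)[0]                       # scalar projection of F
    label = "RDA-false" if clf.predict(F)[0] == 0 else "QDA-false"
    return label, score
```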
S44, selecting the decision output of the RDA or QDA algorithm according to the classification result through the decision selection unit.
As shown in formula (20), the RDA-false and QDA-false trials (in which exactly one of the RDA and QDA decisions is correct) are selected to train the decision fusion method; if an RDA-false result is obtained, the module outputs the QDA decision, otherwise the module outputs the RDA decision.
The feature decision fusion device mainly comprises a feature extraction unit, a projection classification unit and a decision selection unit. The decisions and correlation coefficients of the RDA and QDA algorithms are input to the feature decision fusion device, which selects and outputs the decision that is more likely to be correct.
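The decision selection unit itself reduces to a simple rule, sketched below with illustrative names; it consumes the category predicted by the classifier sketched above.

```python
def select_decision(predicted_class, rda_decision, qda_decision):
    """If RDA is predicted to be wrong on this trial, output the QDA decision; otherwise RDA."""
    return qda_decision if predicted_class == "RDA-false" else rda_decision
```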
The effect of the method of the invention is compared and verified as follows:
In the testing and training phase of the decision fusion, performance is estimated with a cross-validation in which 5 blocks of the data set are selected for training the decision fusion and 1 block is selected for testing. During the training phase, another cross-validation is used to extract the RDA-false and QDA-false features: the RDA and QDA algorithms are trained on 4 blocks of the training data set and classify the remaining block in each round. According to the classification results, the decision fusion features F of the RDA-false and QDA-false trials are extracted and recorded over the 5 rounds of each training period, i.e. 200 trials in total, and the recorded RDA-false and QDA-false features are used to train the decision fusion method. With the proposed decision fusion, 5 LDA-based classification algorithms (LDA, QDA, RDA, nearest mean and weighted nearest mean) are integrated. The integrated performance is evaluated by estimating the classification accuracy and the information transfer rate (ITR) of all combinations.
FIG. 2 illustrates the training and testing process of the proposed decision fusion method. The performance is estimated with a leave-one-out cross-validation in which 5 blocks of the data set are selected for training the decision fusion and 1 block is selected for testing. During the training phase, another cross-validation is used to extract the RDA-false and QDA-false features: the RDA and QDA algorithms are trained on 4 blocks of the training data set and classify the remaining block in each round. According to the classification results, the decision fusion features F of the RDA-false and QDA-false trials are extracted and recorded over the 5 rounds of each training period, i.e. 200 trials in total, and the recorded RDA-false and QDA-false features are used to train the decision fusion method.
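A sketch of this training protocol is given below, assuming the data are split into blocks and the RDA/QDA fitting routines are passed in as callables; all names are illustrative, and only the leave-one-block-out structure and the RDA-false/QDA-false selection follow the description above.

```python
import numpy as np

def collect_fusion_training_set(blocks, labels, fit_rda, fit_qda, fusion_feature):
    """blocks/labels: lists of per-block sample arrays and label arrays.
    fit_rda/fit_qda: callables returning a trained classifier; the classifier,
    called on one trial x, is assumed to return (decision, correlation_coefficients).
    """
    F_list, y_list = [], []
    for i in range(len(blocks)):                              # inner CV: hold out block i
        train_idx = [j for j in range(len(blocks)) if j != i]
        Xtr = np.concatenate([blocks[j] for j in train_idx])
        ytr = np.concatenate([labels[j] for j in train_idx])
        rda = fit_rda(Xtr, ytr)
        qda = fit_qda(Xtr, ytr)
        for x, y in zip(blocks[i], labels[i]):
            d_rda, rho_rda = rda(x)
            d_qda, rho_qda = qda(x)
            if (d_rda == y) != (d_qda == y):                  # exactly one algorithm correct
                F_list.append(fusion_feature(rho_qda, rho_rda))
                y_list.append(0 if d_rda != y else 1)         # 0 = RDA-false, 1 = QDA-false
    return np.array(F_list), np.array(y_list)
```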
In the testing stage, 5 LDA-based classification algorithms (LDA, QDA, RDA, nearest mean and weighted nearest mean) are integrated using the proposed decision fusion method, and the classification accuracy and the information transfer rate (ITR) of all combinations are estimated. The ITR in bits/minute is defined as

ITR = (60/T)·[ log2(N) + P·log2(P) + (1 − P)·log2( (1 − P)/(N − 1) ) ]

where P is the accuracy, N is the number of classes (i.e. N is 120 in this study), and T is the time required for one selection.
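A small sketch of how the ITR defined above might be computed; the function name and the restriction on P are illustrative assumptions.

```python
import math

def itr_bits_per_minute(P, N, T):
    """P: accuracy in (0, 1]; N: number of classes; T: seconds per selection."""
    bits = math.log2(N)
    if P < 1.0:
        bits += P * math.log2(P) + (1 - P) * math.log2((1 - P) / (N - 1))
    return bits * (60.0 / T)    # bits per selection scaled to bits per minute
```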
In terms of the integration results, performance is evaluated at a data length of 1 second. The resulting table has 5 rows and 5 columns, one LDA-based algorithm per row. Each main-diagonal cell gives the average accuracy of a single algorithm, and every other cell gives the average accuracy of the decision fusion method that integrates the two corresponding algorithms. For example, the decision fusion QDA&LDA method with a data length of 1 s has an accuracy of 90.56%, higher than the 86.36% of the LDA method but lower than the 93.70% of the QDA method; however, the accuracies of the QDA and LDA methods differ by 7.34%. The test results also show that the performance of a decision fusion method combining the QDA or RDA algorithm with a low-accuracy algorithm is reduced. The classification accuracies of the two algorithms before and after each decision fusion combination were computed in this study. These results show that the overall classification accuracy is not improved when the decision fusion method fuses two algorithms whose accuracies differ greatly, whereas the decision fusion QDA&RDA method, which combines two algorithms with relatively close performance, achieves the maximum accuracy of 94.21% at a data length of 1 s.
By integrating the 5 LDA-based classification algorithms LDA, QDA, RDA, nearest mean and weighted nearest mean, the classification accuracy of the proposed decision fusion method is shown in the following table:
Performance is evaluated at a data length of 1 second. The table has 5 rows and 5 columns, one LDA-based algorithm per row; each main-diagonal cell gives the average accuracy of a single algorithm, and every other cell gives the average accuracy of the decision fusion method that integrates the two corresponding algorithms.
In the results of the table, the accuracy of the decision fusion QDA&LDA method with a data length of 1 s is 90.56%, higher than the 86.36% of the LDA method but lower than the 93.70% of the QDA method; however, the accuracies of the QDA and LDA methods differ by 7.34%. The test results in FIG. 3 also show that the performance of a decision fusion method combining the QDA or RDA algorithm with a low-accuracy algorithm is degraded. The classification accuracies of the two algorithms before and after each decision fusion combination were computed in this study. These results show that the overall classification accuracy is not improved when two algorithms with greatly different accuracies are fused, whereas the decision fusion QDA&RDA method, which combines two algorithms with relatively close performance, achieves the maximum accuracy of 94.21% at a data length of 1 s.
The above-mentioned embodiments are merely illustrative of the preferred embodiments of the present invention, and do not limit the scope of the present invention, and various modifications and improvements of the technical solution of the present invention can be made by those skilled in the art without departing from the spirit of the present invention, and the technical solution of the present invention shall fall within the protection scope defined by the claims.
Claims (4)
1. An electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion, characterized by comprising the following steps:
S1, acquiring an electroencephalogram signal data set X = (X1, X2, …, Xn) through a brain wave induction helmet;
S2, classifying the signal data set X = (X1, X2, …, Xn) using regularized discriminant analysis RDA; the resulting correlation coefficient matrix ρn is defined as ρ_RDA;
S3, classifying the signal data set X = (X1, X2, …, Xn) using quadratic discriminant analysis QDA; the resulting correlation coefficient matrix ρn is defined as ρ_QDA;
S4, constructing a feature decision fusion device to perform feature integration and decision selection on the decisions and coefficients of RDA and QDA, thereby obtaining the electroencephalogram consciousness dynamic classification; the specific steps are as follows:
S41, constructing a feature decision fusion device comprising a feature extraction unit, a projection classification unit and a decision selection unit;
S42, performing feature extraction on the correlation coefficients of RDA and QDA through the feature extraction unit to generate a feature vector F;
the correlation coefficient ρ is computed as

ρ = E[wX^T·X·Y^T·wY] / √( E[wX^T·X·X^T·wX] · E[wY^T·Y·Y^T·wY] )

wherein wX is the projection in the x direction and wX^T its transpose, wY is the projection in the y direction and wY^T its transpose, X is the matrix on the abscissa and X^T its transpose, Y is the matrix on the ordinate and Y^T its transpose, and E[wX^T·X·Y^T·wY], E[wX^T·X·X^T·wX] and E[wY^T·Y·Y^T·wY] denote the expectations of the corresponding quadratic forms;
the maximum correlation coefficient and the second-largest correlation coefficient of RDA and of QDA are obtained respectively;
a feature vector F is generated from the obtained maximum and second-largest correlation coefficients of the RDA and QDA algorithms:

F = [ρ_QDA(1), ρ_QDA(2), ρ_RDA(1), ρ_RDA(2)]

wherein ρ_QDA(1) is the maximum correlation coefficient of QDA, ρ_QDA(2) is the second-largest correlation coefficient of QDA, ρ_RDA(1) is the maximum correlation coefficient of RDA, and ρ_RDA(2) is the second-largest correlation coefficient of RDA;
S43, dividing the feature vector F into the two categories RDA-false and QDA-false through the projection classification unit;
the projection classification unit uses a linear SVM classifier to perform the projection within the range of the soft-margin objective function: the feature vector F is projected to a scalar value, and the scalar values projected onto the plane form individual points, expressed as

f(F) = Σ_{j=1}^{N} aj·yj·K(vj, F) + b

wherein vj (j = 1, 2, …, N) are the support vectors, used to determine the maximum-margin plane of the classifier, aj > 0 are adjustable parameters, yj is the class label of the j-th support vector, F is the feature vector, b is the bias, and K(vj, F) = vj^T·F is a linear kernel function;
the soft-margin objective function is

min (1/2)·||W||² + C·Σ_j δj,  subject to  yj·(W^T·vj + b) ≥ 1 − δj,  δj ≥ 0

wherein δj is a slack variable indicating whether sample vj lies within the margin and how much adjustment it requires, C is an adjustment coefficient controlling the trade-off between margin width and misclassification, and W is the normal vector of the separating plane;
the soft-margin objective function is solved to maximize the margin 2/||W||, which serves as the boundary line, and the features in the feature vector F are divided into the two categories RDA-false and QDA-false according to the positions of the projected scalar values;
S44, selecting the decision output of the RDA or QDA algorithm according to the classification result through the decision selection unit;
the decision selection unit works as follows:
if an RDA-false result is obtained, the module outputs the QDA decision; otherwise, the module outputs the RDA decision, thereby obtaining an electroencephalogram consciousness dynamic classification with high accuracy.
2. The electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion according to claim 1, characterized in that in step S1, an Emotiv brain wave induction helmet is adopted to collect the electroencephalogram signals.
3. The electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion according to claim 1, characterized in that in step S2, ρ_RDA is solved as follows:
the data set X = (X1, X2, …, Xn) is assigned to one of K classes; in the training data the class of each sample is known, so the prior probability and the mean of class k are respectively

π_k = ω_k / ω,   μ_k = (1/ω_k)·Σ_{Xn ∈ class k} Xn

wherein π_k is the prior probability of class k, μ_k is the mean of class k, ω is the total number of samples, ω_k is the number of samples of class k, and Xn is a sample point;
regularized discriminant analysis RDA reduces the influence of multicollinearity by modifying the singular covariance values; the sample covariance estimate of each class is

Σ̂_k = (1/ω_k)·Σ_{Xn ∈ class k} (Xn − μ_k)(Xn − μ_k)^T

the covariance matrix is regularized with the parameter λ and further adjusted by introducing a shrinkage parameter γ:

Σ_k(λ, γ) = (1 − γ)·[(1 − λ)·Σ̂_k + λ·Σ̂] + (γ/p)·tr[(1 − λ)·Σ̂_k + λ·Σ̂]·I

wherein λ is the regularization parameter, 0 ≤ λ ≤ 1, Σ̂ is the pooled covariance estimate over all classes, p is the dimension of the variables, I is the identity matrix, and γ is the shrinkage parameter;
the optimization target is J(w):

J(w) = (w^T·S·w) / (w^T·S_k·w)

the above formula is the generalized Rayleigh quotient of S_k and S, wherein S_k = ω_k·Σ_k and S is the between-class scatter matrix; this is the objective to be maximized, and the maximum value of J is the largest eigenvalue of the matrix S_k^{-1}·S, attained at the eigenvector corresponding to that largest eigenvalue; solving gives w = S_k^{-1}(μ1 − μ2), i.e. the determined optimal projection direction, with w^T its transpose; projecting the samples of the training set onto the direction w gives:
y = w^T·X (6)
4. The electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion according to claim 1, characterized in that in step S3, ρ_QDA is solved as follows:
let the sample data set X = (X1, X2, …, Xn) obey a multivariate Gaussian distribution, with μi the mean vector of class i;
the covariance matrix Σj of each class is calculated as

Σj = (1/ωj)·Σ_{Xn ∈ class j} (Xn − μj)(Xn − μj)^T

wherein j = 1, 2;
the within-class divergence matrix Sw is then obtained as

Sw = Σ1 + Σ2

and the between-class divergence matrix Sb is defined as

Sb = (μ1 − μ2)(μ1 − μ2)^T (12)

the optimization target is J(w):

J(w) = (w^T·Sb·w) / (w^T·Sw·w)

the above formula is the generalized Rayleigh quotient of Sw and Sb, which is the objective maximized by QDA; the maximum value of J is the largest eigenvalue of the matrix Sw^{-1}·Sb, attained at the eigenvector corresponding to that largest eigenvalue; solving gives w = Sw^{-1}(μ1 − μ2), i.e. the determined optimal projection direction; projecting the samples of the training set onto the direction w gives:
y = w^T·X (14)
Priority Applications (1)
- CN202110037508.9A, priority date 2021-01-12, filing date 2021-01-12: CN112733727B, Electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion
Applications Claiming Priority (1)
- CN202110037508.9A, priority date 2021-01-12, filing date 2021-01-12: CN112733727B, Electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion
Publications (2)
- CN112733727A, published 2021-04-30
- CN112733727B, published 2022-04-19
Family
- ID=75590657
Family Applications (1)
- CN202110037508.9A, filed 2021-01-12: Electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion
Country Status (1)
- CN: CN112733727B (granted, Active)
Patent Citations (5)
- CN102521505A, priority 2011-12-08, published 2012-06-27: Brain electric and eye electric signal decision fusion method for identifying control intention
- CN102629156A, priority 2012-03-06, published 2012-08-08: Method for achieving motor imagery brain computer interface based on Matlab and digital signal processor (DSP)
- CN108231067A, priority 2018-01-13, published 2018-06-29: Sound scenery recognition method based on convolutional neural networks and random forest classification
- CN111259741A, priority 2020-01-09, published 2020-06-09: Electroencephalogram signal classification method and system
- CN111523601A, priority 2020-04-26, published 2020-08-11: Latent emotion recognition method based on knowledge guidance and generation counterstudy
Non-Patent Citations (2)
- Viktor Rozgic et al.: "Robust EEG emotion classification using segment level decision fusion", 2013 IEEE International Conference on Acoustics, Speech and Signal Processing
- Fu Rongrong et al.: "Single-trial motor imagery EEG signal recognition method based on sparse common spatial pattern and Fisher discriminant analysis", Journal of Biomedical Engineering (生物医学工程学杂志)
Cited By (2)
- CN115018046A, priority 2022-05-17, published 2022-09-06: Deep learning method for detecting malicious traffic of mobile APP
- CN115018046B, priority 2022-05-17, granted 2023-09-15: Deep learning method for detecting malicious traffic of mobile APP
Also Published As
- CN112733727A, published 2021-04-30
- CN112733727B, published 2022-04-19
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant