CN112733727A - Electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion - Google Patents


Info

Publication number
CN112733727A
Authority
CN
China
Prior art keywords
qda
rda
feature
matrix
decision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110037508.9A
Other languages
Chinese (zh)
Other versions
CN112733727B (en)
Inventor
付荣荣
李朋
王世伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yanshan University
Original Assignee
Yanshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yanshan University filed Critical Yanshan University
Priority to CN202110037508.9A priority Critical patent/CN112733727B/en
Publication of CN112733727A publication Critical patent/CN112733727A/en
Application granted granted Critical
Publication of CN112733727B publication Critical patent/CN112733727B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08 - Feature extraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12 - Classification; Matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/15 - Biometric patterns based on physiological signals, e.g. heartbeat, blood flow

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention provides an electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion. An electroencephalogram signal data set X = (X1, X2, …, Xn), with n a positive integer, is acquired through a brain wave induction helmet; the signal data set X = (X1, X2, …, Xn) is classified using regularized discriminant analysis (RDA) and quadratic discriminant analysis (QDA) to obtain the correlation coefficient matrices ρ_RDA and ρ_QDA. Meanwhile, a feature decision fuser comprising a feature extraction unit, a projection classification unit and a decision selection unit is constructed to perform feature integration and decision selection on the decisions and coefficients of RDA and QDA, so that better classification accuracy is obtained. By constructing the feature decision fuser, the invention integrates the two algorithms, selects the decision that is more likely to be correct, and achieves better classification accuracy on motor imagery data.

Description

Electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion
Technical Field
The invention belongs to the field of electroencephalogram dynamic analysis, and particularly relates to an electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion.
Background
Motor imagery (MI) is an EEG paradigm: when a person performs motor imagery, specific functional areas of the brain are activated and the corresponding EEG signals exhibit stable, regular characteristic changes, which is the physiological basis for using motor imagery EEG as the input signal of a BCI system. To decode the subject's intention from MI signals, various methods have been proposed to identify and classify them, such as linear discriminant analysis (LDA), Gaussian classifiers and probabilistic neural networks. LDA is a classic linear learning method, first proposed by Fisher in 1936 for the two-class problem, and is therefore also called Fisher linear discriminant. The idea of linear discrimination is simple: given a training sample set, project the samples onto a straight line such that the projections of same-class samples lie as close together as possible and the projections of different-class samples lie as far apart as possible; when a new sample is classified, it is projected onto the same line and its class is determined from the position of the projected point. Because LDA admits an analytic solution via a generalized eigenvalue problem, it avoids the local-minimum problem frequently encountered when training general nonlinear algorithms, needs no artificial encoding of the output classes, and has particularly obvious advantages in handling unbalanced pattern classes. Compared with neural network methods, LDA requires no parameter tuning, so there are no issues of learning parameters, optimizing weights or selecting neuron activation functions; it is also insensitive to normalization or randomization of the patterns, which distinguishes it from the various algorithms based on gradient descent. QDA is a variant of LDA in which a separate covariance matrix is estimated for each class of observations; it is particularly useful when the individual classes are known in advance to exhibit different covariances. The disadvantages of QDA are that it cannot be used as a dimension-reduction technique and that, because it estimates a covariance matrix per class, it has considerably more parameters than LDA. RDA is a compromise between LDA and QDA; being a regularization technique, it is better suited to situations with many potentially correlated features. Because these methods produce different decisions, fusing several methods to integrate their decisions is a feasible way to improve overall classification accuracy.
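For orientation, the contrast among the three classifiers can be sketched in a few lines of Python (an illustration only, not part of the claimed method; scikit-learn has no dedicated RDA class, so QDA's reg_param, or LDA with shrinkage, stands in for the LDA-QDA compromise):

    # Illustrative sketch: LDA, QDA and a regularized variant on synthetic
    # two-class data whose classes have different covariances.
    import numpy as np
    from sklearn.discriminant_analysis import (
        LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)

    rng = np.random.default_rng(0)
    X0 = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], 200)
    X1 = rng.multivariate_normal([1.5, 1.5], [[0.5, -0.2], [-0.2, 1.5]], 200)
    X = np.vstack([X0, X1])
    y = np.array([0] * 200 + [1] * 200)

    models = [
        ("LDA", LinearDiscriminantAnalysis()),            # one shared covariance
        ("QDA", QuadraticDiscriminantAnalysis()),         # one covariance per class
        ("regularized QDA", QuadraticDiscriminantAnalysis(reg_param=0.5)),
    ]
    for name, clf in models:
        print(name, clf.fit(X, y).score(X, y))            # training accuracy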
Disclosure of Invention
The invention aims to provide an electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion.
The electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion of the invention comprises the following steps:
S1, acquiring an electroencephalogram signal data set X = (X1, X2, …, Xn) through a brain wave induction helmet;
S2, classifying the signal data set X = (X1, X2, …, Xn) by regularized discriminant analysis RDA; the obtained correlation coefficient matrix ρ_n is defined as ρ_RDA:

ρ_RDA = w^T (X − X̄)

where w^T is the matrix transpose of the weights, X is the data value and X̄ is the average of the data values;
S3, classifying the signal data set X = (X1, X2, …, Xn) by quadratic discriminant analysis QDA; the obtained correlation coefficient matrix ρ_n is defined as ρ_QDA:

ρ_QDA = w^T (X − X̄)

where w^T is the matrix transpose of the weights, X is the data value and X̄ is the average of the data values;
S4, constructing a feature decision fuser to perform feature integration and decision selection on the decisions and coefficients of RDA and QDA, obtaining the electroencephalogram consciousness dynamic classification; the specific steps are as follows:
S41, constructing the feature decision fuser, which comprises a feature extraction unit, a projection classification unit and a decision selection unit;
S42, performing feature extraction on the correlation coefficients of RDA and QDA through the feature extraction unit to generate a feature vector F;
the correlation coefficient is computed according to the expression

ρ = E[w_X^T X Y^T w_Y] / sqrt( E[w_X^T X X^T w_X] · E[w_Y^T Y Y^T w_Y] )

where w_X is the projection in the x direction and w_X^T its transpose, w_Y is the projection in the y direction and w_Y^T its transpose, X is the data matrix on the abscissa with transpose X^T, Y is the data matrix on the ordinate with transpose Y^T, and E[·] denotes the expectation of the bracketed quantity; the larger this expression, the larger ρ;
the maximum and second-largest correlation coefficients of RDA and of QDA are obtained respectively;
the feature vector F is generated from the obtained maximum and second-largest correlation coefficients of the RDA and QDA algorithms:

F = [ρ_QDA^(1), ρ_QDA^(2), ρ_RDA^(1), ρ_RDA^(2)]

where ρ_QDA^(1) is the maximum correlation coefficient of QDA, ρ_QDA^(2) is the second-largest correlation coefficient of QDA, ρ_RDA^(1) is the maximum correlation coefficient of RDA, and ρ_RDA^(2) is the second-largest correlation coefficient of RDA;
S43, dividing the feature vector F into the two categories RDA-false and QDA-false through the projection classification unit;
the projection classification unit uses a linear SVM classifier constrained by a soft-margin objective function; it projects the feature vector F to a scalar value, and the scalar values projected onto the plane form individual points, expressed as:

f(F) = Σ_{j=1}^{N} a_j y_j K(v_j, F) + b

where v_j, j = 1, 2, …, N, are the support vectors, which determine the maximum-margin plane of the classifier, a_j are adjustable parameters, y_j is the category of the j-th support vector, F is the feature vector, b is the bias, and K(v_j, F) is a linear kernel function;
the soft-margin objective function is:

min (1/2)||w||^2 + C Σ_j δ_j

where δ_j is a slack variable indicating whether sample v_j lies within the margin and to what degree it must be adjusted, and C is the adjustment coefficient controlling the trade-off between margin width and misclassification; the slack variables determine whether a point lies within range.
Solving the soft-margin objective function maximizes the margin 2/||w||; the resulting hyperplane serves as the boundary line, and the features in the feature vector F are divided into the two classes RDA-false and QDA-false according to the positions of the scalar projection points;
S44, selecting the decision output of the RDA or QDA algorithm according to the classification result through the decision selection unit;
the decision selection unit is:

Decision = QDA decision, if F is classified as RDA-false; RDA decision, if F is classified as QDA-false   (20)

If an RDA-false result is obtained, the module outputs the QDA decision; otherwise, the module outputs the RDA decision, obtaining a high-accuracy electroencephalogram consciousness dynamic classification.
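A minimal Python sketch of the fuser's logic, under the assumption that the correlation coefficients of steps S2/S3 are available as arrays and that the linear SVM of step S43 has already been trained (all names are illustrative, not the patent's code):

    # rho_rda / rho_qda: 1-D arrays of correlation coefficients from S2/S3;
    # svm: a fitted linear SVM with label 0 = RDA-false, 1 = QDA-false.
    import numpy as np

    def feature_vector(rho_qda, rho_rda):
        # Step S42: largest and second-largest coefficient of each method.
        q = np.sort(np.asarray(rho_qda))[::-1]
        r = np.sort(np.asarray(rho_rda))[::-1]
        return np.array([q[0], q[1], r[0], r[1]])

    def fused_decision(F, rda_decision, qda_decision, svm):
        # Steps S43-S44: project F with the SVM, then apply formula (20).
        if svm.predict(F.reshape(1, -1))[0] == 0:   # classified RDA-false
            return qda_decision                     # so output the QDA decision
        return rda_decision                         # otherwise output RDA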
Preferably, in step S1, the brain wave induction helmet is used to collect electroencephalogram signals for emotv;
Preferably, the solving step of ρ_RDA in step S2 is specifically as follows:
the data set X = (X1, X2, …, Xn) is assigned to one of K classes; in the training data the class of each datum is known, so the prior probability and the mean of class k are respectively:

π_k = ω_k / ω   (1)

X̄_k = (1/ω_k) Σ_{Xn ∈ k} Xn   (2)

where ω is the total number of samples, ω_k is the number of samples of class k, Xn is a sample point, and X̄_k is the mean of class k;
regularized discriminant analysis RDA ameliorates the effects of multicollinearity by modifying the singular covariance values; the sample covariance estimate of each class is as follows:

Σ̂_k = (1/ω_k) Σ_{Xn ∈ k} (Xn − X̄_k)(Xn − X̄_k)^T   (3)

where Xn is a sample point and X̄_k is the mean of class k;
the covariance matrix is further adjusted by introducing a shrinkage parameter γ:

Σ̂_k(λ, γ) = (1 − γ) Σ̂_k(λ) + (γ/p) tr[Σ̂_k(λ)] I,  with Σ̂_k(λ) = ((1 − λ) S_k + λ S) / ((1 − λ) ω_k + λ ω)   (4)

where λ is the regularization parameter, 0 ≤ λ ≤ 1, p is the dimension of the independent variables, I is the identity matrix and γ is the shrinkage parameter.
The optimization target is J(w):

J(w) = (w^T S w) / (w^T S_k w)   (5)

the above formula is the generalized Rayleigh quotient of S_k and S, where S_k = ω_k Σ̂_k and S is the corresponding pooled scatter matrix over all classes. This is the objective that RDA maximizes; the maximum of J is the largest eigenvalue of the matrix S_k^{-1} S, and the corresponding direction is the eigenvector belonging to that largest eigenvalue. Solving gives w = S_k^{-1}(μ_1 − μ_2), i.e. the determined optimal projection direction, with w^T the transposed projection; projecting the samples of the training set onto the w direction gives:

y = w^T X   (6)

ρ_RDA = w^T (X − X̄)   (7)

where w^T is the matrix transpose of the weights, X is the data value and X̄ is the average of the data values;
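The regularized covariance of equations (3)-(4) can be sketched as follows (a reconstruction in the Friedman-style RDA form assumed above, since the patent's formula images are not reproduced; lam and gamma correspond to λ and γ):

    import numpy as np

    def rda_covariance(X_k, X_all, lam=0.5, gamma=0.1):
        """X_k: samples of class k, shape (n_k, p); X_all: all samples, (n, p)."""
        n_k, p = X_k.shape
        n = X_all.shape[0]
        Ck = X_k - X_k.mean(axis=0)
        C = X_all - X_all.mean(axis=0)
        S_k, S = Ck.T @ Ck, C.T @ C                  # class and pooled scatter
        sig_lam = ((1 - lam) * S_k + lam * S) / ((1 - lam) * n_k + lam * n)
        # shrink toward a scaled identity to tame multicollinearity
        return (1 - gamma) * sig_lam + (gamma / p) * np.trace(sig_lam) * np.eye(p)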
Preferably, the solving step of ρ_QDA in step S3 is specifically as follows:
let the sample data set X = (X1, X2, …, Xn) obey a multivariate Gaussian distribution with mean vector μ_i, expressed as:

μ_i = (1/ω_i) Σ_{Xn ∈ class i} Xn   (8)

the covariance matrix Σ_j of the samples is calculated as:

Σ_j = (1/ω_j) Σ_{Xn ∈ class j} (Xn − μ_j)(Xn − μ_j)^T, j = 1, 2   (9)-(10)

the intra-class divergence matrix S_w is obtained as:

S_w = Σ_1 + Σ_2   (11)

and the inter-class divergence matrix S_b is defined simultaneously as:

S_b = (μ_1 − μ_2)(μ_1 − μ_2)^T   (12)

the optimization target is J(w):

J(w) = (w^T S_b w) / (w^T S_w w)   (13)

the above formula is the generalized Rayleigh quotient of S_w and S_b, which is the objective that QDA maximizes; the maximum of J is the largest eigenvalue of the matrix S_w^{-1} S_b, and the corresponding direction is the eigenvector belonging to that largest eigenvalue. Solving gives w = S_w^{-1}(μ_1 − μ_2), i.e. the determined optimal projection direction; projecting the samples of the training set onto the w direction gives:

y = w^T X   (14)

ρ_QDA = w^T (X − X̄)   (15)
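Equations (8)-(14) amount to the following short computation (an illustrative sketch, with the two classes given as sample matrices):

    import numpy as np

    def fisher_direction(X1, X2):
        """X1, X2: sample matrices of the two classes, shape (n_i, p)."""
        mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
        S1 = (X1 - mu1).T @ (X1 - mu1)        # class scatter (the scaling does
        S2 = (X2 - mu2).T @ (X2 - mu2)        # not change the direction of w)
        Sw = S1 + S2                          # intra-class divergence, eq. (11)
        w = np.linalg.solve(Sw, mu1 - mu2)    # maximizes J(w), eq. (13)
        return w

    # y = w @ x projects a sample x onto the discriminant direction, eq. (14).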
the invention has the following effects:
1. decision protocols of two different methods are integrated, so that the problems of low accuracy, poor self-adaptability and the like of a single decision are solved;
2. the decision of fusing the two algorithms is an effective method for improving the overall performance, and based on the idea, the two LDA-based algorithms are integrated, so that the classification accuracy of the electroencephalogram signals is improved.
Drawings
FIG. 1 is a schematic diagram of a linear analysis-based electroencephalogram consciousness dynamic classification method based on feature decision fusion of the invention;
FIG. 2 is a schematic diagram of the decision fusion test and training of the present invention;
FIG. 3 is the overall technical roadmap of the present invention.
Detailed Description
Embodiments of the present invention will be described below with reference to fig. 1 to 3.
The invention relates to an electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion; the general flow chart is shown in FIG. 3, and the steps are as follows:
S1, acquiring an electroencephalogram signal data set X = (X1, X2, …, Xn), with n a positive integer, through a brain wave induction helmet;
a dynamic task model is implemented in a virtual environment: the subject indirectly controls a ball by applying force to a virtual bowl, from which the ball can escape. The test is carried out in a well sound-insulated room. An Emotiv helmet is used to collect 14-lead (AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8 and AF4) electroencephalogram signals of the subject; the electrode placement follows the 10-20 international standard lead positioning, and the sampling frequency is 128 Hz. The test data are transmitted to the computer through a USB interface. A total of 10 healthy participants (6 men, 4 women) were enrolled in the trial; the exclusion criteria were visual, neurological or psychiatric illness, or any history of or current medication use, and all subjects read and signed informed consent.
During the test, the subject must complete a boundary avoidance task, a high-difficulty sensorimotor task: the subject must steer a virtual bowl carrying a small ball within a specified range. If the bowl exits the frame on the right side and the ball never overflows the bowl during the whole process, the task succeeds; if the bowl exits the frame on the left side, or the ball overflows the bowl during the process, the task fails. The initial bowl-ball position is on the left. According to the characteristics of the test, the following two groups of data are intercepted for analysis and processing. Suppose a force is applied to the bowl in the leftward direction for a period of time: before the force takes effect the bowl still moves to the right, and the ball moves to the left relative to the bowl; due to inertia, the bowl does not move to the left until some time has passed. The data of the first 1 s and of the last 1 s of this period are intercepted respectively, and vice versa for rightward forces. The entire test comprised 120 trials, 60 for each of the left and right hands, and one observed measurement set X = (X1, X2, …, Xn) with n = 120 was obtained for each subject.
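The 1 s windowing described above might be sketched as follows, assuming the recording is available as a channels-by-samples array at 128 Hz (names are illustrative):

    import numpy as np

    FS = 128  # Emotiv sampling rate, Hz

    def cut_first_last_second(eeg, start, stop):
        """eeg: (14, n_samples) array; start/stop: interval bounds in samples."""
        first = eeg[:, start:start + FS]   # first 1 s of the interval
        last = eeg[:, stop - FS:stop]      # last 1 s of the interval
        return first, last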
S2, classifying the signal data set X = (X1, X2, …, Xn) by regularized discriminant analysis RDA; the obtained correlation coefficient matrix ρ_n is defined as ρ_RDA.
LDA and QDA are both boundary discrimination methods that aim to find boundaries separating groups or classes of samples. The boundary divides the space into regions assigned to the different groups or classes, and its shape depends on the type of classifier: LDA yields a linear boundary, in which a straight line or hyperplane partitions the variable space into regions; QDA yields quadratic boundaries, in which a quadric surface partitions the variable space. LDA assumes a single covariance matrix for all classes, whereas QDA assumes a different covariance matrix for each class. QDA can therefore distinguish classes with markedly different class-specific covariance matrices, forming a separate variance model for each class, each class group being modeled as a multivariate normal distribution. RDA is a compromise between LDA and QDA; being a regularization technique, it is better suited to situations with many potentially correlated features. The data set X = (X1, X2, …, Xn) is assigned to one of K classes. In the training data the class of each datum is known, so the prior probability and the mean of class k are:

π_k = ω_k / ω   (1)

X̄_k = (1/ω_k) Σ_{Xn ∈ k} Xn   (2)

where ω is the total number of samples, ω_k is the number of samples of class k, Xn is a sample point, and X̄_k is the mean of class k.
Regularized discriminant analysis RDA ameliorates the effects of multicollinearity by modifying the singular covariance values. The sample covariance estimate of each class is as follows:

Σ̂_k = (1/ω_k) Σ_{Xn ∈ k} (Xn − X̄_k)(Xn − X̄_k)^T   (3)

where Xn is a sample point and X̄_k is the mean of class k.
The covariance matrix is further adjusted by introducing a shrinkage parameter γ:

Σ̂_k(λ, γ) = (1 − γ) Σ̂_k(λ) + (γ/p) tr[Σ̂_k(λ)] I,  with Σ̂_k(λ) = ((1 − λ) S_k + λ S) / ((1 − λ) ω_k + λ ω)   (4)

where λ is the regularization parameter, 0 ≤ λ ≤ 1, p is the dimension of the independent variables, I is the identity matrix and γ is the shrinkage parameter.
The optimization target is J(w):

J(w) = (w^T S w) / (w^T S_k w)   (5)

the above formula is the generalized Rayleigh quotient of S_k and S, where S_k = ω_k Σ̂_k and S is the corresponding pooled scatter matrix over all classes. This is the objective that RDA maximizes; the maximum of J is the largest eigenvalue of the matrix S_k^{-1} S, and the corresponding direction is the eigenvector belonging to that largest eigenvalue. Solving gives w = S_k^{-1}(μ_1 − μ_2), i.e. the determined optimal projection direction, with w^T the transposed projection; projecting the samples of the training set onto the w direction gives:

y = w^T X   (6)

ρ_RDA = w^T (X − X̄)   (7)

where w^T is the matrix transpose of the weights, X is the data value and X̄ is the average of the data values;
S3, classifying the signal data set X = (X1, X2, …, Xn) by quadratic discriminant analysis QDA; the obtained correlation coefficient matrix ρ_n is defined as ρ_QDA.
Quadratic discriminant analysis QDA aims to find the transformation of the input features that best distinguishes the classes in the data set. The signal data set X = (X1, X2, …, Xn) is classified using QDA, and the obtained correlation coefficient matrix ρ_n is defined as ρ_QDA.
Let the sample data set X = (X1, X2, …, Xn) obey a multivariate Gaussian distribution with mean vector μ_i, as follows:

μ_i = (1/ω_i) Σ_{Xn ∈ class i} Xn   (8)

The covariance matrix Σ_j of the samples is calculated as:

Σ_j = (1/ω_j) Σ_{Xn ∈ class j} (Xn − μ_j)(Xn − μ_j)^T, j = 1, 2   (9)-(10)

The intra-class divergence matrix S_w is obtained as:

S_w = Σ_1 + Σ_2   (11)

and the inter-class divergence matrix S_b is defined simultaneously as:

S_b = (μ_1 − μ_2)(μ_1 − μ_2)^T   (12)

The optimization target is J(w):

J(w) = (w^T S_b w) / (w^T S_w w)   (13)

the above formula is the generalized Rayleigh quotient of S_w and S_b, which is the objective that QDA maximizes; the maximum of J is the largest eigenvalue of the matrix S_w^{-1} S_b, and the corresponding direction is the eigenvector belonging to that largest eigenvalue. Solving gives w = S_w^{-1}(μ_1 − μ_2), i.e. the determined optimal projection direction; projecting the samples of the training set onto the w direction gives:

y = w^T X   (14)

ρ_QDA = w^T (X − X̄)   (15)

where w^T is the matrix transpose of the weights, X is the data value and X̄ is the average of the data values;
S4, constructing a feature decision fuser to perform feature integration and decision selection on the decisions and coefficients of RDA and QDA, obtaining the electroencephalogram consciousness dynamic classification; the specific steps are as follows:
S41, constructing the feature decision fuser, which comprises a feature extraction unit, a projection classification unit and a decision selection unit;
S42, performing feature extraction on the correlation coefficients of RDA and QDA through the feature extraction unit to generate a feature vector F;
the correlation coefficient is computed according to the expression

ρ = E[w_X^T X Y^T w_Y] / sqrt( E[w_X^T X X^T w_X] · E[w_Y^T Y Y^T w_Y] )

where w_X is the projection in the x direction and w_X^T its transpose, w_Y is the projection in the y direction and w_Y^T its transpose, X is the data matrix on the abscissa with transpose X^T, Y is the data matrix on the ordinate with transpose Y^T, and E[·] denotes the expectation of the bracketed quantity; the larger this expression, the larger ρ, and the maximum and second-largest correlation coefficients of RDA and of QDA are obtained accordingly;
the feature vector F is generated from the obtained maximum and second-largest correlation coefficients of the RDA and QDA algorithms.
After the correlation coefficients of the two methods are extracted, a matrix is obtained in the following form:

F = [ρ_QDA^(1), ρ_QDA^(2), ρ_RDA^(1), ρ_RDA^(2)]

where ρ_QDA^(1) is the maximum correlation coefficient of QDA, ρ_QDA^(2) is the second-largest correlation coefficient of QDA, ρ_RDA^(1) is the maximum correlation coefficient of RDA, and ρ_RDA^(2) is the second-largest correlation coefficient of RDA;
S43, dividing the feature vector F into the two categories RDA-false and QDA-false through the projection classification unit.
According to the classification results of the RDA and QDA algorithms, the trials of all training data fall into four classes: both-true, both-false, RDA-false and QDA-false. In a both-true trial, the two algorithms make the same, correct decision. In a both-false trial, the decisions of both methods are inconsistent with the subject's intention, so a decision fusion method that selects one of the two erroneous decisions would also give an erroneous result. Therefore only the RDA-false and QDA-false trials (in which exactly one of the RDA and QDA decisions is correct) are selected to train the decision fusion method; that is, only the two classes RDA-false and QDA-false are used. In the soft-margin objective function below, δ is a slack variable indicating whether a sample v_j lies within the margin and to what degree it must be adjusted, and C is the adjustment coefficient controlling the trade-off between margin width and misclassification; the slack variables determine whether a point lies within range.
The projection classification unit uses a linear SVM classifier constrained by the soft-margin objective function; it projects the feature vector F to a scalar value, and the scalar values projected onto the plane form individual points, expressed as:

f(F) = Σ_{j=1}^{N} a_j y_j K(v_j, F) + b

where v_j, j = 1, 2, …, N, are the support vectors, which determine the maximum-margin plane of the classifier, a_j > 0 are adjustable parameters, y_j is the category of the j-th support vector, F is the feature vector, b is the bias, and K(v_j, F) is a linear kernel function;
the soft-margin objective function is:

min (1/2)||w||^2 + C Σ_j δ_j

where δ_j is a slack variable indicating whether sample v_j lies within the margin and to what degree it must be adjusted, and C is the adjustment coefficient controlling the trade-off between margin width and misclassification; the slack variables determine whether a point lies within range.
Solving the soft-margin objective function maximizes the margin 2/||w||; the resulting hyperplane serves as the boundary line, and the features in the feature vector F are divided into the two classes RDA-false and QDA-false according to the positions of the scalar projection points, with the RDA-false class on one side of the boundary line and the QDA-false class on the other;
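As an illustration, a linear soft-margin SVM of exactly this form can be trained with scikit-learn; F_train and labels below are placeholder stand-ins for the recorded fusion features and their RDA-false/QDA-false labels:

    # SVC(kernel="linear") optimizes the soft-margin objective above, with C
    # trading the margin width 2/||w|| against the slack penalties.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    F_train = rng.normal(size=(40, 4))        # placeholder 4-D fusion features
    labels = rng.integers(0, 2, size=40)      # placeholder 0=RDA-false, 1=QDA-false

    svm = SVC(kernel="linear", C=1.0)
    svm.fit(F_train, labels)                  # learns support vectors v_j, a_j, b
    print(svm.predict(F_train[:5]))           # side of the boundary per sample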
S44, selecting the decision output of the RDA or QDA algorithm according to the classification result through the decision selection unit.
As shown in formula (20), the RDA-false and QDA-false trials (in which exactly one of RDA and QDA is correct) are selected to train the decision fusion method; if an RDA-false result is obtained, the module outputs the QDA decision, and otherwise the module outputs the RDA decision:

Decision = QDA decision, if F is classified as RDA-false; RDA decision, if F is classified as QDA-false   (20)

The feature decision fuser mainly comprises a feature extraction unit, a projection classification unit and a decision selection unit. Its inputs are the decisions and correlation coefficients of the RDA and QDA algorithms, and it selects and outputs the decision more likely to be correct.
The effect of the method of the invention is compared and verified as follows:
In the test and training phase of decision fusion, performance is estimated using leave-one-block-out cross-validation: 5 blocks of the data set are selected for training the decision fusion and 1 block for testing. During the training phase, another cross-validation is used to extract the RDA-false and QDA-false features. Specifically, in each round the RDA and QDA algorithms are trained on 4 blocks of the training data set and classify the remaining block. According to the classification results, the decision fusion features F of the RDA-false and QDA-false trials are extracted and recorded over the 5 rounds of each training period, i.e. 200 trials in total. The recorded RDA-false and QDA-false features are then used to train the decision fusion method. The proposed decision fusion is used to integrate 5 LDA-based classification algorithms: LDA, QDA, RDA, nearest mean and weighted nearest mean. The integrated performance is evaluated by estimating the classification accuracy and information transfer rate (ITR) of all combinations.
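The inner feature-collection loop can be sketched as follows, with fit_rda, fit_qda and extract_F as stand-ins for the training and feature-extraction steps defined earlier (an illustration of the selection logic only, not the authors' code):

    import numpy as np

    def collect_fusion_features(blocks, labels, fit_rda, fit_qda, extract_F):
        feats, targets = [], []
        for i in range(len(blocks)):                      # leave one block out
            train = [b for j, b in enumerate(blocks) if j != i]
            rda, qda = fit_rda(train), fit_qda(train)     # classifiers as callables
            for x, y in zip(blocks[i], labels[i]):
                d_rda, d_qda = rda(x), qda(x)
                if (d_rda == y) != (d_qda == y):          # exactly one is correct
                    feats.append(extract_F(x, rda, qda))
                    targets.append(0 if d_rda != y else 1)  # 0 = RDA-false
        return np.array(feats), np.array(targets)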
FIG. 2 illustrates the training and testing process of the proposed decision fusion method, following the same leave-one-block-out scheme: 5 blocks train the decision fusion and 1 block tests it, and an inner cross-validation over the training blocks yields the recorded RDA-false and QDA-false features used for training.
In the testing stage, the proposed decision fusion method integrates the 5 LDA-based classification algorithms LDA, QDA, RDA, nearest mean and weighted nearest mean, and the classification accuracy and information transfer rate (ITR) of all combinations are estimated. The ITR in bits/min is defined as follows:
ITR = (60/T) × [log2 N + P log2 P + (1 − P) log2((1 − P)/(N − 1))]

where P is the accuracy, N is the number of classes (i.e. N = 120 in this study), and T is the time in seconds required for one selection.
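For reference, this standard definition can be computed directly (a sketch; the formula is reconstructed from the stated variables):

    import math

    def itr_bits_per_min(P, N, T):
        """P: accuracy in (0, 1]; N: number of classes; T: seconds per selection."""
        bits = math.log2(N)
        if P < 1.0:
            bits += P * math.log2(P) + (1.0 - P) * math.log2((1.0 - P) / (N - 1))
        return bits * 60.0 / T

    # e.g. itr_bits_per_min(0.9421, 120, 1.0) for the best 1 s fusion result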
In terms of integration results, the performance was evaluated at a data length of 1 second. The resulting table consists of 5 rows and 5 columns, one LDA-based algorithm per row. The main-diagonal cells give the average accuracy of each algorithm alone, and the other cells give the average accuracy of the decision fusion method integrating the two corresponding algorithms. For example, the decision fusion QDA & LDA method with a data length of 1 s has an accuracy of 90.56%, higher than the 86.36% of the LDA method but lower than the 93.70% of the QDA method; note that the accuracies of the QDA and LDA methods differ by 7.34%. The test results also show that the performance of a decision fusion method combining the QDA or RDA algorithm with a low-accuracy algorithm is degraded. The classification accuracies of the two algorithms before and after each decision fusion combination were calculated in this study. These results show that the overall classification accuracy is not improved when the decision fusion method fuses two algorithms of greatly different accuracy, whereas the decision fusion QDA & RDA method, which combines two algorithms of relatively close performance, achieves the maximum accuracy of 94.21% at a data length of 1 s.
By integrating the 5 LDA-based classification algorithms (LDA, QDA, RDA, nearest mean and weighted nearest mean), the classification accuracy of the proposed decision fusion method is shown in the following table:
[Table: 5 x 5 grid of average classification accuracies (%); rows and columns correspond to the LDA-based algorithms, the main diagonal gives each algorithm's own accuracy, and the off-diagonal cells give the accuracy of the decision fusion of the corresponding pair]
performance was evaluated at a data length of 1 second. The resulting data consists of 5 rows and 5 columns, one LDA-based algorithm for each row. The main diagonal cell represents the average accuracy of each algorithm, and the other cells represent the average accuracy of a decision fusion method that integrates the two corresponding algorithms together.
In the results of the table, the accuracy of the decision fusion QDA & LDA method with a data length of 1 s is 90.56%, higher than the 86.36% of the LDA method but lower than the 93.70% of the QDA method; the accuracies of the QDA and LDA methods differ by 7.34%. The test results in FIG. 3 also show that the performance of a decision fusion method combining the QDA or RDA algorithm with a low-accuracy algorithm is degraded. The classification accuracies of the two algorithms before and after each decision fusion combination were calculated in this study. These results show that the overall classification accuracy is not improved when two algorithms of greatly different accuracy are fused, whereas the decision fusion QDA & RDA method, combining two algorithms of relatively close performance, achieves the maximum accuracy of 94.21% at a data length of 1 s.
The above-mentioned embodiments are merely illustrative of the preferred embodiments of the present invention, and do not limit the scope of the present invention, and various modifications and improvements of the technical solution of the present invention can be made by those skilled in the art without departing from the spirit of the present invention, and the technical solution of the present invention shall fall within the protection scope defined by the claims.

Claims (4)

1. An electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion, characterized by comprising the following steps:
S1, acquiring an electroencephalogram signal data set X = (X1, X2, …, Xn) through a brain wave induction helmet;
S2, classifying the signal data set X = (X1, X2, …, Xn) by regularized discriminant analysis RDA; the obtained correlation coefficient matrix ρ_n is defined as ρ_RDA:

ρ_RDA = w^T (X − X̄)

where w^T is the matrix transpose of the weights, X is the data value and X̄ is the average of the data values;
S3, classifying the signal data set X = (X1, X2, …, Xn) by quadratic discriminant analysis QDA; the obtained correlation coefficient matrix ρ_n is defined as ρ_QDA:

ρ_QDA = w^T (X − X̄)

where w^T is the matrix transpose of the weights, X is the data value and X̄ is the average of the data values;
S4, constructing a feature decision fuser to perform feature integration and decision selection on the decisions and coefficients of RDA and QDA, obtaining the electroencephalogram consciousness dynamic classification; the specific steps are as follows:
S41, constructing the feature decision fuser, which comprises a feature extraction unit, a projection classification unit and a decision selection unit;
S42, performing feature extraction on the correlation coefficients of RDA and QDA through the feature extraction unit to generate a feature vector F;
the correlation coefficient is computed according to the expression

ρ = E[w_X^T X Y^T w_Y] / sqrt( E[w_X^T X X^T w_X] · E[w_Y^T Y Y^T w_Y] )

where w_X is the projection in the x direction and w_X^T its transpose, w_Y is the projection in the y direction and w_Y^T its transpose, X is the data matrix on the abscissa with transpose X^T, Y is the data matrix on the ordinate with transpose Y^T, and E[·] denotes the expectation of the bracketed quantity; the larger this expression, the larger ρ;
the maximum and second-largest correlation coefficients of RDA and of QDA are obtained respectively;
the feature vector F is generated from the obtained maximum and second-largest correlation coefficients of the RDA and QDA algorithms:

F = [ρ_QDA^(1), ρ_QDA^(2), ρ_RDA^(1), ρ_RDA^(2)]

where ρ_QDA^(1) is the maximum correlation coefficient of QDA, ρ_QDA^(2) is the second-largest correlation coefficient of QDA, ρ_RDA^(1) is the maximum correlation coefficient of RDA, and ρ_RDA^(2) is the second-largest correlation coefficient of RDA;
S43, dividing the feature vector F into the two categories RDA-false and QDA-false through the projection classification unit;
the projection classification unit uses a linear SVM classifier constrained by a soft-margin objective function; it projects the feature vector F to a scalar value, and the scalar values projected onto the plane form individual points, expressed as:

f(F) = Σ_{j=1}^{N} a_j y_j K(v_j, F) + b

where v_j, j = 1, 2, …, N, are the support vectors, which determine the maximum-margin plane of the classifier, a_j > 0 are adjustable parameters, y_j is the category of the j-th support vector, F is the feature vector, b is the bias, and K(v_j, F) is a linear kernel function;
the soft-margin objective function is:

min (1/2)||w||^2 + C Σ_j δ_j

where δ_j is a slack variable indicating whether sample v_j lies within the margin and to what degree it must be adjusted, C is the adjustment coefficient controlling the trade-off between margin width and misclassification, the slack variables determine whether a point lies within range, and w is the weight vector of the separating hyperplane;
solving the soft-margin objective function maximizes the margin 2/||w||, and the resulting hyperplane serves as the boundary line; the features in the feature vector F are divided into the two classes RDA-false and QDA-false according to the positions of the scalar projection points;
S44, selecting the decision output of the RDA or QDA algorithm according to the classification result through the decision selection unit;
the decision selection unit is:

Decision = QDA decision, if F is classified as RDA-false; RDA decision, if F is classified as QDA-false

if an RDA-false result is obtained, the module outputs the QDA decision; otherwise, the module outputs the RDA decision, obtaining a high-accuracy electroencephalogram consciousness dynamic classification.
2. The electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion according to claim 1, characterized in that in step S1, an Emotiv brain wave induction helmet is adopted to collect the electroencephalogram signals.
3. The electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion according to claim 1, characterized in that in step S2, the solving step of ρ_RDA is as follows:
the data set X = (X1, X2, …, Xn) is assigned to one of K classes; in the training data the class of each datum is known, so the prior probability and the mean of class k are respectively:

π_k = ω_k / ω   (1)

X̄_k = (1/ω_k) Σ_{Xn ∈ k} Xn   (2)

where π_k is the prior probability of class k, X̄_k is the mean of class k, ω is the total number of samples, ω_k is the number of samples of class k, and Xn is a sample point;
regularized discriminant analysis RDA ameliorates the effects of multicollinearity by modifying the singular covariance values; the sample covariance estimate of each class is as follows:

Σ̂_k = (1/ω_k) Σ_{Xn ∈ k} (Xn − X̄_k)(Xn − X̄_k)^T   (3)

where Xn is a sample point and X̄_k is the mean of class k;
the covariance matrix is further adjusted by introducing a shrinkage parameter γ:

Σ̂_k(λ, γ) = (1 − γ) Σ̂_k(λ) + (γ/p) tr[Σ̂_k(λ)] I,  with Σ̂_k(λ) = ((1 − λ) S_k + λ S) / ((1 − λ) ω_k + λ ω)   (4)

where λ is the regularization parameter, 0 ≤ λ ≤ 1, p is the dimension of the independent variables, I is the identity matrix and γ is the shrinkage parameter;
the optimization target is J(w):

J(w) = (w^T S w) / (w^T S_k w)   (5)

the above formula is the generalized Rayleigh quotient of S_k and S, where S_k = ω_k Σ̂_k and S is the corresponding pooled scatter matrix over all classes; this is the objective that RDA maximizes; the maximum of J is the largest eigenvalue of the matrix S_k^{-1} S, and the corresponding direction is the eigenvector belonging to that largest eigenvalue; solving gives w = S_k^{-1}(μ_1 − μ_2), i.e. the determined optimal projection direction, with w^T the transposed projection; projecting the samples of the training set onto the w direction gives:

y = w^T X   (6)

ρ_RDA = w^T (X − X̄)   (7)

where w^T is the matrix transpose of the weights, X is the data value and X̄ is the average of the data values.
4. The electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion according to claim 1, characterized in that in step S3, the solving step of ρ_QDA is as follows:
let the sample data set X = (X1, X2, …, Xn) obey a multivariate Gaussian distribution with mean vector μ_i, expressed as:

μ_i = (1/ω_i) Σ_{Xn ∈ class i} Xn   (8)

the covariance matrix Σ_j of the samples is calculated as:

Σ_j = (1/ω_j) Σ_{Xn ∈ class j} (Xn − μ_j)(Xn − μ_j)^T, j = 1, 2   (9)-(10)

the intra-class divergence matrix S_w is obtained as:

S_w = Σ_1 + Σ_2   (11)

and the inter-class divergence matrix S_b is defined simultaneously as:

S_b = (μ_1 − μ_2)(μ_1 − μ_2)^T   (12)

the optimization target is J(w):

J(w) = (w^T S_b w) / (w^T S_w w)   (13)

the above formula is the generalized Rayleigh quotient of S_w and S_b, which is the objective that QDA maximizes; the maximum of J is the largest eigenvalue of the matrix S_w^{-1} S_b, and the corresponding direction is the eigenvector belonging to that largest eigenvalue; solving gives w = S_w^{-1}(μ_1 − μ_2), i.e. the determined optimal projection direction; projecting the samples of the training set onto the w direction gives:

y = w^T X   (14)

ρ_QDA = w^T (X − X̄)   (15)
CN202110037508.9A 2021-01-12 2021-01-12 Electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion Active CN112733727B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110037508.9A CN112733727B (en) 2021-01-12 2021-01-12 Electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110037508.9A CN112733727B (en) 2021-01-12 2021-01-12 Electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion

Publications (2)

Publication Number Publication Date
CN112733727A true CN112733727A (en) 2021-04-30
CN112733727B CN112733727B (en) 2022-04-19

Family

ID=75590657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110037508.9A Active CN112733727B (en) 2021-01-12 2021-01-12 Electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion

Country Status (1)

Country Link
CN (1) CN112733727B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115018046A (en) * 2022-05-17 2022-09-06 海南政法职业学院 Deep learning method for detecting malicious traffic of mobile APP

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521505A (en) * 2011-12-08 2012-06-27 杭州电子科技大学 Brain electric and eye electric signal decision fusion method for identifying control intention
CN102629156A (en) * 2012-03-06 2012-08-08 上海大学 Method for achieving motor imagery brain computer interface based on Matlab and digital signal processor (DSP)
CN108231067A (en) * 2018-01-13 2018-06-29 福州大学 Sound scenery recognition methods based on convolutional neural networks and random forest classification
CN111259741A (en) * 2020-01-09 2020-06-09 燕山大学 Electroencephalogram signal classification method and system
CN111523601A (en) * 2020-04-26 2020-08-11 道和安邦(天津)安防科技有限公司 Latent emotion recognition method based on knowledge guidance and generation counterstudy

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521505A (en) * 2011-12-08 2012-06-27 杭州电子科技大学 Brain electric and eye electric signal decision fusion method for identifying control intention
CN102629156A (en) * 2012-03-06 2012-08-08 上海大学 Method for achieving motor imagery brain computer interface based on Matlab and digital signal processor (DSP)
CN108231067A (en) * 2018-01-13 2018-06-29 福州大学 Sound scenery recognition methods based on convolutional neural networks and random forest classification
CN111259741A (en) * 2020-01-09 2020-06-09 燕山大学 Electroencephalogram signal classification method and system
CN111523601A (en) * 2020-04-26 2020-08-11 道和安邦(天津)安防科技有限公司 Latent emotion recognition method based on knowledge guidance and generation counterstudy

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
VIKTOR ROZGIC et al.: "Robust EEG emotion classification using segment level decision fusion", 2013 IEEE International Conference on Acoustics, Speech and Signal Processing *
FU Rongrong et al.: "Single-trial motor imagery EEG signal recognition method based on sparse common spatial pattern and Fisher discriminant analysis", Journal of Biomedical Engineering *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115018046A (en) * 2022-05-17 2022-09-06 海南政法职业学院 Deep learning method for detecting malicious traffic of mobile APP
CN115018046B (en) * 2022-05-17 2023-09-15 海南政法职业学院 Deep learning method for detecting malicious flow of mobile APP

Also Published As

Publication number Publication date
CN112733727B (en) 2022-04-19

Similar Documents

Publication Publication Date Title
CN113011239B (en) Motor imagery classification method based on optimal narrow-band feature fusion
CN106108894A (en) A kind of emotion electroencephalogramrecognition recognition method improving Emotion identification model time robustness
CN105930663B (en) Hand tremor signal and audio signal classification method based on evolution fuzzy rule
CN105184325A (en) Human body action recognition method and mobile intelligent terminal
CN107977651B (en) Common spatial mode spatial domain feature extraction method based on quantization minimum error entropy
CN103092971B (en) A kind of sorting technique for brain-computer interface
WO2022183966A1 (en) Electroencephalogram signal classification method and apparatus, device, storage medium and program product
CN103262118A (en) Attribute value estimation device, attribute value estimation method, program, and recording medium
CN111091074A (en) Motor imagery electroencephalogram signal classification method based on optimal region common space mode
CN114237046B (en) Partial discharge pattern recognition method based on SIFT data feature extraction algorithm and BP neural network model
CN110399805A (en) The Mental imagery Method of EEG signals classification of semi-supervised learning optimization SVM
CN111951637A (en) Task scenario-related unmanned aerial vehicle pilot visual attention distribution mode extraction method
Giles et al. A subject-to-subject transfer learning framework based on Jensen-Shannon divergence for improving brain-computer interface
CN107992846A (en) Block face identification method and device
CN111582082B (en) Two-classification motor imagery electroencephalogram signal identification method based on interpretable clustering model
CN114492513A (en) Electroencephalogram emotion recognition method for adaptation to immunity domain based on attention mechanism in cross-user scene
CN104978569A (en) Sparse representation based incremental face recognition method
CN112733727B (en) Electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion
CN109543637A (en) A kind of face identification method, device, equipment and readable storage medium storing program for executing
Ramakrishnan et al. Epileptic eeg signal classification using multi-class convolutional neural network
CN111652138A (en) Face recognition method, device and equipment for wearing mask and storage medium
US10963805B2 (en) Regression analysis system and regression analysis method that perform discrimination and regression simultaneously
CN111611963B (en) Face recognition method based on neighbor preservation canonical correlation analysis
CN112233742A (en) Medical record document classification system, equipment and storage medium based on clustering
CN112438741A (en) Driving state detection method and system based on electroencephalogram feature transfer learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant