CN111723661B - Brain-computer interface migration learning method based on manifold embedded distribution alignment - Google Patents
- Publication number: CN111723661B (application number CN202010417830.XA)
- Authority: CN (China)
- Prior art keywords: manifold, distribution, data, matrix, brain
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F2218/12 — Aspects of pattern recognition specially adapted for signal processing: Classification; Matching
- G06F18/214 — Pattern recognition: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06V10/20 — Image or video recognition or understanding: Image preprocessing
- G06V10/40 — Extraction of image or video features
- G06F2218/08 — Feature extraction
- G06V2201/03 — Recognition of patterns in medical or anatomical images
Abstract
The invention discloses a brain-computer interface transfer learning method based on manifold embedded distribution alignment, comprising the following steps: acquiring EEG data of a source subject and EEG data of a target subject, respectively; preprocessing the EEG data and extracting features; constructing a transfer learning model based on manifold embedded distribution alignment and training it on the data to obtain a trained classifier; and using the trained classifier to classify the unlabeled EEG data of the target subject. On the basis of Riemann tangent plane mapping and manifold feature transformation, the invention integrates feature distribution alignment into the training of the classifier, so that an effective classifier is obtained by training. The invention can effectively improve the performance of the brain-computer interface system for the target user and lighten the user's training burden.
Description
Technical Field
The invention relates to the field of brain-computer interface research, and in particular to a brain-computer interface transfer learning method based on manifold embedded distribution alignment.
Background
The brain-computer interface (BCI, Brain-Computer Interface) is a communication and control pathway between the brain and the external environment via a computer or other electronic device, independent of peripheral nerves and muscle tissue. A BCI system acquires electroencephalogram signals, converts them into control commands through signal processing, and transmits the commands to external equipment, thereby realizing external control by the brain. The technology originated in the 1970s and is a cross-disciplinary technology involving neuroscience, medicine, signal detection, signal processing, pattern recognition, and other fields. Brain-computer interfaces are currently used mainly in medical rehabilitation, bringing convenience to patients who have lost motor function but whose brain function is relatively intact.
Because electroencephalogram signals have poor stability and a low signal-to-noise ratio, in practical applications a brain-computer interface requires a long user training period to generate labeled training samples, so that a reliable classification model can be trained before the interface is put into normal use. This tedious training phase clearly burdens both healthy users and medical patients in their use of brain-computer interface products. Transfer learning describes the process of using data recorded in one task to improve performance on another, related task. Applied to brain-computer interfaces, transfer learning can use electroencephalogram (EEG) data from other users to improve the initial performance of a model for the current user, thereby reducing the number of training samples the current user must provide. There is therefore a need for an effective transfer learning method designed for brain-computer interface systems. However, the transfer learning techniques currently applied to brain-computer interfaces have various limitations, and their final performance is not ideal.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art by providing a brain-computer interface transfer learning method based on manifold embedded distribution alignment, which, when applied, can effectively reduce the number of labeled training samples required from brain-computer interface users. Using the labeled data of other users and the unlabeled data of the current user, the technique integrates feature distribution alignment into the training of the classifier on the basis of manifold tangent plane mapping and subspace learning, so that an effective classifier is learned and the performance of the brain-computer interface system for the current user is effectively improved.
The aim of the invention is achieved by the following technical scheme:
a brain-computer interface migration learning method based on manifold embedded distribution alignment comprises the following steps:
S1, respectively acquiring EEG data D_s of a source subject and EEG data D_t of a target subject;
S2, preprocessing EEG data and extracting features;
s3, constructing a migration learning model based on manifold embedding distribution alignment, training the migration learning model by using data, and solving model parameters in the model, thereby obtaining a trained classifier;
s4, classifying the unlabeled EEG data of the target subject by using a classifier.
In step S1, the EEG data D_s of the source subject contains n trials, all of which are labeled; the EEG data D_t of the target subject contains m trials, none of which is labeled; n ≥ 1 and m ≥ 1.
The step S2 specifically includes:
S21, band-pass filtering the EEG signals with a fifth-order Butterworth filter with pass band 8-30 Hz;
S22, intercepting the EEG signal sample X_i ∈ R^{n_e × T_s} generated 0.5-2.5 s after the user performs the mental task, where X_i denotes the sample of the i-th trial, n_e the number of recording channels, R the set of real numbers, and T_s the number of sampling time points;
S23, for the i-th trial, estimating the spatial covariance matrix with the sample covariance matrix:

P_i = X_i X_i^T / (T_s − 1)

where the superscript T denotes the matrix transpose.
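Steps S21-S23 can be sketched as follows. This is a minimal illustration rather than the patented implementation: the 250 Hz sampling rate matches the BCI Competition IV-2a recordings used in the example below, and the unbiased 1/(T_s − 1) normalization of the sample covariance is an assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_trial(raw, fs=250.0, band=(8.0, 30.0), window=(0.5, 2.5)):
    """Steps S21-S22: band-pass filter one EEG trial and cut the post-cue window.

    raw: array of shape (n_channels, n_samples); the task cue is assumed at t = 0.
    """
    nyq = fs / 2.0
    b, a = butter(5, [band[0] / nyq, band[1] / nyq], btype="band")  # fifth-order Butterworth, 8-30 Hz
    filtered = filtfilt(b, a, raw, axis=1)                          # zero-phase filtering
    start, stop = int(window[0] * fs), int(window[1] * fs)
    return filtered[:, start:stop]                                  # X_i of shape (n_e, T_s)

def spatial_covariance(X):
    """Step S23: sample covariance estimate P_i = X_i X_i^T / (T_s - 1)."""
    return X @ X.T / (X.shape[1] - 1)
```

A covariance matrix obtained this way is symmetric positive semi-definite, which is what the Riemann geodesic machinery of step S31 requires.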
In step S3, the building of the manifold embedding distribution alignment-based migration learning model includes the following steps:
S31, Riemann tangent plane mapping, which projects the trial data set of each subject (the corresponding set of spatial covariance matrices) onto the tangent plane at its Riemann mean, producing an n_e(n_e+1)/2-dimensional vector s_i as the initial feature for the subsequent manifold feature transformation:

s_i = upper( log( P̄^{−1/2} P_i P̄^{−1/2} ) )

where the upper(·) operator keeps the upper-triangular part of the symmetric matrix and vectorizes it, giving unit weight to the diagonal elements and weight √2 to the off-diagonal elements, and P̄ denotes the Riemann mean;
the Riemann mean is the center of a set of covariance matrices under the Riemann geodesic distance, calculated as:

P̄ = argmin_P Σ_{i=1}^{I} δ_R²(P, P_i)

where I denotes the number of covariance matrices and δ_R²(P, P_i) is the squared Riemann geodesic distance between the covariance matrices P and P_i;
wherein the Riemann geodesic distance is defined as

δ_R(P_1, P_2) = ‖log(P_1^{−1} P_2)‖_F = [ Σ_{i=1}^{n_e} log² λ_i ]^{1/2}

where ‖·‖_F denotes the Frobenius norm and λ_i, i = 1, …, n_e, are the eigenvalues of P_1^{−1} P_2;
By measuring distances between covariance matrices with the Riemann geodesic distance, the Riemann tangent plane mapping effectively improves the class discriminability of the data domain; moreover, the vector features obtained by projection onto the tangent plane at the Riemann center make the center points of the source-domain and target-domain data zero, which reduces the difference between the two data domains to a certain extent.
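The Riemann-geometric quantities of step S31 (geodesic distance, Riemann mean, tangent plane mapping) can be sketched with plain NumPy eigendecompositions. The fixed-point iteration used for the mean is one common way to compute the Karcher mean and is an assumption, not taken from the patent.

```python
import numpy as np

def _powm(S, p):
    """Matrix power of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * w ** p) @ V.T

def _logm(S):
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def _expm(S):
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def riemann_distance(P1, P2):
    """delta_R(P1, P2) = ||log(P1^{-1/2} P2 P1^{-1/2})||_F (eigenvalues of P1^{-1} P2)."""
    A = _powm(P1, -0.5)
    lam = np.linalg.eigvalsh(A @ P2 @ A)
    return np.sqrt(np.sum(np.log(lam) ** 2))

def riemann_mean(covs, iters=50, tol=1e-9):
    """Karcher-mean fixed-point iteration: argmin_P sum_i delta_R^2(P, P_i)."""
    P = np.mean(covs, axis=0)                 # start from the arithmetic mean
    for _ in range(iters):
        Ph, Pih = _powm(P, 0.5), _powm(P, -0.5)
        S = np.mean([_logm(Pih @ C @ Pih) for C in covs], axis=0)
        P = Ph @ _expm(S) @ Ph                # step along the mean tangent direction
        if np.linalg.norm(S) < tol:
            break
    return P

def tangent_map(P, P_mean):
    """s = upper(log(P_mean^{-1/2} P P_mean^{-1/2})): unit weight on the diagonal,
    sqrt(2) weight off-diagonal, giving an n_e(n_e+1)/2-dimensional vector."""
    Pih = _powm(P_mean, -0.5)
    Lg = _logm(Pih @ P @ Pih)
    iu = np.triu_indices(Lg.shape[0])
    w = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    return w * Lg[iu]
```

Projecting each trial at the Riemann mean of its own subject makes that subject's data centered at the origin of the tangent space, which is the center-alignment effect described above.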
S32, performing manifold feature transformation with the GFK (Geodesic Flow Kernel) method: the source data set and the target data set are embedded into a Grassmann manifold, a geodesic flow is constructed between the two points, and an infinite number of subspaces are integrated along the flow Φ;
specifically, the original features are projected into these subspaces to form feature vectors of infinite dimension; the inner product between these feature vectors defines a kernel function that can be computed in closed form over the original feature space; the kernel encapsulates the incremental changes between the subspaces, which underlie the differences and commonalities between the two domains, so the learning algorithm uses this kernel to derive a low-dimensional representation that is invariant across domains;
meanwhile, the features in the manifold space can be expressed as z = g(s) = Φ(t)^T s, where g denotes the manifold transformation function, Φ(t) parameterizes the geodesic between the two points, and s is the feature obtained by the Riemann tangent plane mapping; the transformed features z_i and z_j define a positive semi-definite geodesic flow kernel:

⟨z_i, z_j⟩ = ∫_0^1 (Φ(t)^T s_i)^T (Φ(t)^T s_j) dt = s_i^T G s_j

where G denotes the transformation matrix of the kernel;
the features of the original space can thus be transformed into the Grassmann manifold: z = √G s.
S33, integrating the distribution-aligned classifier, a transfer learning framework based on the structural risk minimization principle and regularization theory; specifically, the classifier model optimizes the following three objective functions:
1) minimizing the structural risk function on the labeled source-domain data Ds;
2) minimizing the distribution difference between the joint probability distributions Js and Jt;
3) maximizing the manifold consistency underlying the marginal distributions Ps and Pt.
Let the prediction function (i.e. the classifier) be denoted f = w^T φ(z), where w is the classifier parameter vector and φ: z → H is the feature mapping function that projects the original feature vector into the Hilbert space H; using the squared loss, f can be formulated as

f = argmin_{f ∈ H_K} Σ_{i=1}^{n} (y_i − f(z_i))² + σ‖f‖_K² + λ D_{f,K}(J_s, J_t) + γ M_{f,K}(P_s, P_t)

where K is the kernel induced by φ such that ⟨φ(z_i), φ(z_j)⟩ = K(z_i, z_j), and σ, λ and γ are regularization parameters; the remaining symbols are defined below;
the structural risk function on the labeled source-domain data Ds refers to:

min_{f ∈ H_K} Σ_{i=1}^{n} (y_i − f(z_i))² + σ‖f‖_K²

where H_K is the set of classifiers in the kernel space, ‖f‖_K² is the squared norm of f, σ is the shrinkage regularization parameter, and (y_i − f(z_i))² is the squared loss;
the minimizing of the distribution difference between the joint probability distributions Js and Jt refers to simultaneously minimizing the distribution distance between the edge distributions Ps and Pt and the distribution distance between the conditional distributions Qs and Qt:
wherein Df,K (P s ,P t ) For the distribution distance between the edge distributions Ps and Pt,c is the number of categories for the distribution distance between the conditional distributions Qs and Qt; measuring the distribution distance by taking the projected maximum mean difference MMD as a distance measure; regularization of structural risk by joint distribution, at +.>The sample moments of both the marginal distribution and the conditional distribution are pulled closer.
maximizing the manifold consistency underlying the marginal distributions Ps and Pt refers to manifold regularization under geodesic smoothness:

M_{f,K}(P_s, P_t) = Σ_{i,j=1}^{n+m} W_ij (f(z_i) − f(z_j))² = Σ_{i,j=1}^{n+m} f(z_i) L_ij f(z_j)

where W_ij is the element in row i, column j of the graph affinity matrix W, and L_ij is the element in row i, column j of the normalized graph Laplacian matrix L;
by regularizing the structural risk with manifold regularization, the marginal distributions can be fully exploited to maximize the consistency between the predicted structure of f and the intrinsic manifold structure of the data; this helps align the discriminative hyperplanes across the two domains;
the learning algorithm of the classifier is as follows:
to solve this optimization problem effectively, the following representer theorem is used:

f(z) = Σ_{i=1}^{n+m} α_i K(z_i, z), with w = Σ_{i=1}^{n+m} α_i φ(z_i)

where K is the kernel induced by φ, α_i are the expansion coefficients, and w is the weight vector;
re-expressing the three objective functions with the representer theorem yields the final objective function:

α = argmin_α ‖(Y − α^T K) E‖_F² + σ tr(α^T K α) + tr( α^T K (λM + γL) K α )

where Y is the label matrix, K is the kernel matrix, E is the diagonal label-indicator matrix, and M is the MMD matrix.
Setting the derivative of the objective function with respect to α to zero yields

α = ((E + λM + γL)K + σI)^{−1} E Y^T

where I is the identity matrix.
The step S4 specifically comprises: computing the classification output f(z) of the unlabeled EEG data of the target subject from the K and α obtained in step S33; the final predicted label is the class corresponding to the maximum value of the classification output.
Compared with the prior art, the invention has the following advantages and beneficial effects:
according to the invention, the covariance matrix is used as an initial characteristic of data, the distances between the covariance matrices are accurately measured through the Riemann geodesic distance, high-precision classification recognition can be obtained, and the difference between EEG data of a source subject and EEG data of a target subject are preliminarily reduced after Li Manqie plane projection. The distribution variance is then further reduced in combination with manifold feature transformation in subspace learning, while feature dimensions are reduced. Finally, the distribution alignment is integrated into the training of the classifier, so that the classification accuracy of the EEG data of the brain-computer interface of the target subject is improved. In summary, the present invention utilizes the labeled data of other subjects and the unlabeled data of the current subject, and combines the characteristics of the EEG data, so that the classification performance of the brain-computer interface system for the current subject is effectively improved by advanced transfer learning technology, and the burden of the current subject is reduced to a certain extent.
Drawings
FIG. 1 is a flow chart of a brain-computer interface transfer learning method based on manifold embedding distribution alignment according to the present invention;
FIG. 2 is a diagram showing the classification accuracy on the BCI Competition IV-2a dataset of the three methods employed in this example.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but embodiments of the present invention are not limited thereto.
As shown in fig. 1, the brain-computer interface migration learning method based on manifold embedding distribution alignment of the invention comprises the following steps:
S1, respectively acquiring EEG data D_s of a source subject and EEG data D_t of a target subject;
S2, preprocessing EEG data and extracting features;
s3, constructing a migration learning model based on manifold embedding distribution alignment, training the migration learning model by using data, and solving model parameters in the model, thereby obtaining a trained classifier;
s4, classifying the unlabeled EEG data of the target subject by using a classifier.
In step S1, the EEG data D_s of the source subject contains n trials, all of which are labeled; the EEG data D_t of the target subject contains m trials, none of which is labeled;
in step S2, the steps of data preprocessing and feature extraction of EEG data include:
S21, band-pass filtering the EEG signals with a fifth-order Butterworth filter with pass band 8-30 Hz;
S22, intercepting the EEG signal sample X_i ∈ R^{n_e × T_s} generated 0.5-2.5 s after the user performs the mental task, where X_i denotes the sample of the i-th trial, n_e the number of recording channels, R the set of real numbers, and T_s the number of sampling time points.
S23, for the i-th trial, estimating the spatial covariance matrix with the sample covariance matrix:

P_i = X_i X_i^T / (T_s − 1)
in step S3, a migration learning model based on manifold embedding distribution alignment is constructed, including the steps of:
S31, Riemann tangent plane mapping, which projects the trial data set of each subject (the corresponding set of spatial covariance matrices) onto the tangent plane at its Riemann mean, producing an n_e(n_e+1)/2-dimensional vector s_i as the initial feature for the subsequent manifold feature transformation:

s_i = upper( log( P̄^{−1/2} P_i P̄^{−1/2} ) )

where the upper(·) operator keeps the upper-triangular part of the symmetric matrix and vectorizes it, giving unit weight to the diagonal elements and weight √2 to the off-diagonal elements, and P̄ denotes the Riemann mean;
the Riemann mean is the center of a set of covariance matrices under the Riemann geodesic distance, calculated as:

P̄ = argmin_P Σ_{i=1}^{I} δ_R²(P, P_i)

where I denotes the number of covariance matrices and δ_R²(P, P_i) is the squared Riemann geodesic distance between the covariance matrices P and P_i;
wherein the Riemann geodesic distance is defined as

δ_R(P_1, P_2) = ‖log(P_1^{−1} P_2)‖_F = [ Σ_{i=1}^{n_e} log² λ_i ]^{1/2}

where ‖·‖_F denotes the Frobenius norm and λ_i, i = 1, …, n_e, are the eigenvalues of P_1^{−1} P_2;
By measuring distances between covariance matrices with the Riemann geodesic distance, the Riemann tangent plane mapping effectively improves the class discriminability of the data domain; moreover, the vector features obtained by projection onto the tangent plane at the Riemann center make the center points of the source-domain and target-domain data zero, which reduces the difference between the two data domains to a certain extent.
S32, manifold feature transformation adopts the GFK (Geodesic Flow Kernel) method, whose main idea is as follows: the source and target data sets are embedded into a Grassmann manifold, a geodesic flow is constructed between the two points, and an infinite number of subspaces are integrated along the flow Φ. Specifically, the original features are projected into these subspaces to form feature vectors of infinite dimension. The inner product between these feature vectors defines a kernel function that can be computed in closed form over the original feature space. The kernel encapsulates the incremental changes between the subspaces, which underlie the differences and commonalities between the two domains. The learning algorithm therefore uses this kernel to derive a low-dimensional representation that is invariant across domains.
In particular, the features in the manifold space may be expressed as z = g(s) = Φ(t)^T s, where g denotes the manifold transformation function, Φ(t) parameterizes the geodesic between the two points, and s is the feature obtained by the Riemann tangent plane mapping; the transformed features z_i and z_j define a positive semi-definite geodesic flow kernel

⟨z_i, z_j⟩ = ∫_0^1 (Φ(t)^T s_i)^T (Φ(t)^T s_j) dt = s_i^T G s_j

where G denotes the transformation matrix of the kernel;
the features of the original space can thus be transformed into the Grassmann manifold: z = √G s.
S33, integrating the distribution-aligned classifier, a transfer learning framework based on the structural risk minimization principle and regularization theory. Specifically, the classifier model optimizes the following three objective functions:
1) minimizing the structural risk function on the labeled source-domain data Ds;
2) minimizing the distribution difference between the joint probability distributions Js and Jt;
3) maximizing the manifold consistency underlying the marginal distributions Ps and Pt.
Let the prediction function (i.e. the classifier) be denoted f = w^T φ(z), where w is the classifier parameter vector and φ: z → H is the feature mapping function that projects the original feature vector into the Hilbert space H. Using the squared loss, f can be formulated as

f = argmin_{f ∈ H_K} Σ_{i=1}^{n} (y_i − f(z_i))² + σ‖f‖_K² + λ D_{f,K}(J_s, J_t) + γ M_{f,K}(P_s, P_t)

where K is the kernel induced by φ such that ⟨φ(z_i), φ(z_j)⟩ = K(z_i, z_j), and σ, λ and γ are regularization parameters.
1) The structural risk function on the labeled source-domain data Ds refers to:

min_{f ∈ H_K} Σ_{i=1}^{n} (y_i − f(z_i))² + σ‖f‖_K²

where H_K is the set of classifiers in the kernel space, ‖f‖_K² is the squared norm of f, σ is the shrinkage regularization parameter, and (y_i − f(z_i))² is the squared loss.
2) Minimizing the distribution difference between the joint probability distributions Js and Jt. By the chain rule of probability, J = P·Q; therefore, we try to simultaneously minimize the distance between the marginal distributions Ps and Pt and the distance between the conditional distributions Qs and Qt.
a. Marginal distribution alignment
The projected maximum mean discrepancy (MMD) is used as the distance measure to minimize the distance between the marginal distributions Ps and Pt:

D_{f,K}(P_s, P_t) = ‖ (1/n) Σ_{i=1}^{n} f(z_i) − (1/m) Σ_{j=n+1}^{n+m} f(z_j) ‖²
b. Conditional distribution alignment
The projected MMD of each class c ∈ {1, …, C} is computed using both the true labels and the pseudo labels, pulling the class centroids of the two distributions Q_s(z_s|y_s) and Q_t(z_t|ŷ_t) closer:

D_{f,K}(Q_s^{(c)}, Q_t^{(c)}) = ‖ (1/n_c) Σ_{z_i ∈ D_s^{(c)}} f(z_i) − (1/m_c) Σ_{z_j ∈ D_t^{(c)}} f(z_j) ‖²

where D_s^{(c)} = {z_i : z_i ∈ D_s, y(z_i) = c} is the set of samples belonging to class c in the source data, y(z_i) is the true label of z_i, and n_c = |D_s^{(c)}|; correspondingly, D_t^{(c)} = {z_j : z_j ∈ D_t, ŷ(z_j) = c} is the set of samples belonging to class c in the target data, ŷ(z_j) is the pseudo (predicted) label of z_j, and m_c = |D_t^{(c)}|.
Combining the above formulas yields the joint distribution adaptation regularizer, computed as

D_{f,K}(J_s, J_t) = D_{f,K}(P_s, P_t) + Σ_{c=1}^{C} D_{f,K}(Q_s^{(c)}, Q_t^{(c)})

Regularizing the structural risk with the joint distribution pulls closer the sample moments of both the marginal and the conditional distributions in the reproducing kernel Hilbert space H_K.
3) Maximizing the manifold consistency underlying the marginal distributions Ps and Pt means manifold regularization under geodesic smoothness:

M_{f,K}(P_s, P_t) = Σ_{i,j=1}^{n+m} W_ij (f(z_i) − f(z_j))² = Σ_{i,j=1}^{n+m} f(z_i) L_ij f(z_j)

where W is the graph affinity matrix and L is the normalized graph Laplacian matrix. W is defined as

W_ij = sim(z_i, z_j) if z_i ∈ N_p(z_j) or z_j ∈ N_p(z_i), and W_ij = 0 otherwise

where sim(·,·) is a similarity measure (e.g. cosine similarity) and N_p(z_i) denotes the set of p nearest neighbours of the point z_i. L is computed as L = I − D^{−1/2} W D^{−1/2}, where D is a diagonal matrix with each element D_ii = Σ_j W_ij.
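The graph affinity matrix W and the normalized Laplacian L can be built as below. The cosine-similarity affinity on a p-nearest-neighbour graph is an assumed (and common) choice; the patent text only fixes the p-nearest-neighbour structure and L = I − D^{−1/2} W D^{−1/2}.

```python
import numpy as np

def graph_laplacian(Z, p=10):
    """Normalized graph Laplacian L = I - D^{-1/2} W D^{-1/2} on a p-NN graph.

    Z: (n+m, d) matrix of manifold features; W_ij is nonzero only when z_i is
    among the p nearest neighbours of z_j or vice versa (affinity: cosine).
    """
    n = Z.shape[0]
    U = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    S = U @ U.T                         # cosine similarity between all pairs
    np.fill_diagonal(S, -np.inf)        # a point is not its own neighbour
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(-S[i])[:p]      # p most similar points to z_i
        W[i, nn] = S[i, nn]
    W = np.maximum(W, W.T)              # symmetrize: i in N_p(j) or j in N_p(i)
    d = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return np.eye(n) - Dinv @ W @ Dinv
```

For nonnegative affinities the resulting L is symmetric positive semi-definite, so the regularizer tr(α^T K L K α) penalizes predictions that vary sharply between neighbouring samples.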
By regularizing the structural risk with manifold regularization, the marginal distributions can be fully exploited to maximize the consistency between the predicted structure of f and the intrinsic manifold structure of the data. This helps align the discriminative hyperplanes across the two domains.
The learning algorithm of the classifier is as follows:
To solve this optimization problem effectively, the following representer theorem is used:

f(z) = Σ_{i=1}^{n+m} α_i K(z_i, z), with w = Σ_{i=1}^{n+m} α_i φ(z_i)

where K is the kernel induced by φ, α_i are the expansion coefficients, and w is the weight vector.
The structural risk is first reformulated using the representer theorem:

Σ_{i=1}^{n} (y_i − f(z_i))² + σ‖f‖_K² = ‖(Y − α^T K) E‖_F² + σ tr(α^T K α)

where E is the diagonal label-indicator matrix with E_ii = 1 if z_i ∈ D_s and E_ii = 0 otherwise; Y = [y_1, …, y_{n+m}] is the label matrix (although the target labels are unknown, they are filtered out by the label-indicator matrix E); K ∈ R^{(n+m)×(n+m)} is the kernel matrix with K_ij = K(z_i, z_j); and α = (α_1, …, α_{n+m}) is the classifier parameter vector.
The joint distribution alignment regularizer is re-expressed as:

D_{f,K}(J_s, J_t) = tr( α^T K M K α ), with M = Σ_{c=0}^{C} M_c

where M_c, c ∈ {0, 1, …, C}, are MMD matrices, computed as follows: (M_0)_ij = 1/n² if z_i, z_j ∈ D_s; (M_0)_ij = 1/m² if z_i, z_j ∈ D_t; and (M_0)_ij = −1/(nm) otherwise; for c ≥ 1, (M_c)_ij = 1/n_c² if z_i, z_j ∈ D_s^{(c)}; (M_c)_ij = 1/m_c² if z_i, z_j ∈ D_t^{(c)}; (M_c)_ij = −1/(n_c m_c) if one of z_i, z_j belongs to D_s^{(c)} and the other to D_t^{(c)}; and (M_c)_ij = 0 otherwise.
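The MMD matrices can be assembled directly from the label information; this sketch follows the JDA-style construction the formulas describe.

```python
import numpy as np

def mmd_matrix(n, m, ys=None, yt_pseudo=None, c=None):
    """MMD matrix over the n source + m target samples.

    c=None builds the marginal matrix M_0; an integer c builds M_c from the
    true source labels ys and the pseudo target labels yt_pseudo.
    """
    e = np.zeros(n + m)
    if c is None:                        # marginal distribution term
        e[:n] = 1.0 / n
        e[n:] = -1.0 / m
    else:                                # conditional term for class c
        src = np.flatnonzero(np.asarray(ys) == c)
        tgt = n + np.flatnonzero(np.asarray(yt_pseudo) == c)
        if src.size:
            e[src] = 1.0 / src.size
        if tgt.size:
            e[tgt] = -1.0 / tgt.size
    return np.outer(e, e)                # (M_c)_ij = e_i * e_j
```

The overall matrix is M = mmd_matrix(n, m) plus the sum of mmd_matrix(n, m, ys, yt_pseudo, c) over the classes; since the pseudo labels come from the current classifier, M is typically rebuilt over a few alternating iterations.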
Similarly, the manifold regularizer is re-expressed as:

M_{f,K}(P_s, P_t) = tr( α^T K L K α )
integrating the three parts to obtain an objective function:
where M is an MMD matrix.
Setting the derivative of the objective function with respect to α to zero yields

α = ((E + λM + γL)K + σI)^{−1} E Y^T

where I is the identity matrix;
multi-class extension: representation ofY if y (z) =c c =1, otherwise y c =0. The tag matrix isThe parameter matrix is->. In this way, the algorithm can be extended to multiple classes of problems.
In step S4, the classifier is used to classify the unlabeled EEG data of the target subject, that is, the classification output f (z) of the unlabeled EEG data of the target subject is calculated according to K and α obtained in step S33, and the final predicted label is the label class corresponding to the maximum value in the classification output.
As shown in fig. 2, this example reports the classification accuracy of three methods on the BCI Competition IV-2a dataset. Data from subjects S1, S3, S7, S8 and S9 were used; each time, two subjects were selected as the target subject and the source subject, respectively, and each reported value is the mean of the results of 4 runs of the learning method on the given target-subject dataset. The three methods are MDM (the minimum-distance-to-Riemann-mean classifier), MDM_RC (MDM applied after Riemann center alignment), and TMDA (the transfer learning method of the invention).
For MDM, since no transfer is performed, the learned features do not transfer, so the accuracy is low when the trained model is applied directly to the target-domain data. For MDM_RC, after Riemann center alignment the accuracy improves by about 20% over the no-transfer case, indicating that the learned features are transferable. The transfer learning method based on manifold embedded distribution alignment achieves higher classification accuracy than the other two methods, improving on MDM_RC by about 5%, with a recognition rate above 66%. These experimental results verify the effectiveness of the method of the invention, which can thus be applied to brain-computer interface transfer learning problems.
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto; any other change, modification, substitution, combination or simplification that does not depart from the spirit and principle of the present invention shall be regarded as an equivalent substitution and is included in the protection scope of the present invention.
Claims (4)
1. A brain-computer interface migration learning method based on manifold embedded distribution alignment is characterized in that: the method comprises the following steps:
S1, respectively acquiring EEG data D_s of a source subject and EEG data D_t of a target subject;
S2, preprocessing EEG data and extracting features;
s3, constructing a migration learning model based on manifold embedding distribution alignment, training the migration learning model by using data, and solving model parameters in the model, thereby obtaining a trained classifier; the method for constructing the migration learning model based on manifold embedding distribution alignment comprises the following steps:
s31, li Manqie plane mapping, which means that the test data set of each subject is projected onto a tangent plane at the Riemann mean value thereof to generate n e (n e Vector s of-1)/2 dimensions i As an initial feature for the following manifold feature transformation:
wherein the upper operator refers to the upper triangle part of the reserved symmetric matrix, and the diagonal line elements are given unit weight, while the non-diagonal line elements are given unit weightWeights are thus vectorized, +.>Representing a Riemann mean; p (P) i A sample covariance matrix representing the ith test;
the Riemannian mean is the center of a set of covariance matrices under the Riemannian geodesic distance, and is computed as:

P̄ = argmin_P Σ_{i=1..I} δ_R²(P, P_i)

wherein I represents the number of covariance matrices, and δ_R²(P, P_i) represents the square of the Riemannian geodesic distance between covariance matrices P and P_i;

wherein the Riemannian geodesic distance is defined as

δ_R(P_1, P_2) = ||log(P_1^(-1) P_2)||_F = [ Σ_{i=1..N} log² λ_i ]^(1/2)

wherein ||·||_F represents the Frobenius norm, and λ_i, i = 1, …, N, represent the eigenvalues of P_1^(-1) P_2;
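The Riemannian quantities of step S31 can be sketched in code. The following is a minimal NumPy/SciPy sketch, not the patented implementation: `riemann_distance` follows the eigenvalue formula above, `riemann_mean` uses the common fixed-point iteration (the claim does not specify a solver, so this choice is an assumption), and `tangent_space` applies the upper(·) vectorization with unit diagonal and √2 off-diagonal weights.

```python
import numpy as np
from scipy.linalg import eigvalsh, expm, fractional_matrix_power, logm

def riemann_distance(P1, P2):
    """delta_R(P1, P2) = sqrt(sum_i log^2 lambda_i), lambda_i eigenvalues of P1^{-1} P2."""
    lam = eigvalsh(P2, P1)            # generalized eigenvalues: P2 v = lam * P1 v
    return np.sqrt(np.sum(np.log(lam) ** 2))

def riemann_mean(covs, n_iter=50, tol=1e-10):
    """Riemannian mean: minimizer of the summed squared geodesic distances,
    found by a standard fixed-point iteration (solver choice is an assumption)."""
    P = np.mean(covs, axis=0)                      # start from the arithmetic mean
    for _ in range(n_iter):
        P_sqrt = fractional_matrix_power(P, 0.5)
        P_isqrt = fractional_matrix_power(P, -0.5)
        # average log-map of all samples in the tangent space at P
        T = np.mean([logm(P_isqrt @ C @ P_isqrt) for C in covs], axis=0)
        P = P_sqrt @ expm(T) @ P_sqrt              # step back onto the manifold
        if np.linalg.norm(T) < tol:
            break
    return np.real(P)

def tangent_space(covs, P_mean):
    """s_i = upper(log(Pbar^{-1/2} P_i Pbar^{-1/2})): unit weight on the diagonal,
    sqrt(2) on off-diagonal entries, vectorizing the upper triangle."""
    n = P_mean.shape[0]
    P_isqrt = fractional_matrix_power(P_mean, -0.5)
    iu = np.triu_indices(n)
    w = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    return np.real(np.array([w * logm(P_isqrt @ C @ P_isqrt)[iu] for C in covs]))
```

For two identical matrices the distance is 0, and scaling the second by e in each eigendirection yields δ_R = √n for n channels.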
S32, performing manifold feature transformation by the GFK method: the source and target data sets are embedded into a Grassmann manifold, a geodesic flow Φ(t) between the two points is then constructed, and the infinitely many subspaces along the flow Φ are integrated;
meanwhile, a feature in the manifold space can be expressed as z = g(s) = Φ(t)^T s, wherein g represents the manifold transformation function, Φ(t) represents a point on the geodesic between the two subspaces, and s is the feature obtained by the Riemannian tangent-space mapping; for transformed features z_i and z_j, a positive semi-definite geodesic flow kernel is defined:

<z_i, z_j> = ∫_0^1 (Φ(t)^T s_i)^T (Φ(t)^T s_j) dt = s_i^T G s_j

wherein G represents the geodesic flow kernel matrix;

the features of the original space can thus be transformed into the Grassmann manifold by z = G^(1/2) s;
S33, integrating a distribution-aligned classifier, which is a transfer learning framework based on the structural risk minimization principle and regularization theory; specifically, the classifier model aims at optimizing the following three objectives:
1) Minimizing the structural risk function on the source-domain labeled data Ds;
2) Minimizing the distribution difference between the joint probability distributions Js and Jt;
3) Maximizing the manifold consistency underlying the marginal distributions Ps and Pt;
let the prediction function be expressed as f = w^T φ(z), wherein w is the classifier parameter vector and φ is the feature mapping function that projects the original feature vector into the Hilbert space H; using the squared loss, f can be formulated as

f = argmin_{f∈H_K} Σ_{i=1..n_s} (y_i − f(z_i))² + σ||f||_K² + λ D_{f,K}(Js, Jt) + γ R_{f,K}(Ps, Pt)

wherein K is the kernel function induced by φ, such that <φ(z_i), φ(z_j)> = K(z_i, z_j), and σ, λ and γ are regularization parameters;
the structural risk function on the source-domain labeled data Ds refers to:

f = argmin_{f∈H_K} Σ_{i=1..n_s} (y_i − f(z_i))² + σ||f||_K²

wherein H_K is the set of classifiers in the kernel space, ||f||_K² is the squared norm of f, σ is the shrinkage regularization parameter, and (y_i − f(z_i))² is the squared loss function;
the minimizing of the distribution difference between the joint probability distributions Js and Jt refers to simultaneously minimizing the distribution distance between the marginal distributions Ps and Pt and the distribution distance between the conditional distributions Qs and Qt:

D_{f,K}(Js, Jt) = D_{f,K}(Ps, Pt) + Σ_{c=1..C} D_{f,K}^{(c)}(Qs, Qt)

wherein D_{f,K}(Ps, Pt) is the distribution distance between the marginal distributions Ps and Pt, D_{f,K}^{(c)}(Qs, Qt) is the distribution distance between the conditional distributions Qs and Qt for class c, and C is the number of classes; the distribution distances are measured by the projected maximum mean discrepancy (MMD);
the maximizing of the manifold consistency underlying the marginal distributions Ps and Pt refers to manifold regularization under geodesic smoothness:

R_{f,K}(Ps, Pt) = Σ_{i,j=1..n} W_ij (f(z_i) − f(z_j))² = Σ_{i,j=1..n} f(z_i) L_ij f(z_j)

wherein W_ij is the element in the i-th row and j-th column of the graph affinity matrix W, and L_ij is the element in the i-th row and j-th column of the normalized graph Laplacian matrix L;
the learning algorithm of the classifier is as follows:
in order to solve the optimization problem efficiently, the following representer theorem is used:

f(z) = Σ_{i=1..n} α_i K(z_i, z), i.e. w = Σ_{i=1..n} α_i φ(z_i),

wherein K is the kernel induced by φ, α_i are the expansion coefficients, and w is the weight vector;
re-expressing the three objective functions by means of the representer theorem yields the final objective function:

α* = argmin_α ||(Y − α^T K)E||_F² + σ tr(α^T K α) + tr( α^T K (λM + γL) K α )

wherein Y is the label matrix, K is the kernel matrix, E is the diagonal label-indicator matrix, and M is the MMD matrix;
differentiating the objective function with respect to α and setting the derivative to 0 gives
α = ((E + λM + γL)K + σI)^(-1) E Y^T,
Wherein I is an identity matrix;
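The closed-form solution above can be sketched end to end. This is a minimal illustration, not the patented code: it uses an RBF kernel, only the marginal MMD term (the conditional terms D^(c) would be added iteratively using pseudo-labels on the target data), and a symmetrized kNN affinity graph; the kernel choice, neighbourhood size and parameter values are all assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma_k):
    """Gaussian kernel K(a, b) = exp(-gamma_k * ||a - b||^2)."""
    d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma_k * np.maximum(d2, 0.0))

def distribution_aligned_solve(Zs, ys, Zt, n_class, sigma=0.1, lam=1.0, gamma=0.1, k=5):
    """Solve alpha = ((E + lam*M + gamma*L) K + sigma*I)^{-1} E Y^T
    with the marginal MMD matrix M and normalized graph Laplacian L."""
    ns, nt = len(Zs), len(Zt)
    n = ns + nt
    Z = np.vstack([Zs, Zt])
    K = rbf_kernel(Z, Z, gamma_k=1.0 / Z.shape[1])

    E = np.diag(np.r_[np.ones(ns), np.zeros(nt)])      # only source samples are labeled
    Y = np.zeros((n_class, n))
    Y[ys, np.arange(ns)] = 1.0                         # one-hot labels, zero columns for target

    e = np.r_[np.ones(ns) / ns, -np.ones(nt) / nt]     # marginal MMD vector
    M = np.outer(e, e)

    # normalized Laplacian L = I - D^{-1/2} W D^{-1/2} on a symmetrized kNN graph
    W = np.zeros((n, n))
    nn = np.argsort(-K, axis=1)[:, 1:k + 1]            # k nearest neighbours, self excluded
    for i in range(n):
        W[i, nn[i]] = K[i, nn[i]]
    W = np.maximum(W, W.T)
    d_inv = 1.0 / np.sqrt(np.maximum(W.sum(1), 1e-12))
    L = np.eye(n) - (d_inv[:, None] * W) * d_inv[None, :]

    alpha = np.linalg.solve((E + lam * M + gamma * L) @ K + sigma * np.eye(n), E @ Y.T)
    F = K @ alpha                                       # decision values f(z) for all samples
    return alpha, F[ns:].argmax(axis=1)                 # predicted target labels
```

On two well-separated synthetic clusters with a small domain shift, the solver labels the unlabeled target samples correctly despite having no target labels.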
S4, classifying the unlabeled EEG data of the target subject by using the trained classifier.
2. The brain-computer interface transfer learning method based on manifold embedding distribution alignment according to claim 1, characterized in that: in step S1, the EEG data Ds of the source subject contains n trials, all of which are labeled; the EEG data Dt of the target subject contains m trials, none of which is labeled; n ≥ 1 and m ≥ 1.
3. The brain-computer interface transfer learning method based on manifold embedding distribution alignment according to claim 1, characterized in that: the step S2 specifically includes:
S21, band-pass filtering the EEG signals using a fifth-order Butterworth filter with a pass band of 8-30 Hz;
S22, intercepting the EEG signal samples generated within 0.5-2.5 s after the user performs the mental task, with X_i ∈ R^(n_e × T_s) representing the sample of the i-th trial, wherein n_e represents the number of recording channels, R represents the set of real numbers, and T_s represents the number of sampling time points;
S23, for the i-th trial, estimating the spatial covariance matrix by the sample covariance matrix:

P_i = (1 / (T_s − 1)) X_i X_i^T
where T represents the transpose of the matrix.
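The preprocessing of claim 3 can be sketched as follows. This is a minimal SciPy sketch, not the patented code; the 250 Hz sampling rate is an assumption (BCI Competition IV-2a uses 250 Hz), and the unbiased 1/(T_s − 1) normalization of the sample covariance is the standard convention.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_trial(x, fs=250.0, band=(8.0, 30.0), t_win=(0.5, 2.5)):
    """Band-pass filter one EEG trial (n_channels x n_samples) with a 5th-order
    Butterworth filter, cut the 0.5-2.5 s post-cue window, and return the
    spatial covariance estimate P_i = X X^T / (T_s - 1)."""
    nyq = fs / 2.0
    b, a = butter(5, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    xf = filtfilt(b, a, x, axis=1)            # zero-phase filtering per channel
    lo, hi = int(t_win[0] * fs), int(t_win[1] * fs)
    X = xf[:, lo:hi]                          # n_e x T_s window after the cue
    Ts = X.shape[1]
    return X @ X.T / (Ts - 1)                 # sample covariance matrix
```

The result is a symmetric positive semi-definite n_e × n_e matrix, ready for the Riemannian processing of step S31.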
4. The brain-computer interface transfer learning method based on manifold embedding distribution alignment according to claim 3, characterized in that: the step S4 specifically includes: computing the classification output f(z) of the unlabeled EEG data of the target subject from the K and α obtained in step S33; the final predicted label is the class corresponding to the maximum value in the classification output.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010417830.XA CN111723661B (en) | 2020-05-18 | 2020-05-18 | Brain-computer interface migration learning method based on manifold embedded distribution alignment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111723661A CN111723661A (en) | 2020-09-29 |
CN111723661B true CN111723661B (en) | 2023-06-16 |
Family
ID=72564530
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112348081B (en) * | 2020-11-05 | 2024-04-02 | 平安科技(深圳)有限公司 | Migration learning method for image classification, related device and storage medium |
CN112364916B (en) * | 2020-11-10 | 2023-10-27 | 中国平安人寿保险股份有限公司 | Image classification method based on transfer learning, related equipment and storage medium |
CN112580436B (en) * | 2020-11-25 | 2022-05-03 | 重庆邮电大学 | Electroencephalogram signal domain adaptation method based on Riemann manifold coordinate alignment |
CN112465152B (en) * | 2020-12-03 | 2022-11-29 | 中国科学院大学宁波华美医院 | Online migration learning method suitable for emotional brain-computer interface |
CN112560937B (en) * | 2020-12-11 | 2024-03-19 | 杭州电子科技大学 | Method for moving and learning by utilizing motor imagery aligned in resting state |
CN112651432A (en) * | 2020-12-15 | 2021-04-13 | 华南师范大学 | P300 brain-computer interface system based on XDAWN spatial filter and Riemann geometry transfer learning |
CN113191206B (en) * | 2021-04-06 | 2023-09-29 | 华南理工大学 | Navigator signal classification method, device and medium based on Riemann feature migration |
CN113288170A (en) * | 2021-05-13 | 2021-08-24 | 浙江大学 | Electroencephalogram signal calibration method based on fuzzy processing |
CN113392733B (en) * | 2021-05-31 | 2022-06-21 | 杭州电子科技大学 | Multi-source domain self-adaptive cross-tested EEG cognitive state evaluation method based on label alignment |
CN114224341B (en) * | 2021-12-02 | 2023-12-15 | 浙大宁波理工学院 | Wearable forehead electroencephalogram-based depression rapid diagnosis and screening system and method |
CN114305453A (en) * | 2021-12-20 | 2022-04-12 | 杭州电子科技大学 | Multi-source manifold electroencephalogram feature transfer learning method |
CN114358066B (en) * | 2021-12-24 | 2024-08-09 | 华中科技大学 | Privacy protection migration learning method for motor imagery brain-computer interface |
CN116863216B (en) * | 2023-06-30 | 2024-08-06 | 国网湖北省电力有限公司武汉供电公司 | Depth field adaptive image classification method, system and medium based on data manifold geometry |
CN117195040B (en) * | 2023-08-25 | 2024-05-17 | 浙江大学 | Brain-computer interface transfer learning method based on resting state electroencephalogram data calibration |
CN118395245B (en) * | 2024-07-01 | 2024-09-27 | 湘江实验室 | Real-time electroencephalogram signal self-adaptive classification method and system based on Riemann manifold |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109598292A (en) * | 2018-11-23 | 2019-04-09 | 华南理工大学 | A kind of transfer learning method of the positive negative ratio of difference aid sample |
CN109657642A (en) * | 2018-12-29 | 2019-04-19 | 山东建筑大学 | A kind of Mental imagery Method of EEG signals classification and system based on Riemann's distance |
CN110851783A (en) * | 2019-11-12 | 2020-02-28 | 华中科技大学 | Heterogeneous label space migration learning method for brain-computer interface calibration |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
BR112017009991A2 (en) * | 2014-11-13 | 2018-02-14 | Mensia Technologies | enhanced signal analysis based scoring method |
Non-Patent Citations (1)
Title |
---|
Research on Machine Learning Methods Based on Riemannian Geometry in Brain-Computer Interfaces; Li Shaofeng; China Master's Theses Full-text Database, Information Science and Technology; 2020-01-15; pp. 4-7, 32-46 *
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
2022-03-21 | TA01 | Transfer of patent application right | Applicant after: Guangzhou Guangda Innovation Technology Co.,Ltd., No. 39, Ruihe Road, Huangpu District, Guangzhou, Guangdong 510530; Applicant before: SOUTH CHINA UNIVERSITY OF TECHNOLOGY, No. 381, Wushan Road, Tianhe District, Guangzhou, Guangdong 510640
| GR01 | Patent grant |