CN111723661A - Brain-computer interface transfer learning method based on manifold embedding distribution alignment

Brain-computer interface transfer learning method based on manifold embedding distribution alignment

Info

Publication number: CN111723661A (application CN202010417830.XA)
Authority: CN (China)
Prior art keywords: manifold, distribution, data, matrix, feature
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111723661B
Inventors: 杨飞宇, 顾正晖, 俞祝良
Current assignee: Guangzhou Guangda Innovation Technology Co., Ltd.
Original assignee: South China University of Technology (SCUT)
Application filed by South China University of Technology (SCUT); priority to CN202010417830.XA; application granted and published as CN111723661B

Classifications

    • G06F 2218/12 - Classification; Matching (pattern recognition for signal processing)
    • G06F 18/214 - Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06V 10/20 - Image preprocessing
    • G06V 10/40 - Extraction of image or video features
    • G06F 2218/08 - Feature extraction
    • G06V 2201/03 - Recognition of patterns in medical or anatomical images

Abstract

The invention discloses a brain-computer interface transfer learning method based on manifold embedding distribution alignment, comprising the following steps: obtain EEG data of a source subject and of a target subject, respectively; preprocess the EEG data and extract features; construct a transfer learning model based on manifold embedding distribution alignment and train it on the data to obtain a trained classifier; use the trained classifier to classify the unlabeled EEG data of the target subject. On the basis of Riemannian tangent-plane mapping and manifold feature transformation, the invention integrates feature distribution alignment into the training of the classifier, so that an effective classifier is obtained. The invention can effectively improve the performance of the brain-computer interface system for the target user and reduce the user's training burden.

Description

Brain-computer interface transfer learning method based on manifold embedding distribution alignment
Technical Field
The invention relates to the field of brain-computer interface research, in particular to a brain-computer interface transfer learning method based on manifold embedding distribution alignment.
Background
A Brain-Computer Interface (BCI) is a communication and control pathway, established by computers or other electronic devices, between the human brain and the external environment that does not depend on peripheral nerves or muscle tissue. EEG signals are collected, converted into control commands through signal processing, and transmitted to external devices, so that the human brain can control the outside world directly. The technology took shape in the 1970s and is a cross-disciplinary field spanning neuroscience, medicine, signal detection, signal processing, pattern recognition and other areas. Brain-computer interfaces are currently used mainly in medical rehabilitation, bringing convenience to patients who have lost motor function but retain relatively intact brain function.
Because EEG signals have poor stability and a low signal-to-noise ratio, a brain-computer interface requires a long user training period to collect labeled training samples and train a reliable classification model before it can be put into normal use. This tedious training phase undoubtedly burdens both healthy users and medical patients who use brain-computer interface products. Transfer learning describes the process of using data recorded in one task to improve performance on another, related task. Applied to brain-computer interfaces, transfer learning can use the electroencephalogram (EEG) data of other users to improve the initial performance of the model for the current user, thereby reducing the number of training samples the current user must provide. An effective transfer learning method therefore needs to be designed for brain-computer interface systems. However, existing transfer learning techniques for brain-computer interfaces have various limitations, and their final performance is not ideal.
Disclosure of Invention
The invention mainly aims to overcome the defects of the prior art and provide a brain-computer interface transfer learning method based on manifold embedding distribution alignment which, when applied, can effectively reduce the number of labeled training samples required from a brain-computer interface user. The technique uses the labeled data of other users and the unlabeled data of the current user and, on the basis of manifold tangent-plane mapping and subspace learning, integrates feature distribution alignment into the training of a classifier; the learned classifier can effectively improve the performance of the brain-computer interface system for the current user.
The purpose of the invention is realized by the following technical scheme:
a brain-computer interface transfer learning method based on manifold embedding distribution alignment comprises the following steps:
S1, obtaining EEG data D_s of the source subject and EEG data D_t of the target subject, respectively;
S2, preprocessing the EEG data and extracting the features;
S3, constructing a transfer learning model based on manifold embedding distribution alignment, training the transfer learning model with the data, and solving the model parameters to obtain a trained classifier;
S4, classifying the unlabeled EEG data of the target subject using the classifier.
In step S1, the EEG data D_s of the source subject contains n trials, each provided with a label; the EEG data D_t of the target subject contains m trials, none of which is labeled; n ≥ 1 and m ≥ 1.
The step S2 specifically includes:
S21, band-pass filtering the EEG signal using a fifth-order Butterworth filter with an 8-30 Hz pass band;

S22, extracting the EEG sample produced 0.5-2.5 s after the user starts the mental task,

X_i \in \mathbb{R}^{n_e \times T_s}

where X_i denotes the sample of the i-th trial, n_e denotes the number of recorded channels, \mathbb{R} denotes the set of real numbers, and T_s denotes the number of sampling time points;

S23, for the i-th trial, estimating the spatial covariance matrix using the sample covariance matrix:

C_i = \frac{1}{T_s - 1} X_i X_i^T

where T denotes the matrix transpose.
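Steps S21-S23 can be sketched as follows. This is a minimal illustration on synthetic data; the sampling rate, channel count and helper names are assumptions for the demo, not values taken from the patent:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_trial(x, fs=250.0, band=(8.0, 30.0), order=5):
    """Band-pass filter one trial x of shape (n_channels, n_samples) (S21)."""
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=1)

def spatial_covariance(x):
    """Sample spatial covariance C_i = X X^T / (T_s - 1) of a centered trial (S23)."""
    x = x - x.mean(axis=1, keepdims=True)
    return x @ x.T / (x.shape[1] - 1)

rng = np.random.default_rng(0)
trial = rng.standard_normal((22, 500))   # 22 channels, 2 s at 250 Hz (assumed sizes)
C = spatial_covariance(preprocess_trial(trial))
print(C.shape)                           # (22, 22), symmetric positive semidefinite
```

In practice the trial matrix would come from the recorded EEG window rather than a random generator.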
In step S3, the constructing a manifold embedding distribution alignment-based transfer learning model includes the following steps:
S31, Riemannian tangent-plane mapping: project the trial data set of each subject (a set of spatial covariance matrices) onto the tangent plane located at its Riemannian mean, generating an n_e(n_e+1)/2-dimensional vector s_i as the initial feature for the subsequent manifold feature transformation:

s_i = \mathrm{upper}\left( \log\left( \bar{P}^{-1/2} P_i \bar{P}^{-1/2} \right) \right)

where the upper operator keeps the upper-triangular part of the symmetric matrix, assigning unit weight to its diagonal elements and weight \sqrt{2} to its off-diagonal elements before vectorizing them, and \bar{P} denotes the Riemannian mean;
The Riemannian mean is the center of a set of covariance matrices computed under the Riemannian geodesic distance:

\bar{P} = \arg\min_{P} \sum_{i=1}^{I} \delta_R^2(P, P_i)

where I denotes the number of covariance matrices and \delta_R^2(P, P_i) is the squared Riemannian geodesic distance between P and P_i;
The Riemannian geodesic distance is defined as

\delta_R(P_1, P_2) = \left\| \log\left( P_1^{-1} P_2 \right) \right\|_F = \left[ \sum_{i=1}^{n} \log^2 \lambda_i \right]^{1/2}

where F denotes the Frobenius norm and \lambda_i, i = 1, \dots, n, are the eigenvalues of P_1^{-1} P_2;
By measuring distances between covariance matrices with the Riemannian geodesic distance, the Riemannian tangent-plane mapping effectively improves the class discriminability of the data domain. Moreover, projecting onto the tangent plane at the Riemannian center yields vector features whose center is zero for both the source-domain and the target-domain data, which already reduces the difference between the two domains to some extent.
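A compact numpy sketch of step S31 (Riemannian mean followed by tangent-plane mapping) might look as follows; the helper names and the fixed-point iteration for the mean are illustrative implementation choices, not taken from the patent:

```python
import numpy as np

def _spd_fun(P, fun):
    """Apply a scalar function to the eigenvalues of a symmetric matrix."""
    w, V = np.linalg.eigh(P)
    return (V * fun(w)) @ V.T

def riemann_mean(covs, iters=50):
    """Fixed-point iteration for the Riemannian (Karcher) mean of SPD matrices."""
    P = np.mean(covs, axis=0)
    for _ in range(iters):
        P_sqrt = _spd_fun(P, np.sqrt)
        P_isqrt = _spd_fun(P, lambda w: w ** -0.5)
        # mean of the matrix logs of the whitened covariances
        T = np.mean([_spd_fun(P_isqrt @ C @ P_isqrt, np.log) for C in covs], axis=0)
        P = P_sqrt @ _spd_fun(T, np.exp) @ P_sqrt
    return P

def tangent_map(covs, P_bar):
    """s_i = upper(log(P_bar^{-1/2} P_i P_bar^{-1/2})): unit weight on the
    diagonal, sqrt(2) on off-diagonal entries, then vectorize."""
    P_isqrt = _spd_fun(P_bar, lambda w: w ** -0.5)
    iu = np.triu_indices(P_bar.shape[0])
    wts = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    return np.array([_spd_fun(P_isqrt @ C @ P_isqrt, np.log)[iu] * wts for C in covs])

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 4, 4))
covs = A @ np.transpose(A, (0, 2, 1)) + 4 * np.eye(4)   # 10 synthetic SPD matrices
S = tangent_map(covs, riemann_mean(covs))
print(S.shape)   # (10, 10): n_e(n_e + 1) / 2 = 10 features per trial
```

At the Riemannian mean the whitened matrix logs average to zero, so the tangent vectors of the set are centered, which is exactly the centering property the text describes.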
S32, performing the manifold feature transformation using the GFK (Geodesic Flow Kernel) method: embed the source and target data sets into a Grassmann manifold, construct the geodesic flow between the two points, and integrate over the infinite set of subspaces along the flow \Phi(t);
Specifically, the original features are projected into these subspaces to form feature vectors of infinite dimension. The inner product between these feature vectors defines a kernel function that can be computed in closed form over the original feature space. This kernel encapsulates the incremental changes between subspaces, which underlie the differences and commonalities between the two domains. The learning algorithm therefore uses the kernel to derive a domain-invariant low-dimensional representation;
Meanwhile, a feature in the manifold space can be expressed as z = g(s) = \Phi(t)^T s, where g denotes the manifold transformation function, \Phi(t) denotes the geodesic flow between the two points, and s is the feature obtained from the Riemannian tangent-plane mapping. The inner product of two transformed features z_i and z_j defines a positive semidefinite geodesic flow kernel:

\langle z_i, z_j \rangle = s_i^T G s_j

where G denotes the transformation (kernel) matrix. The features of the original space can thus be transformed into the Grassmann manifold by

z = \sqrt{G}\, s
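Assuming the geodesic flow kernel matrix G has already been computed (building G itself requires the subspace SVD machinery of the original GFK construction and is omitted here), the transformation z = sqrt(G) s and the kernel identity can be sketched as:

```python
import numpy as np

def manifold_features(S, G):
    """Map tangent-space features into the manifold space, z_i = G^{1/2} s_i,
    so that <z_i, z_j> = s_i^T G s_j (the geodesic flow kernel)."""
    w, V = np.linalg.eigh(G)                       # G is symmetric PSD
    G_sqrt = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T
    return S @ G_sqrt                              # rows are the transformed z_i

rng = np.random.default_rng(2)
S = rng.standard_normal((6, 5))                    # 6 trials, 5 tangent features
B = rng.standard_normal((5, 5))
G = B @ B.T                                        # placeholder PSD kernel for the demo
Z = manifold_features(S, G)
print(np.allclose(Z @ Z.T, S @ G @ S.T))           # kernel identity holds
```

The symmetric square root is one convenient way to realize the kernel as an explicit feature map; the placeholder G only exercises the algebra.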
S33, a classifier integrating distribution alignment: a transfer learning framework based on the structural risk minimization principle and regularization theory. Specifically, the classifier model optimizes the following three objectives:

1) minimizing the structural risk function on the labeled source-domain data D_s;
2) minimizing the difference between the joint probability distributions J_s and J_t;
3) maximizing the consistency of the manifold underlying the marginal distributions P_s and P_t.
Let the prediction function (i.e., the classifier) be denoted f = w^T \phi(z), where w is the classifier parameter vector and \phi is the feature mapping function that projects the original feature vector z into a Hilbert space \mathcal{H}. With the squared loss, f can be formulated as

f = \arg\min_{f \in \mathcal{H}_K} \sum_{i=1}^{n} \left( y_i - f(z_i) \right)^2 + \sigma \| f \|_K^2 + \lambda D_{f,K}(J_s, J_t) + \gamma M_{f,K}(P_s, P_t)

where K is the kernel function induced by \phi such that \langle \phi(z_i), \phi(z_j) \rangle = K(z_i, z_j), and \sigma, \lambda and \gamma are regularization parameters; the remaining terms are defined below.
The structural risk function on the labeled source-domain data D_s is:

\min_{f \in \mathcal{H}_K} \sum_{i=1}^{n} \left( y_i - f(z_i) \right)^2 + \sigma \| f \|_K^2

where \mathcal{H}_K is the set of classifiers in the kernel space, \| f \|_K^2 is the squared norm of f in \mathcal{H}_K, \sigma is the shrinkage regularization parameter, and (y_i - f(z_i))^2 is the squared loss;
Minimizing the difference between the joint probability distributions J_s and J_t means simultaneously minimizing the distance between the marginal distributions P_s and P_t and the distance between the conditional distributions Q_s and Q_t:

D_{f,K}(J_s, J_t) = D_{f,K}(P_s, P_t) + \sum_{c=1}^{C} D_{f,K}^{(c)}(Q_s, Q_t)

where D_{f,K}(P_s, P_t) is the distance between the marginal distributions P_s and P_t, D_{f,K}^{(c)}(Q_s, Q_t) is the distance between the class-c conditional distributions Q_s and Q_t, and C is the number of classes. The projected maximum mean discrepancy (MMD) is adopted as the distance measure. Regularizing the structural risk by this joint distribution adaptation pulls the sample moments of both the marginal and the conditional distributions closer in \mathcal{H}_K.
Maximizing the consistency of the manifold underlying the marginal distributions P_s and P_t amounts to a manifold regularization enforcing geodesic smoothness:

M_{f,K}(P_s, P_t) = \sum_{i,j=1}^{n+m} W_{ij} \left( f(z_i) - f(z_j) \right)^2 = \sum_{i,j=1}^{n+m} f(z_i) L_{ij} f(z_j)

where W_{ij} is the element in row i and column j of the graph affinity matrix W, and L_{ij} is the element in row i and column j of the normalized graph Laplacian matrix L.

By regularizing the structural risk with this manifold regularization, the marginal distributions can be fully exploited to maximize the consistency between the prediction structure of f and the intrinsic manifold structure of the data, which substantially matches the discriminative hyperplanes between the domains.
The learning algorithm of the classifier is as follows. To solve the optimization problem efficiently, the following representer theorem is used:

f(z) = \sum_{i=1}^{n+m} \alpha_i K(z_i, z)

where K is the kernel induced by \phi and \alpha_i are the expansion coefficients. Re-expressing the three objective functions with the representer theorem yields the final objective:

\min_{\alpha} \left\| \left( Y - \alpha^T K \right) E \right\|_F^2 + \sigma\, \mathrm{tr}\left( \alpha^T K \alpha \right) + \mathrm{tr}\left( \alpha^T K (\lambda M + \gamma L) K \alpha \right)

where Y is the label matrix, K is the kernel matrix, E is the diagonal label-indicator matrix, and M is the MMD matrix. Differentiating the objective with respect to \alpha and setting the derivative to 0 gives

\alpha = \left( (E + \lambda M + \gamma L) K + \sigma I \right)^{-1} E Y^T

where I is the identity matrix.
Step S4 specifically comprises: computing the classification output f(z) of the unlabeled EEG data of the target subject from the K and \alpha obtained in step S33; the final predicted label is the class corresponding to the maximum value of the classification output.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the invention uses the covariance matrix as the initial characteristic of the data, accurately measures the distance between the covariance matrices through the Riemann geodesic distance, can obtain high-precision classification identification, and preliminarily reduces the difference between EEG data of a source subject and a target subject after the Riemann tangent plane projection. And then, the distribution difference is further reduced by combining manifold feature transformation in subspace learning, and meanwhile, the feature dimension is reduced. Finally, distribution alignment is integrated into the training of the classifier, and the classification accuracy of the brain-computer interface EEG data of the target subject is improved. In summary, the present invention utilizes the labeled data of other subjects and the unlabeled data of the current subject, combines the features of the EEG data itself, and effectively improves the classification performance of the brain-computer interface system for the current subject through the advanced transfer learning technique, thereby reducing the burden of the current subject to a certain extent.
Drawings
FIG. 1 is a flow chart of a brain-computer interface transfer learning method based on manifold embedding distribution alignment according to the present invention;
FIG. 2 is a diagram illustrating the classification accuracy of the three methods employed in this embodiment on the BCI Competition IV-2a data set.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
As shown in fig. 1, a brain-computer interface transfer learning method based on manifold embedding distribution alignment according to the present invention includes the following steps:
S1, obtaining EEG data D_s of the source subject and EEG data D_t of the target subject, respectively;
S2, preprocessing the EEG data and extracting the features;
S3, constructing a transfer learning model based on manifold embedding distribution alignment, training the transfer learning model with the data, and solving the model parameters to obtain a trained classifier;
S4, classifying the unlabeled EEG data of the target subject using the classifier.
In step S1, the EEG data D_s of the source subject contains n trials, each provided with a label; the EEG data D_t of the target subject contains m trials, none of which is labeled.
in step S2, the step of performing data preprocessing and feature extraction on the EEG data comprises:
S21, band-pass filtering the EEG signal using a fifth-order Butterworth filter with an 8-30 Hz pass band;

S22, extracting the EEG sample produced 0.5-2.5 s after the user starts the mental task,

X_i \in \mathbb{R}^{n_e \times T_s}

where X_i denotes the sample of the i-th trial, n_e denotes the number of recorded channels, \mathbb{R} denotes the set of real numbers, and T_s denotes the number of sampling time points.

S23, for the i-th trial, estimating the spatial covariance matrix using the sample covariance matrix:

C_i = \frac{1}{T_s - 1} X_i X_i^T
in step S3, a manifold embedding distribution alignment-based transfer learning model is constructed, including the following steps:
S31, Riemannian tangent-plane mapping: project the trial data set of each subject (a set of spatial covariance matrices) onto the tangent plane located at its Riemannian mean, generating an n_e(n_e+1)/2-dimensional vector as the initial feature for the subsequent manifold feature transformation:

s_i = \mathrm{upper}\left( \log\left( \bar{P}^{-1/2} P_i \bar{P}^{-1/2} \right) \right)

where the upper operator keeps the upper-triangular part of the symmetric matrix, assigning unit weight to its diagonal elements and weight \sqrt{2} to its off-diagonal elements before vectorizing them, and \bar{P} denotes the Riemannian mean;
The Riemannian mean is the center of a set of covariance matrices computed under the Riemannian geodesic distance:

\bar{P} = \arg\min_{P} \sum_{i=1}^{I} \delta_R^2(P, P_i)

where I denotes the number of covariance matrices and \delta_R^2(P, P_i) is the squared Riemannian geodesic distance between P and P_i;
The Riemannian geodesic distance is defined as

\delta_R(P_1, P_2) = \left\| \log\left( P_1^{-1} P_2 \right) \right\|_F = \left[ \sum_{i=1}^{n} \log^2 \lambda_i \right]^{1/2}

where F denotes the Frobenius norm and \lambda_i, i = 1, \dots, n, are the eigenvalues of P_1^{-1} P_2;
By measuring distances between covariance matrices with the Riemannian geodesic distance, the Riemannian tangent-plane mapping effectively improves the class discriminability of the data domain. Moreover, projecting onto the tangent plane at the Riemannian center yields vector features whose center is zero for both the source-domain and the target-domain data, which already reduces the difference between the two domains to some extent.
S32, carrying out the manifold feature transformation using the GFK (Geodesic Flow Kernel) method, whose main idea is as follows: embed the source and target data sets into a Grassmann manifold, construct the geodesic flow between the two points, and integrate over the infinite set of subspaces along the flow \Phi(t). Specifically, the original features are projected into these subspaces to form feature vectors of infinite dimension. The inner product between these feature vectors defines a kernel function that can be computed in closed form over the original feature space. This kernel encapsulates the incremental changes between subspaces, which underlie the differences and commonalities between the two domains. The learning algorithm therefore uses the kernel to derive a domain-invariant low-dimensional representation.

Specifically, a feature in the manifold space can be expressed as z = g(s) = \Phi(t)^T s, where g denotes the manifold transformation function, \Phi(t) denotes the geodesic flow between the two points, and s is the feature obtained from the Riemannian tangent-plane mapping. The inner product of two transformed features z_i and z_j defines a positive semidefinite geodesic flow kernel:

\langle z_i, z_j \rangle = s_i^T G s_j

where G denotes the transformation (kernel) matrix. The features of the original space can thus be transformed into the Grassmann manifold by

z = \sqrt{G}\, s
s33, integrating the distributed aligned classifier, and is a migration learning framework based on the structural risk minimization principle and the regularization theory. In particular, the classifier model aims to optimize the following three objective functions:
1) minimizing the structure risk function on the source domain marking data Ds;
2) minimizing a distribution difference between the joint probability distributions Js and Jt;
3) and the consistency of the marginal distribution Ps and the manifold behind the Pt is maximized.
Let the prediction function (i.e., the classifier) be denoted f = w^T \phi(z), where w is the classifier parameter vector and \phi is the feature mapping function that projects the original feature vector z into a Hilbert space \mathcal{H}. With the squared loss, f can be formulated as

f = \arg\min_{f \in \mathcal{H}_K} \sum_{i=1}^{n} \left( y_i - f(z_i) \right)^2 + \sigma \| f \|_K^2 + \lambda D_{f,K}(J_s, J_t) + \gamma M_{f,K}(P_s, P_t)

where K is the kernel function induced by \phi such that \langle \phi(z_i), \phi(z_j) \rangle = K(z_i, z_j), and \sigma, \lambda and \gamma are regularization parameters.
1) The structural risk function on the labeled source-domain data D_s is:

\min_{f \in \mathcal{H}_K} \sum_{i=1}^{n} \left( y_i - f(z_i) \right)^2 + \sigma \| f \|_K^2

where \mathcal{H}_K is the set of classifiers in the kernel space, \| f \|_K^2 is the squared norm of f in \mathcal{H}_K, \sigma is the shrinkage regularization parameter, and (y_i - f(z_i))^2 is the squared loss.
2) Minimizing the difference between the joint probability distributions J_s and J_t. By the probability identity J = P \cdot Q, this means simultaneously minimizing the distance between the marginal distributions P_s and P_t and the distance between the conditional distributions Q_s and Q_t.

a. Marginal distribution alignment

The distance between the marginal distributions P_s and P_t is minimized using the projected maximum mean discrepancy (MMD) as the distance measure:

D_{f,K}(P_s, P_t) = \left( \frac{1}{n} \sum_{i=1}^{n} f(z_i) - \frac{1}{m} \sum_{j=n+1}^{n+m} f(z_j) \right)^2
b. Conditional distribution alignment

The projected MMD of each class c \in \{1, \dots, C\} is computed using the true source labels and the target pseudo-labels, pulling the centroids of the two class-conditional distributions Q_s(z_s | y_s) and Q_t(z_t | y_t) closer in \mathcal{H}_K:

D_{f,K}^{(c)}(Q_s, Q_t) = \left( \frac{1}{n^{(c)}} \sum_{z_i \in D_s^{(c)}} f(z_i) - \frac{1}{m^{(c)}} \sum_{z_j \in D_t^{(c)}} f(z_j) \right)^2

where D_s^{(c)} = \{ z_i : z_i \in D_s,\; y(z_i) = c \} is the set of source samples belonging to class c, y(z_i) is the true label of z_i, and n^{(c)} = |D_s^{(c)}|; correspondingly, D_t^{(c)} = \{ z_j : z_j \in D_t,\; \hat{y}(z_j) = c \} is the set of target samples belonging to class c, \hat{y}(z_j) is the pseudo (predicted) label of z_j, and m^{(c)} = |D_t^{(c)}|.
the regularization for joint distribution adaptation can be derived by combining the above equations, and is calculated as follows
Figure BDA0002495741620000119
Regularization of structural risk by joint distribution adaptation, in
Figure BDA00024957416200001110
The sample moments of both the marginal and conditional distributions in (1) are pulled closer.
3) Maximizing the consistency of the manifold underlying the marginal distributions P_s and P_t amounts to a manifold regularization enforcing geodesic smoothness:

M_{f,K}(P_s, P_t) = \sum_{i,j=1}^{n+m} W_{ij} \left( f(z_i) - f(z_j) \right)^2 = \sum_{i,j=1}^{n+m} f(z_i) L_{ij} f(z_j)

where W is the graph affinity matrix and L is the normalized graph Laplacian matrix. W is defined as

W_{ij} = \begin{cases} \cos(z_i, z_j), & z_i \in N_p(z_j) \;\text{or}\; z_j \in N_p(z_i) \\ 0, & \text{otherwise} \end{cases}

where N_p(z_i) is the set of the p nearest neighbours of point z_i. The Laplacian is computed as L = I - D^{-1/2} W D^{-1/2}, where D is the diagonal degree matrix with each D_{ii} = \sum_{j} W_{ij}.
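One possible construction of the p-nearest-neighbour cosine affinity W and the normalized Laplacian L is sketched below; the neighbour count and the clipping of negative similarities are implementation choices for the demo, not specified in the patent:

```python
import numpy as np

def graph_laplacian(Z, p=3):
    """p-nearest-neighbour cosine affinity W and L = I - D^{-1/2} W D^{-1/2}."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    sim = Zn @ Zn.T                                # pairwise cosine similarities
    N = len(Z)
    W = np.zeros((N, N))
    for i in range(N):
        nn = np.argsort(-sim[i])[1:p + 1]          # p nearest neighbours, self excluded
        W[i, nn] = np.maximum(sim[i, nn], 0.0)     # keep nonnegative affinities
    W = np.maximum(W, W.T)                         # symmetrize the kNN graph
    d = W.sum(axis=1)
    D_isqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return W, np.eye(N) - D_isqrt @ W @ D_isqrt

rng = np.random.default_rng(4)
Z = rng.standard_normal((12, 6))                   # 12 trials in manifold feature space
W, L = graph_laplacian(Z)
print(W.shape, L.shape)
```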
By regularizing the structural risk with this manifold regularization, the marginal distributions can be exploited to maximize the consistency between the prediction structure of f and the intrinsic manifold structure of the data, which substantially matches the discriminative hyperplanes between the domains.
The learning algorithm of the classifier is as follows:
To solve the optimization problem efficiently, the following representer theorem is used:

f(z) = \sum_{i=1}^{n+m} \alpha_i K(z_i, z)

where K is the kernel induced by \phi and \alpha_i are the expansion coefficients.
The structural risk is first reformulated using the representer theorem:

\sum_{i=1}^{n} \left( y_i - f(z_i) \right)^2 + \sigma \| f \|_K^2 = \left\| \left( Y - \alpha^T K \right) E \right\|_F^2 + \sigma\, \mathrm{tr}\left( \alpha^T K \alpha \right)

where E is the diagonal label-indicator matrix with E_{ii} = 1 if z_i \in D_s and E_{ii} = 0 otherwise; Y = [y_1, \dots, y_{n+m}] is the label matrix (although the target labels are unknown, they are filtered out by E); K \in \mathbb{R}^{(n+m) \times (n+m)} is the kernel matrix with K_{ij} = K(z_i, z_j); and \alpha = (\alpha_1, \dots, \alpha_{n+m})^T is the classifier parameter vector.
The joint distribution alignment regularization is re-expressed as:

D_{f,K}(J_s, J_t) = \sum_{c=0}^{C} \mathrm{tr}\left( \alpha^T K M_c K \alpha \right)

where M_c, c \in \{0, 1, \dots, C\}, are the MMD matrices, computed as follows:

(M_c)_{ij} = \begin{cases} 1 / (n^{(c)})^2, & z_i, z_j \in D_s^{(c)} \\ 1 / (m^{(c)})^2, & z_i, z_j \in D_t^{(c)} \\ -1 / (n^{(c)} m^{(c)}), & z_i \in D_s^{(c)}, z_j \in D_t^{(c)} \;\text{or}\; z_j \in D_s^{(c)}, z_i \in D_t^{(c)} \\ 0, & \text{otherwise} \end{cases}

M_0 is computed with the same formula using n^{(0)} = n, m^{(0)} = m, D_s^{(0)} = D_s and D_t^{(0)} = D_t.
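The MMD matrices M_c can be assembled compactly as outer products of signed, normalized indicator vectors; a small numpy sketch (the function name and toy labels are illustrative):

```python
import numpy as np

def mmd_matrix(ys, yt_pseudo, c=None):
    """MMD matrix over n source + m target trials: M_0 for c=None (marginal),
    otherwise the class-c matrix built from true source labels and target
    pseudo-labels. M_c = a a^T with a the signed, normalized indicator."""
    ys, yt_pseudo = np.asarray(ys), np.asarray(yt_pseudo)
    if c is None:
        a = np.concatenate([np.full(len(ys), 1.0 / len(ys)),
                            np.full(len(yt_pseudo), -1.0 / len(yt_pseudo))])
    else:
        s = (ys == c).astype(float)
        t = (yt_pseudo == c).astype(float)
        a = np.concatenate([s / max(s.sum(), 1.0), -t / max(t.sum(), 1.0)])
    return np.outer(a, a)

ys = np.array([0, 0, 1, 1])        # true source labels
yt = np.array([0, 1, 1])           # target pseudo-labels
M = mmd_matrix(ys, yt) + sum(mmd_matrix(ys, yt, c) for c in (0, 1))
print(M.shape)                     # (7, 7), symmetric
```

The outer-product form reproduces the case analysis entrywise: 1/(n^(c))^2 for same-class source pairs, 1/(m^(c))^2 for same-class target pairs, and -1/(n^(c) m^(c)) for cross pairs.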
Similarly, the manifold regularization is re-expressed as:

M_{f,K}(P_s, P_t) = \mathrm{tr}\left( \alpha^T K L K \alpha \right)
Integrating the three parts yields the objective function:

\min_{\alpha} \left\| \left( Y - \alpha^T K \right) E \right\|_F^2 + \sigma\, \mathrm{tr}\left( \alpha^T K \alpha \right) + \mathrm{tr}\left( \alpha^T K (\lambda M + \gamma L) K \alpha \right)

where M = \sum_{c=0}^{C} M_c is the overall MMD matrix.
Differentiating the objective with respect to \alpha and setting the derivative to 0 gives

\alpha = \left( (E + \lambda M + \gamma L) K + \sigma I \right)^{-1} E Y^T

where I is the identity matrix.
Multi-class extension: represent the label of z as a one-hot vector y = (y_1, \dots, y_C)^T with y_c = 1 if y(z) = c and y_c = 0 otherwise. The label matrix then becomes Y \in \mathbb{R}^{C \times (n+m)} and the parameter matrix becomes \alpha \in \mathbb{R}^{(n+m) \times C}. In this way, the algorithm can be extended to multi-class problems.
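The one-hot label matrix for the multi-class extension can be formed as below (the helper name is illustrative); the target columns remain zero and are masked by the indicator matrix E during training:

```python
import numpy as np

def label_matrix(ys, n_total, n_classes):
    """One-hot label matrix Y of shape (C, n+m); the m unlabeled target
    columns stay all-zero and are filtered out by the indicator matrix E."""
    Y = np.zeros((n_classes, n_total))
    Y[np.asarray(ys), np.arange(len(ys))] = 1.0
    return Y

Y = label_matrix([0, 2, 1, 0], n_total=6, n_classes=3)   # n = 4 source, m = 2 target
print(Y)
```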
In step S4, the classifier classifies the unlabeled EEG data of the target subject by computing the classification output f(z) from the K and \alpha obtained in step S33; the final predicted label is the class corresponding to the maximum value of the classification output.
As shown in fig. 2, this embodiment lists the classification accuracy of three methods on the BCI Competition IV-2a data set. The data of subjects S1, S3, S7, S8 and S9 in BCI Competition IV-2a are used; each time, two subjects are selected as the target subject and the source subject, respectively, and the classification accuracy is the average over 4 runs of the learning method on the data of a given target subject. The three methods are MDM (minimum distance to Riemannian mean classifier), MDM_RC (Riemannian-center alignment followed by MDM), and TMDA (the transfer learning method of the present invention).
For MDM, since no transfer is performed, the learned features do not transfer, so directly applying the trained model to the target-domain data gives low accuracy. For MDM_RC, adding transfer improves the accuracy by about 20% on average compared with no transfer, showing that the learned features are transferable. The manifold embedding distribution alignment transfer learning method of the present invention achieves higher classification accuracy than the other two methods, an improvement of about 5% over MDM_RC, with a recognition rate above 66%. These experimental results verify the effectiveness of the proposed method, which can be applied to the transfer learning problem of brain-computer interfaces.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (5)

1. A brain-computer interface transfer learning method based on manifold embedding distribution alignment is characterized in that: the method comprises the following steps:
S1, respectively acquiring the EEG data Ds of the source subject and the EEG data Dt of the target subject;
S2, preprocessing the EEG data and extracting features;
S3, constructing a transfer learning model based on manifold-embedded distribution alignment, training the model with the data, and solving for the model parameters to obtain a trained classifier;
S4, classifying the unlabeled EEG data of the target subject using the trained classifier.
2. The brain-computer interface transfer learning method based on manifold-embedded distribution alignment according to claim 1, characterized in that: in step S1, the EEG data Ds of the source subject contains n trials of data, all of which are labeled; the EEG data Dt of the target subject contains m trials of data, none of which are labeled; n ≥ 1 and m ≥ 1.
3. The brain-computer interface transfer learning method based on manifold-embedded distribution alignment according to claim 1, characterized in that step S2 specifically comprises:
S21, band-pass filtering the EEG signal with a fifth-order Butterworth filter with an 8-30 Hz pass band;
S22, intercepting the EEG signal sample generated 0.5-2.5 s after the user performs the mental task, $X_i \in \mathbb{R}^{n_e \times T_s}$, where $X_i$ denotes the sample of the i-th trial, $n_e$ denotes the number of recorded channels, $\mathbb{R}$ denotes the set of real numbers, and $T_s$ denotes the number of sampling time points;
S23, for the i-th trial, estimating the spatial covariance matrix by the sample covariance matrix:
$$P_i = \frac{1}{T_s - 1} X_i X_i^T$$
where T denotes the matrix transpose.
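Steps S21-S23 can be sketched as follows. This is a minimal illustration, assuming a hypothetical 250 Hz sampling rate and raw trial arrays of shape (channels, samples); the function names are illustrative, not the patent's notation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_trial(eeg, fs=250.0, band=(8.0, 30.0), window=(0.5, 2.5)):
    """Band-pass filter (5th-order Butterworth, 8-30 Hz) and crop the
    0.5-2.5 s post-cue window of a single trial (steps S21-S22)."""
    b, a = butter(5, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=1)       # zero-phase filtering
    start, stop = int(window[0] * fs), int(window[1] * fs)
    return filtered[:, start:stop]               # X_i of shape (n_e, T_s)

def sample_covariance(X):
    """Spatial sample covariance P_i = X X^T / (T_s - 1) of step S23;
    band-passed EEG is near zero-mean, so no explicit centering is done."""
    return X @ X.T / (X.shape[1] - 1)
```

The resulting symmetric positive-definite matrices are the inputs to the Riemannian tangent-space mapping of step S31.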
4. The brain-computer interface transfer learning method based on manifold-embedded distribution alignment according to claim 1, characterized in that in step S3, constructing the transfer learning model based on manifold-embedded distribution alignment comprises the following steps:
S31, Riemannian tangent-space mapping: the trial data set of each subject is projected onto the tangent plane at the Riemannian mean, producing an $n_e(n_e+1)/2$-dimensional vector $s_i$ as the initial feature for the subsequent manifold feature transformation:
$$s_i = \mathrm{upper}\left(\log\left(\bar{P}^{-1/2} P_i \bar{P}^{-1/2}\right)\right)$$
where the upper(·) operator keeps the upper-triangular part of a symmetric matrix, assigning unit weight to its diagonal elements and weight $\sqrt{2}$ to its off-diagonal elements, and vectorizes the result; $\bar{P}$ denotes the Riemannian mean; $P_i$ denotes the sample covariance matrix of the i-th trial;
the Riemannian mean is the center of a set of covariance matrices under the Riemannian geodesic distance, computed as
$$\bar{P} = \underset{P}{\arg\min} \sum_{i=1}^{I} \delta_R^2(P, P_i)$$
where I denotes the number of covariance matrices and $\delta_R^2(P, P_i)$ denotes the squared Riemannian geodesic distance between the covariance matrices P and $P_i$;
the Riemannian geodesic distance is defined as
$$\delta_R(P_1, P_2) = \left\|\log\left(P_1^{-1} P_2\right)\right\|_F = \left[\sum_{i=1}^{n_e} \log^2 \lambda_i\right]^{1/2}$$
where F denotes the Frobenius norm and $\lambda_i$, $i = 1, \ldots, n_e$, are the eigenvalues of $P_1^{-1} P_2$;
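The Riemannian mean, geodesic distance and tangent-space mapping of step S31 can be sketched as below. This is a minimal sketch, assuming the standard fixed-point iteration for the geometric mean of SPD matrices; helper names are illustrative.

```python
import numpy as np

def _spd_func(P, f):
    """Apply a scalar function to a symmetric matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(P)
    return (vecs * f(vals)) @ vecs.T

def riemann_distance(P1, P2):
    """delta_R(P1, P2) = ||log(P1^{-1/2} P2 P1^{-1/2})||_F, equal to the
    root of the sum of squared log-eigenvalues of P1^{-1} P2."""
    P1_isqrt = _spd_func(P1, lambda v: 1.0 / np.sqrt(v))
    return np.linalg.norm(_spd_func(P1_isqrt @ P2 @ P1_isqrt, np.log), "fro")

def riemann_mean(covs, iters=50, tol=1e-10):
    """Fixed-point iteration for the geometric mean of SPD matrices."""
    P = np.mean(covs, axis=0)                    # arithmetic mean as a start
    for _ in range(iters):
        P_sqrt = _spd_func(P, np.sqrt)
        P_isqrt = _spd_func(P, lambda v: 1.0 / np.sqrt(v))
        # average the log-maps of all matrices in the tangent space at P
        S = np.mean([_spd_func(P_isqrt @ C @ P_isqrt, np.log) for C in covs],
                    axis=0)
        P = P_sqrt @ _spd_func(S, np.exp) @ P_sqrt
        if np.linalg.norm(S, "fro") < tol:
            break
    return P

def tangent_feature(P, P_bar):
    """upper(log(P_bar^{-1/2} P P_bar^{-1/2})): unit weight on the diagonal,
    sqrt(2) on off-diagonal entries, vectorized."""
    P_isqrt = _spd_func(P_bar, lambda v: 1.0 / np.sqrt(v))
    S = _spd_func(P_isqrt @ P @ P_isqrt, np.log)
    rows, cols = np.triu_indices(S.shape[0])
    weights = np.where(rows == cols, 1.0, np.sqrt(2.0))
    return weights * S[rows, cols]               # n_e(n_e+1)/2-dim vector
```

Mapping each trial's covariance through `tangent_feature` at the mean yields the Euclidean features $s_i$ consumed by the GFK transformation of step S32.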
S32, performing the manifold feature transformation with the GFK method: the source and target data sets are embedded into a Grassmann manifold, a geodesic flow Φ(t) is constructed between the two points, and an infinite number of subspaces along the flow are integrated;
specifically, the original features are projected into these subspaces to form feature vectors of infinite dimension; the inner products between such feature vectors define a kernel function that can be computed in closed form over the original feature space; this kernel encapsulates the incremental changes between the subspaces, which underlie the differences and commonalities between the two domains; the learning algorithm can therefore use the kernel to derive a domain-invariant low-dimensional representation;
meanwhile, a feature in the manifold space can be expressed as $z = g(s) = \Phi(t)^T s$, where g denotes the manifold transformation function, Φ(t) denotes the geodesic flow between the two points, and s is the feature obtained by the Riemannian tangent-space mapping; the inner product of the transformed features $z_i$ and $z_j$ defines a positive-semidefinite geodesic flow kernel:
$$\langle z_i, z_j \rangle = \int_0^1 \left(\Phi(t)^T s_i\right)^T \left(\Phi(t)^T s_j\right) dt = s_i^T G\, s_j$$
where G is the positive-semidefinite transformation matrix; the features of the original space can thereby be transformed into the Grassmann manifold:
$$z = \sqrt{G}\, s$$
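A hedged numerical sketch of step S32: instead of the closed-form kernel of the original GFK, the integral $G = \int_0^1 \Phi(t)\Phi(t)^T dt$ is approximated by averaging over sampled points t along the Grassmann geodesic between the source and target PCA subspaces. The subspace dimension, number of steps, and function names are illustrative assumptions, not the patent's specification.

```python
import numpy as np

def pca_basis(X, d):
    """Top-d principal directions (columns) of a row-sample matrix X."""
    Xc = X - X.mean(axis=0, keepdims=True)
    U, _, _ = np.linalg.svd(Xc.T @ Xc)
    return U[:, :d]

def gfk_kernel(Ps, Pt, n_steps=20):
    """Numerically approximate the geodesic-flow kernel G between the
    D x d orthonormal bases Ps (source) and Pt (target)."""
    U, S, Vt = np.linalg.svd(Ps.T @ Pt)
    theta = np.arccos(np.clip(S, -1.0, 1.0))     # principal angles
    # direction of the geodesic orthogonal to Ps (guard theta = 0)
    Q = Pt @ Vt.T - Ps @ U @ np.diag(np.cos(theta))
    Q = Q @ np.diag(1.0 / np.where(theta > 1e-12, np.sin(theta), 1.0))
    D = Ps.shape[0]
    G = np.zeros((D, D))
    for t in np.linspace(0.0, 1.0, n_steps):
        Phi = Ps @ U @ np.diag(np.cos(theta * t)) + Q @ np.diag(np.sin(theta * t))
        G += Phi @ Phi.T / n_steps               # average Phi(t) Phi(t)^T
    return G                                     # symmetric PSD, D x D

def gfk_transform(features, G):
    """z = G^{1/2} s for each row feature s."""
    vals, vecs = np.linalg.eigh(G)
    G_sqrt = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    return features @ G_sqrt
```

When the two subspaces coincide, G reduces to the projector onto that subspace, so the transform leaves in-subspace components unchanged, which is a useful sanity check on the approximation.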
S33, constructing the distribution-aligned classifier, taking the structural risk minimization principle and regularization theory as the transfer learning framework; specifically, the classifier model optimizes the following three objectives:
1) minimizing the structural risk function on the labeled source-domain data Ds;
2) minimizing the distribution difference between the joint probability distributions Js and Jt;
3) maximizing the consistency of the manifold underlying the marginal distributions Ps and Pt;
let the prediction function be $f = w^T \phi(z)$, where w is the classifier parameter and φ is the feature mapping function projecting the original feature vector into the Hilbert space $\mathcal{H}$; using the squared loss, f can be formulated as
$$f = \underset{f \in \mathcal{H}_K}{\arg\min} \sum_{i=1}^{n} \left(y_i - f(z_i)\right)^2 + \sigma \|f\|_K^2 + \lambda D_{f,K}(J_s, J_t) + \gamma R_{f,K}(P_s, P_t)$$
where K is the kernel function induced by φ such that $\langle \phi(z_i), \phi(z_j) \rangle = K(z_i, z_j)$, and σ, λ and γ are regularization parameters;
the structural risk function on the labeled source-domain data Ds refers to:
$$f = \underset{f \in \mathcal{H}_K}{\arg\min} \sum_{i=1}^{n} \left(y_i - f(z_i)\right)^2 + \sigma \|f\|_K^2$$
where $\mathcal{H}_K$ is the set of classifiers in the kernel space, $\|f\|_K^2$ is the squared norm of f in $\mathcal{H}_K$, σ is the shrinkage regularization parameter, and $(y_i - f(z_i))^2$ is the squared loss function;
minimizing the distribution difference between the joint probability distributions Js and Jt means simultaneously minimizing the distance between the marginal distributions Ps and Pt and the distance between the conditional distributions Qs and Qt:
$$D_{f,K}(J_s, J_t) = D_{f,K}(P_s, P_t) + \sum_{c=1}^{C} D_{f,K}^{(c)}(Q_s, Q_t)$$
where $D_{f,K}(P_s, P_t)$ is the distance between the marginal distributions Ps and Pt, $D_{f,K}^{(c)}(Q_s, Q_t)$ is the distance between the conditional distributions Qs and Qt for class c, and C is the number of classes; the projected maximum mean discrepancy (MMD) is adopted as the distance measure; regularizing the structural risk by this joint distribution adaptation pulls closer the sample moments of both the marginal and the conditional distributions in $\mathcal{H}_K$;
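The projected-MMD term is usually carried by a single matrix M acting on the kernel matrix, as in the closed-form solution given below for the learning algorithm. A minimal sketch of that construction, assuming the standard joint-distribution-adaptation form $M = M_0 + \sum_c M_c$ with pseudo-labels standing in for the unknown target labels (the function name is illustrative):

```python
import numpy as np

def mmd_matrix(ys, yt_pseudo, n_classes):
    """Build the (n+m) x (n+m) MMD matrix M from source labels ys and
    target pseudo-labels yt_pseudo: one marginal term plus one
    conditional term per class."""
    n, m = len(ys), len(yt_pseudo)
    e = np.concatenate([np.full(n, 1.0 / n), np.full(m, -1.0 / m)])
    M = np.outer(e, e)                           # marginal term M0
    for c in range(n_classes):
        src = np.where(ys == c)[0]
        tgt = n + np.where(yt_pseudo == c)[0]
        if len(src) == 0 or len(tgt) == 0:
            continue                             # class absent: skip its term
        ec = np.zeros(n + m)
        ec[src] = 1.0 / len(src)
        ec[tgt] = -1.0 / len(tgt)
        M += np.outer(ec, ec)                    # conditional term Mc
    return M
```

Each indicator vector sums to zero, so every row of M sums to zero; tr(αᵀK M K α) then measures exactly the projected mean discrepancies being minimized.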
maximizing the consistency of the manifold underlying the marginal distributions Ps and Pt is achieved by manifold regularization, which enforces geodesic smoothness:
$$R_{f,K}(P_s, P_t) = \sum_{i,j=1}^{n+m} W_{ij}\left(f(z_i) - f(z_j)\right)^2 = \sum_{i,j=1}^{n+m} f(z_i)\, L_{ij}\, f(z_j)$$
where $W_{ij}$ is the element in row i, column j of the graph affinity matrix W, and $L_{ij}$ is the element in row i, column j of the normalized graph Laplacian matrix L; by regularizing the structural risk with the manifold regularizer, the marginal distributions can be fully exploited to maximize the consistency between the predictive structure of f and the intrinsic manifold structure of the data; this substantially matches the discriminative hyperplanes between the domains;
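A minimal sketch of the manifold regularizer's ingredients: a k-nearest-neighbour affinity graph W over all source and target features and the symmetric normalized Laplacian $L = I - D^{-1/2} W D^{-1/2}$. The RBF affinity and the choice of k are illustrative assumptions, not fixed by the claim.

```python
import numpy as np

def normalized_laplacian(Z, k=5, gamma=1.0):
    """Z: (n+m, d) feature rows. Returns the symmetric normalized graph
    Laplacian of a k-NN RBF affinity graph over the rows of Z."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # squared distances
    W = np.exp(-gamma * d2)
    np.fill_diagonal(W, 0.0)
    # keep an edge if either endpoint ranks the other among its k nearest
    keep = np.zeros_like(W, dtype=bool)
    for i in range(len(W)):
        keep[i, np.argsort(-W[i])[:k]] = True
    W = np.where(keep | keep.T, W, 0.0)                   # symmetrize
    d_isqrt = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
    return np.eye(len(W)) - (d_isqrt[:, None] * W) * d_isqrt[None, :]
```

The resulting L is positive semidefinite with spectrum in [0, 2], so tr(αᵀK L K α) penalizes predictions that vary quickly along graph edges, i.e. along the data manifold.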
the learning algorithm of the classifier is as follows: to solve the optimization problem efficiently, the representer theorem is used:
$$f(z) = \sum_{i=1}^{n+m} \alpha_i K(z_i, z)$$
where K is the kernel induced by φ and $\alpha_i$ are the expansion coefficients of the weight w; re-expressing the three objective functions with the representer theorem yields the final objective:
$$\alpha = \underset{\alpha}{\arg\min} \left\|\left(Y - \alpha^T K\right) E\right\|_F^2 + \sigma\, \mathrm{tr}\left(\alpha^T K \alpha\right) + \mathrm{tr}\left(\alpha^T K (\lambda M + \gamma L) K \alpha\right)$$
where Y is the label matrix, K is the kernel matrix, E is the diagonal label-indicator matrix, and M is the MMD matrix; setting the derivative of the objective function with respect to α to 0 yields
$$\alpha = \left((E + \lambda M + \gamma L) K + \sigma I\right)^{-1} E Y^T$$
where I is the identity matrix.
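The closed-form solution above can be sketched directly. A minimal illustration with numpy, assuming Y is a one-hot label matrix with zero columns for the unlabeled target trials and E a diagonal indicator of labeled samples (function names are illustrative):

```python
import numpy as np

def solve_alpha(K, Y, E, M, L, sigma=0.1, lam=1.0, gamma=1.0):
    """alpha = ((E + lam*M + gamma*L) K + sigma*I)^{-1} E Y^T.
    K: (n+m, n+m) kernel matrix; Y: (C, n+m) one-hot labels; E: diagonal
    label-indicator matrix; M: MMD matrix; L: graph Laplacian."""
    A = (E + lam * M + gamma * L) @ K + sigma * np.eye(K.shape[0])
    return np.linalg.solve(A, E @ Y.T)           # alpha: (n+m, C)

def predict(K, alpha):
    """f(z) = K alpha; the predicted label is the argmax class output."""
    return np.argmax(K @ alpha, axis=1)
```

With M = L = 0 and all samples labeled this reduces to kernel ridge regression, which is a convenient sanity check on the implementation.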
5. The brain-computer interface transfer learning method based on manifold-embedded distribution alignment according to claim 4, characterized in that step S4 specifically comprises: computing the classification output $f(z) = K\alpha$ for the unlabeled EEG data of the target subject from the K and α obtained in step S33; the final predicted label is the class corresponding to the maximum value of the classification output.
CN202010417830.XA 2020-05-18 2020-05-18 Brain-computer interface migration learning method based on manifold embedded distribution alignment Active CN111723661B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010417830.XA CN111723661B (en) 2020-05-18 2020-05-18 Brain-computer interface migration learning method based on manifold embedded distribution alignment


Publications (2)

Publication Number Publication Date
CN111723661A true CN111723661A (en) 2020-09-29
CN111723661B CN111723661B (en) 2023-06-16

Family

ID=72564530

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010417830.XA Active CN111723661B (en) 2020-05-18 2020-05-18 Brain-computer interface migration learning method based on manifold embedded distribution alignment

Country Status (1)

Country Link
CN (1) CN111723661B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348081A (en) * 2020-11-05 2021-02-09 平安科技(深圳)有限公司 Transfer learning method for image classification, related device and storage medium
CN112364916A (en) * 2020-11-10 2021-02-12 中国平安人寿保险股份有限公司 Image classification method based on transfer learning, related equipment and storage medium
CN112465152A (en) * 2020-12-03 2021-03-09 中国科学院大学宁波华美医院 Online migration learning method suitable for emotional brain-computer interface
CN112560937A (en) * 2020-12-11 2021-03-26 杭州电子科技大学 Method for motor imagery transfer learning by using resting state alignment
CN112580436A (en) * 2020-11-25 2021-03-30 重庆邮电大学 Electroencephalogram signal domain adaptation method based on Riemann manifold coordinate alignment
CN112651432A (en) * 2020-12-15 2021-04-13 华南师范大学 P300 brain-computer interface system based on XDAWN spatial filter and Riemann geometry transfer learning
CN113191206A (en) * 2021-04-06 2021-07-30 华南理工大学 Riemann feature migration-based magnetoencephalogram signal classification method, device and medium
CN113288170A (en) * 2021-05-13 2021-08-24 浙江大学 Electroencephalogram signal calibration method based on fuzzy processing
CN113392733A (en) * 2021-05-31 2021-09-14 杭州电子科技大学 Multi-source domain self-adaptive cross-tested EEG cognitive state evaluation method based on label alignment
CN114224341A (en) * 2021-12-02 2022-03-25 浙大宁波理工学院 Wearable forehead electroencephalogram-based depression rapid diagnosis and screening system and method
CN114305453A (en) * 2021-12-20 2022-04-12 杭州电子科技大学 Multi-source manifold electroencephalogram feature transfer learning method
CN116863216A (en) * 2023-06-30 2023-10-10 国网湖北省电力有限公司武汉供电公司 Depth field adaptive image classification method, system and medium based on data manifold geometry
CN117195040A (en) * 2023-08-25 2023-12-08 浙江大学 Brain-computer interface transfer learning method based on resting state electroencephalogram data calibration

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170311832A1 (en) * 2014-11-13 2017-11-02 Mensia Technologies Scoring method based on improved signals analysis
CN109598292A (en) * 2018-11-23 2019-04-09 华南理工大学 A kind of transfer learning method of the positive negative ratio of difference aid sample
CN109657642A (en) * 2018-12-29 2019-04-19 山东建筑大学 A kind of Mental imagery Method of EEG signals classification and system based on Riemann's distance
CN110851783A (en) * 2019-11-12 2020-02-28 华中科技大学 Heterogeneous label space migration learning method for brain-computer interface calibration


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Shaofeng: "Research on Riemannian-geometry-based machine learning methods for brain-computer interfaces", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348081B (en) * 2020-11-05 2024-04-02 平安科技(深圳)有限公司 Migration learning method for image classification, related device and storage medium
CN112348081A (en) * 2020-11-05 2021-02-09 平安科技(深圳)有限公司 Transfer learning method for image classification, related device and storage medium
CN112364916B (en) * 2020-11-10 2023-10-27 中国平安人寿保险股份有限公司 Image classification method based on transfer learning, related equipment and storage medium
CN112364916A (en) * 2020-11-10 2021-02-12 中国平安人寿保险股份有限公司 Image classification method based on transfer learning, related equipment and storage medium
CN112580436A (en) * 2020-11-25 2021-03-30 重庆邮电大学 Electroencephalogram signal domain adaptation method based on Riemann manifold coordinate alignment
CN112580436B (en) * 2020-11-25 2022-05-03 重庆邮电大学 Electroencephalogram signal domain adaptation method based on Riemann manifold coordinate alignment
CN112465152A (en) * 2020-12-03 2021-03-09 中国科学院大学宁波华美医院 Online migration learning method suitable for emotional brain-computer interface
CN112465152B (en) * 2020-12-03 2022-11-29 中国科学院大学宁波华美医院 Online migration learning method suitable for emotional brain-computer interface
CN112560937A (en) * 2020-12-11 2021-03-26 杭州电子科技大学 Method for motor imagery transfer learning by using resting state alignment
CN112560937B (en) * 2020-12-11 2024-03-19 杭州电子科技大学 Method for moving and learning by utilizing motor imagery aligned in resting state
CN112651432A (en) * 2020-12-15 2021-04-13 华南师范大学 P300 brain-computer interface system based on XDAWN spatial filter and Riemann geometry transfer learning
CN113191206A (en) * 2021-04-06 2021-07-30 华南理工大学 Riemann feature migration-based magnetoencephalogram signal classification method, device and medium
CN113191206B (en) * 2021-04-06 2023-09-29 华南理工大学 Navigator signal classification method, device and medium based on Riemann feature migration
CN113288170A (en) * 2021-05-13 2021-08-24 浙江大学 Electroencephalogram signal calibration method based on fuzzy processing
CN113392733A (en) * 2021-05-31 2021-09-14 杭州电子科技大学 Multi-source domain self-adaptive cross-tested EEG cognitive state evaluation method based on label alignment
CN113392733B (en) * 2021-05-31 2022-06-21 杭州电子科技大学 Multi-source domain self-adaptive cross-tested EEG cognitive state evaluation method based on label alignment
CN114224341A (en) * 2021-12-02 2022-03-25 浙大宁波理工学院 Wearable forehead electroencephalogram-based depression rapid diagnosis and screening system and method
CN114224341B (en) * 2021-12-02 2023-12-15 浙大宁波理工学院 Wearable forehead electroencephalogram-based depression rapid diagnosis and screening system and method
CN114305453A (en) * 2021-12-20 2022-04-12 杭州电子科技大学 Multi-source manifold electroencephalogram feature transfer learning method
CN116863216A (en) * 2023-06-30 2023-10-10 国网湖北省电力有限公司武汉供电公司 Depth field adaptive image classification method, system and medium based on data manifold geometry
CN117195040A (en) * 2023-08-25 2023-12-08 浙江大学 Brain-computer interface transfer learning method based on resting state electroencephalogram data calibration
CN117195040B (en) * 2023-08-25 2024-05-17 浙江大学 Brain-computer interface transfer learning method based on resting state electroencephalogram data calibration

Also Published As

Publication number Publication date
CN111723661B (en) 2023-06-16

Similar Documents

Publication Publication Date Title
CN111723661A (en) Brain-computer interface transfer learning method based on manifold embedding distribution alignment
Venkatachalam et al. A Novel Method of motor imagery classification using eeg signal
Chen et al. Hyperspectral image classification using dictionary-based sparse representation
CN110139597A (en) The system and method for being iterated classification using neuro-physiological signals
CN104361318B (en) A kind of medical diagnosis on disease accessory system based on diffusion tensor technology
CN110534195B (en) Alzheimer disease detection method based on data space transformation
Huan et al. Deep convolutional neural networks for classifying body constitution based on face image
Wang et al. Penalized fisher discriminant analysis and its application to image-based morphometry
CN109330613A (en) Human body Emotion identification method based on real-time brain electricity
Yilmaz et al. Diversity in a signal-to-image transformation approach for EEG-based motor imagery task classification
CN115100709B (en) Feature separation image face recognition and age estimation method
Ye et al. Regional manifold learning for disease classification
CN113010013A (en) Wasserstein distance-based motor imagery electroencephalogram migration learning method
KR20110037726A (en) Method of analysing composite common spatial pattern for brain computer interface and method of analysing electroencephalogram using the same
CN105184794A (en) CSM assistant analysis system and method based on tensor image
Yang et al. PDNet: a convolutional neural network has potential to be deployed on small intelligent devices for arrhythmia diagnosis
Li et al. Robust neural decoding by kernel regression with Siamese representation learning
CN113191206B (en) Navigator signal classification method, device and medium based on Riemann feature migration
CN113180695B (en) Brain-computer interface signal classification method, system, equipment and storage medium
Fadel et al. Chessboard EEG images classification for BCI systems using deep neural network
CN111611963B (en) Face recognition method based on neighbor preservation canonical correlation analysis
CN117290730A (en) Optimization method of individual emotion recognition model
CN116821764A (en) Knowledge distillation-based multi-source domain adaptive EEG emotion state classification method
CN112329698A (en) Face recognition method and system based on intelligent blackboard
Zhao et al. E3GCAPS: Efficient EEG-based multi-capsule framework with dynamic attention for cross-subject cognitive state detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220321

Address after: 510530 No. 39, Ruihe Road, Huangpu District, Guangzhou, Guangdong

Applicant after: Guangzhou Guangda Innovation Technology Co.,Ltd.

Address before: 510640 No. 381, Wushan Road, Tianhe District, Guangzhou, Guangdong

Applicant before: SOUTH CHINA University OF TECHNOLOGY

GR01 Patent grant