CN117193537A - Double-branch convolutional neural network motor imagery intention decoding method based on self-adaptive transfer learning

Publication number: CN117193537A (pending)
Application number: CN202311236959.0A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 李阳, 严伟栋, 向岩松, 孙茗逸
Applicant and current assignee: Beihang University
Abstract

The invention provides a motor imagery intention decoding method for a double-branch convolutional neural network based on self-adaptive transfer learning. Validity is verified on the public motor imagery data sets BCI IV 2a and BCI IV 2b: first, the public data sets are preprocessed and training and test sets are established; second, the model structure of the double-branch convolutional neural network is constructed; the preprocessed training set is input into the model, which is trained with a self-adaptive transfer learning method; finally, the test set is input into the trained model to test its performance. The advantages of the invention include: the double-branch convolutional neural network deeply mines the common features of the source domain and the target domain, improving the decoding accuracy of the motor imagery intention; the self-adaptive transfer learning training method formulates an individualized transfer strategy, enhancing the generalization performance of the model. The method is validated on BCI IV 2a and BCI IV 2b, with average recognition accuracies of 81.3% and 85.6% respectively, superior to the existing state-of-the-art methods.

Description

Double-branch convolutional neural network motor imagery intention decoding method based on self-adaptive transfer learning
Technical Field
The invention relates to a motor imagery intention decoding method of a double-branch convolutional neural network based on self-adaptive transfer learning.
Background
Brain-computer interface (BCI) technology converts human intention into computer commands or control instructions for external devices by recording and interpreting the electrical signals of the human brain. Motor imagery (MI) is a common active paradigm: it can identify the brain's motor intent toward the limbs and tongue and induce specific responses in the sensorimotor cortex without external stimulus or overt motion output. Motor imagery brain-computer interfaces (MI-BCI) are widely applied to device control of prostheses, robotic arms, and wheelchairs, as well as character spelling, clinical stroke rehabilitation, and similar scenarios. Electroencephalogram (EEG) signal decoding is one of the key technologies for achieving MI-BCI intention control. However, the decoding performance of existing EEG decoding algorithms drops sharply when facing different subjects, or the same subject at different times. This is because EEG decoding algorithms are usually trained on specific subjects and tasks, while the EEG signals acquired from different subjects and tasks differ considerably. In addition, EEG signals are affected by noise and interference, causing the recognition performance of an algorithm to vary across subjects and across sessions. Transfer learning aims to improve the learning of the prediction function in the target domain by using prior information from the source domain, and can address the problem of cross-subject variability.
In recent years, deep learning algorithms have also begun to be applied in the field of transfer learning. However, because EEG signals are difficult to acquire and label, the number of labeled EEG samples per individual is too small, and the decoding accuracy of existing individualized deep transfer learning methods is not high. Meanwhile, EEG signals are affected by acquisition equipment, individual differences, noise, and other factors, so the data distributions of EEG recorded at different times or from different subjects are inconsistent. Existing individualized deep transfer learning methods therefore struggle to learn the most discriminative common features from the available data, cannot perform reliable EEG recognition, and generalize poorly across subjects, which limits the practical deployment of brain-computer interface technology.
Disclosure of Invention
The invention provides a double-branch convolutional neural network motor imagery intention decoding method based on self-adaptive transfer learning, applied to device control (e.g., prostheses), clinical stroke exoskeleton rehabilitation, and other intention-decoding and human-machine interaction scenarios, aiming at the low accuracy and weak generalization of existing deep-learning-based EEG decoding methods. The method comprises four steps: data processing, model construction, model training, and model testing, with performance evaluated on the motor imagery public data sets BCI IV 2a and BCI IV 2b. First, the two public data sets BCI IV 2a and BCI IV 2b are preprocessed and the corresponding training and test sets are established; second, the electroencephalogram double-branch convolutional neural network model is constructed; then the preprocessed training sets are respectively input into the built models, which are trained with the self-adaptive transfer learning method; the preprocessed test sets are respectively input into the trained models for performance testing; finally the intention decoding accuracy and Kappa value are output and the decoding results are compared.
According to one embodiment of the invention, the motor imagery intention decoding method of the double-branch convolutional neural network based on self-adaptive transfer learning comprises the following steps:
step 1: preprocessing the pre-acquired public motor imagery data sets BCI IV 2a and BCI IV 2b, establishing a training set and a test set for each of the two data sets: all the source-domain data together with part of the target-domain data form the training set, and the remaining target-domain data form the test set;
step 2: constructing a model structure of an electroencephalogram double-branch convolutional neural network by using a Pytorch deep learning framework;
step 3: respectively inputting the two training sets preprocessed in step 1 into the built model, and training with the self-adaptive transfer learning method;
step 4: respectively inputting the two test sets preprocessed in the step 1 into a trained model to obtain the decoding performance index of the motor imagery intention,
wherein:
in said step 1, data preprocessing includes baseline calibration, filtering and sliding window data partitioning;
in the step 2, the constructed electroencephalogram double-branch convolutional neural network comprises an electroencephalogram deep-and-shallow double-branch feature extractor and a motor imagery intention classifier connected in series; the feature extractor extracts the deep and shallow features of the EEG signal and is composed of two different feature extraction branches G1 and G2, where G1 denotes the deep feature extraction branch and G2 the shallow feature extraction branch; the motor imagery intention classifier infers the motor intention recognition result from the extracted deep and shallow EEG features;
in the step 3:
when the electroencephalogram double-branch convolutional neural network model is trained with the self-adaptive transfer learning method, the error between the model output and the labels is calculated with the loss function, and the model parameters are updated iteratively by error back-propagation and gradient descent; the training batch size is set to 59, the model learning rate to 0.0001, the model parameters are optimized with an Adam optimizer, a cross-entropy loss function is adopted, and the model parameters are saved after 160 training iterations;
in the step 4:
when the motor imagery intention decoding performance index is calculated, a time window from 0.5 s before the motor imagery task event to 4 s after it is selected to extract the EEG signal of each trial in the test set; with a sampling frequency of 250 Hz, each trial's EEG signal contains 1125 sampling points; the divided data are input into the trained electroencephalogram double-branch convolutional neural network model, and the EEG decoding effect of the model is evaluated with the intention decoding accuracy and the Kappa value.
The main advantages of the motor imagery intention decoding method of the double-branch convolutional neural network based on self-adaptive transfer learning are:
1. Existing motor imagery intention decoding methods usually attend only to the private features of different subjects from a single view, and therefore lose other common feature information. The invention proposes a double-branch convolutional neural network: by designing deep-view and shallow-view EEG convolutional branches with different convolution kernel sizes, convolution depths, and parameter settings, it captures EEG features at different scales, captures the information in the EEG signal more comprehensively, and finds the best common features among different subjects, improving the decoding accuracy of the model;
2. EEG signals are difficult to collect and label, the number of labeled EEG samples per individual is too small, and existing motor imagery intention decoding models overfit easily. The invention introduces a cross-subject transfer learning method to overcome insufficient labeled samples and mismatched data distributions, improving the generalization performance of the model;
3. Existing individualized deep transfer learning methods struggle to learn the most discriminative common features from the available data. The invention proposes a module-adaptive method that formulates the transfer strategy best suited to the subject under analysis, improving the cross-subject individualized EEG recognition performance for that subject. Validity is verified on BCI IV 2a and BCI IV 2b: the average recognition accuracy reaches 81.3% and 85.6% respectively, superior to the existing state-of-the-art methods.
Drawings
Fig. 1 is a flowchart of a motor imagery intent decoding method of a dual-branch convolutional neural network based on adaptive transfer learning according to one embodiment of the present invention.
Fig. 2 is a block diagram of a dual-branch convolutional neural network according to one embodiment of the present invention.
Fig. 3 is a block diagram of a module adaptive migration architecture according to one embodiment of the present invention.
FIG. 4 is a block diagram of the parameter migration and fine-tuning process according to one embodiment of the present invention.
Detailed Description
An overall flow of a motor imagery intention decoding method of a double-branch convolutional neural network based on adaptive transfer learning according to an embodiment of the present invention is shown in fig. 1, which includes:
step S1: preprocessing the pre-acquired public motor imagery data sets BCI IV 2a and BCI IV 2b, where BCI IV 2a is a four-class task (left hand, right hand, tongue, and both feet) and BCI IV 2b a two-class task (left hand and right hand); the source-domain data and target-domain data corresponding to BCI IV 2a and BCI IV 2b are established, all the source-domain data plus part of the target-domain data form the training set, and the remaining target-domain data form the test set; the data preprocessing comprises baseline calibration, filtering, and sliding-window data division, specifically:
S1.1: baseline calibration: to avoid the baseline drift that easily occurs during long EEG acquisitions, an empirical mode decomposition method is adopted to calibrate the baseline; meanwhile, noise and artifacts in the EEG signal are removed by a direct removal method;

S1.2: filtering: to remove high-frequency noise interference and direct-current interference from the EEG signal, low-pass filtering removes high-frequency noise above 64 Hz, and the 50 Hz power-line interference is removed at the same time;

S1.3: sliding-window data division: so that the electroencephalogram double-branch convolutional neural network receives continuous EEG data, a sliding window intercepts the 1.5 s-6 s data segment of each motor imagery experiment in the BCI IV 2a data as one four-class motor imagery EEG sample, and likewise intercepts the 2.5 s-7 s data segment of each experiment in the BCI IV 2b data as one two-class motor imagery EEG sample;
step S2: using a Pytorch deep learning framework, constructing a model structure of an electroencephalogram double-branch convolutional neural network, comprising:
The constructed electroencephalogram double-branch convolutional neural network comprises an electroencephalogram deep-and-shallow double-branch feature extractor and a motor imagery intention classifier connected in series. The feature extractor extracts the deep and shallow features of the EEG signal and is composed of two different feature extraction branches G1 and G2, where G1 is the deep feature extraction branch and G2 the shallow feature extraction branch; the motor imagery intention classifier infers the motor intention recognition result from the extracted deep and shallow EEG features, specifically:
a) The electroencephalogram deep-and-shallow double-branch feature extractor selects different feature extraction branches according to the characteristics of the EEG signals of the four-class task from BCI IV 2a and the two-class task from BCI IV 2b, so as to better adapt to different tasks; the specific structure, shown in fig. 2, comprises:
a1) The shallow feature extraction branch G2, used for extracting the shallow feature information of the EEG signal:

firstly, the input sample x is fed into two successive convolution layers; the first layer is a temporal convolution with kernel size 1×10, stride 1×1, and 40 output feature maps;

then, the second layer is a spatial convolution with kernel size E×1, stride 1×1, and 40 output feature maps; through the two successive convolutions the original signal is mapped to temporal feature maps at different scales, and the spatial convolution linearly superimposes the signals of the different acquisition channels to obtain global channel information, yielding feature maps of size 40×1×(T−10);

finally, the feature maps are fed in turn through a BN (BatchNorm) layer, a square nonlinearity, a max-pooling layer (pooling kernel size 1×75, pooling stride 1×15), and a Log nonlinear activation layer, yielding the shallow features of the EEG signal;
a2) The deep feature extraction branch G1, used for extracting the deep feature information in the EEG signal:

firstly, the sample x is fed into a two-layer convolution network consisting of a temporal convolution and a spatial convolution, with kernel sizes 1×10 and E×1 respectively, strides 1×1, and 25 output feature maps;

then, the feature maps are fed sequentially through combinations of 3 convolution layers, 1 BN layer, and 1 max-pooling layer, where the three convolutions all use 1×10 kernels with 50, 100, and 200 output feature maps respectively, and the pooling layers use kernel size and stride 1×4, yielding the deep features of the EEG signal;
b) Constructing a motor imagery intention classifier for inferring the motor imagery intention from the extracted shallow and deep features of the EEG signal, the motor imagery intention classifier comprising:

two fully connected layers, with a dropout layer added before the features produced by the electroencephalogram deep-and-shallow double-branch feature extractor enter the classifier, to avoid model overfitting; the input size of the first fully connected layer changes with the features extracted from the different data sets, and the output size of the second fully connected layer changes with the number of classes of the four-class or two-class task.
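As a rough cross-check of the layer settings above, the length bookkeeping of the two branches can be sketched with the standard valid-convolution formula. This is a sketch under stated assumptions, not the patent's code: it assumes output length floor((n − k) / s) + 1, an input of T = 1125 sampling points, and that in the deep branch each of the three convolutions is followed by a 1×4 pooling, a reading the text leaves ambiguous; the resulting lengths are therefore illustrative only.

```python
def conv_len(n, k, s=1):
    # Output length of a 1-D valid convolution or pooling: floor((n - k) / s) + 1
    return (n - k) // s + 1

T = 1125  # sampling points per trial (4.5 s at 250 Hz)

# Shallow branch G2: temporal conv (1x10) -> spatial conv (ExL, length unchanged)
# -> max pool (kernel 75, stride 15)
t = conv_len(T, 10)
shallow_len = conv_len(t, 75, 15)

# Deep branch G1: temporal conv (1x10), then three (conv 1x10 -> pool 1x4, stride 4)
t = conv_len(T, 10)
for _ in range(3):
    t = conv_len(t, 10)
    t = conv_len(t, 4, 4)
deep_len = t

print(shallow_len, deep_len)
```

This kind of bookkeeping explains why the first fully connected layer's input size must change with the data set: the flattened feature length depends on T and on the branch depth.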
Step S3: the two training sets preprocessed in step S1 are respectively input into the built double-branch convolutional neural network, and three stages, source-domain pre-training, module-adaptive migration, and target-domain parameter fine-tuning, are performed in sequence to produce the final trained model; the training process is shown in fig. 3, and the specific training steps comprise:
step S3.1: source-domain pre-training: the double-branch convolutional neural network is trained on the source-domain data, which has a large data volume, so that the deep and shallow EEG branches extract the most discriminative features of the raw data; a domain adaptation method based on difference-feature alignment aligns the feature distributions of the target domain and the source domain, guaranteeing that the feature extraction network extracts the most discriminative EEG features on the target domain. The loss function for updating the parameters of the double-branch convolutional neural network and of the motor imagery intention classifier is:
l = l_cls + λ_d · l_d (1),

where l_cls denotes the classification loss, λ_d the weight of the domain loss, and l_d the domain loss function, in which:
l_cls: the classification loss, which ensures that the model can extract information relevant to the classification task. The classification loss is trained on both the source domain and the target domain; during domain adaptation it ensures that the model extracts the most discriminative of the source-domain features. The cross-entropy loss function is selected to compute it:

l_cls = L_CE(T_cls(F), y) (2),

where y denotes the task label; for the motor imagery task the true label is y = 0 if the input signal is left-hand motor imagery data, y = 1 for right-hand data, y = 2 for both-feet data, and y = 3 for tongue data;
l_d: the domain loss function, which ensures that each feature extraction branch can acquire the common features of the source and target domains. The electroencephalogram deep-and-shallow double-branch feature extractor computes, for each branch, the distribution difference between the features G_m(x_s) and G_m(x_t); optimizing the domain loss minimizes this difference on every representation, so that each feature extraction branch can acquire the common features of the source and target domains. The domain loss function is:

l_d = Σ_m L_d(G_m(x_s), G_m(x_t)) (3),

where L_d(·, ·) computes the distance between two features and evaluates the similarity of the two feature distributions of G_m(x_s) and G_m(x_t); it is calculated as:

L_d(F_s, F_t) = || E[φ(F_s)] − E[φ(F_t)] ||²_H (4).

The multi-kernel maximum mean discrepancy (Maximum Mean Discrepancy, MMD) is used to avoid constraint failure caused by a badly chosen kernel function: when calculating the MMD, a multi-kernel K replaces the kernel function in equation (4). The multi-kernel K is defined as:

K = Σ_u β_u k_u, with β_u ≥ 0 and Σ_u β_u = 1 (5),

where each kernel function k_u is weighted by a coefficient β_u and the weights are normalized so that the difference between distributions is measured accurately. The Gaussian radial basis kernel is selected as the mapping kernel:

k_u(F_s, F_t) = exp(−||F_s − F_t||² / (2σ_u²)) (6),

where ||F_s − F_t||² is the squared Euclidean distance between the two vectors and σ_u denotes the Gaussian kernel bandwidth; different Gaussian radial basis functions are obtained by setting different bandwidths;
step S3.2: module-adaptive migration, used to determine which features are suited to transfer to the target domain; the migration process is shown in fig. 4 and comprises:
firstly, based on the pre-trained electroencephalogram deep-and-shallow double-branch feature extractor, the deep and shallow EEG features of the source domain and the target domain are extracted, and the migration score of each branch is computed:

S_m = Σ_c1 || f_ce(G_m(x_s(c1))) − f_ce(G_m(x_t(c1))) ||_1, with f_ce(x(c1)) = (1/N_c1) Σ x(c1) (7),

where x_t(c1) denotes a target-domain sample with label c1, N_c1 the number of target-domain samples with label c1, and ||·||_1 the L1 norm measuring the distance between the feature distributions; the central feature f_ce(·) of each class's label-feature distribution represents the overall behaviour of that distribution;
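The migration score above can be sketched as follows. This is a minimal NumPy stand-in operating on already-extracted branch features: the function name and the toy feature arrays are assumptions, and it keeps only the idea stated in the text of comparing the per-class centre features of source and target with an L1 distance (a smaller score meaning the branch's features transfer better).

```python
import numpy as np

def migration_score(feat_src, feat_tgt, labels_src, labels_tgt):
    """Sum over classes of the L1 distance between source and target
    class-centre features (the centre f_ce is the per-class mean)."""
    score = 0.0
    for c in np.unique(labels_tgt):
        f_ce_src = feat_src[labels_src == c].mean(axis=0)
        f_ce_tgt = feat_tgt[labels_tgt == c].mean(axis=0)
        score += np.abs(f_ce_src - f_ce_tgt).sum()  # L1 norm
    return score

rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)                      # two classes, 50 samples each
fs = rng.normal(0, 1, (100, 16))               # one branch's source features
aligned = migration_score(fs, fs + rng.normal(0, 0.01, fs.shape), y, y)
drifted = migration_score(fs, fs + 2.0, y, y)  # constant 2.0 shift per dimension
```

A well-aligned branch yields a score near zero, while a branch whose target features have drifted scores much higher and is a candidate for fine-tuning rather than freezing.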
secondly, a Gaussian mixture model (Gaussian mixture model, GMM) determines which feature extraction branches of the electroencephalogram deep-and-shallow double-branch feature extractor should have their parameters frozen and which should be fine-tuned. The GMM is computed as:

p(S) = Σ_c p(c) · N(S | μ_c, Σ_c) (8),

where μ_c and Σ_c denote the mean and covariance matrix of cluster c, and p(c) denotes the probability of belonging to class c. The M feature extraction branches are clustered into two classes by the GMM according to their migration scores, and the class of feature extractors with the smaller μ_c is frozen;
again, the individualized migration strategy I = [I_1, I_2] is obtained adaptively from the distributions of the source-domain and target-domain features, where I_1 and I_2 indicate whether the parameters of the corresponding feature extraction branch are frozen: a value of 0 means the branch's parameters are not kept and must be fine-tuned on the target domain, and a value of 1 means they are frozen and kept;
finally, according to the value of each element of the individualized migration strategy I, the module migration method freezes the parameters of the feature extraction branches of the electroencephalogram deep-and-shallow double-branch feature extractor selected by the strategy. The module migration method is defined as:

F_t(m) = I_m · G_m(x_t) + (1 − I_m) · G~_m(x_t) (9),

where F_t(m) denotes the output feature, G~_m(·) the fine-tuned feature branch, G_m(·) a feature extraction branch whose parameters are frozen without fine-tuning, and I_m the flag of G_m deciding whether the branch's parameters are frozen or fine-tuned. When I_m = 1, the output feature F_t(m) is produced by the parameter-frozen extractor G_m and is a common feature of the source domain; when G_m's parameters need further fine-tuning and optimization on the target domain, the output feature F_t(m) is produced by the fine-tuned extractor G~_m and is a private feature of the target domain;
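The module migration of this step amounts to a per-branch switch between a frozen (shared) extractor and a fine-tuned (private) one. Below is a minimal sketch with toy stand-in branches; the function name, the lambda "branches", and the array shapes are all assumptions.

```python
import numpy as np

def apply_strategy(x_t, frozen, tuned, strategy):
    """Per branch m: output the frozen extractor G_m(x_t) when I_m = 1,
    otherwise the fine-tuned extractor's output; concatenate the results."""
    feats = [(frozen[m] if I_m == 1 else tuned[m])(x_t)
             for m, I_m in enumerate(strategy)]
    return np.concatenate(feats, axis=-1)

# Toy stand-ins for the two feature extraction branches
g_frozen = [lambda x: x * 1.0, lambda x: x * 2.0]    # source-domain (shared) features
g_tuned  = [lambda x: x + 10.0, lambda x: x + 20.0]  # target-domain (private) features

x = np.ones((4, 3))
F = apply_strategy(x, g_frozen, g_tuned, strategy=[1, 0])  # freeze branch 1, tune branch 2
```

In a real PyTorch model the same switch would be realized by setting `requires_grad = False` on the frozen branch's parameters rather than by swapping functions.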
step S3.3: target-domain parameter fine-tuning: the feature extraction branches left unfrozen in step S3.2 undergo a supervised classification training process on the target domain, and the motor imagery intention classifier is trained to adapt to the target-domain feature distribution. The unfrozen feature extraction branches and the classifier parameters are updated by computing the cross-entropy classification loss:

L_CE = − Σ_i y_i · log(ŷ_i) (10),

where ŷ denotes the classification result given by the network, y the true classification result, and log the logarithm with base 2.
Step S4: the two test sets preprocessed in step S1 are respectively input into the trained models to obtain the motor imagery intention decoding performance indices, comprising:

Since the motor imagery data sets are classification tasks with the same number of samples per class, the recognition accuracy Acc describes the classification performance of the electroencephalogram double-branch convolutional neural network model:

Acc = N_true / N_total,

where N_true denotes the number of correct classifications and N_total the total number of samples.

Since the data set used includes a four-class motor imagery task, the bias of the electroencephalogram double-branch convolutional neural network model is also taken as an evaluation index: the Kappa coefficient κ measures whether the model's classification is biased toward a particular class. The Kappa computation is based on the confusion matrix and takes values between −1 and 1:

κ = (P_0 − P_e) / (1 − P_e),

where P_0 is the classification accuracy and P_e the penalty term for biased classification:

P_e = (Σ_{i=1..c} a_i · b_i) / N_total²,

where c denotes the total number of classes, and a_i and b_i denote the sums of the i-th row and the i-th column of the confusion matrix respectively. By the Kappa formula, when the model's classification is biased toward some class, i.e. when the confusion matrix is unbalanced, the value of Kappa decreases.
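The two evaluation indices can be computed directly from the confusion matrix; below is a minimal NumPy sketch (the function name is assumed).

```python
import numpy as np

def acc_and_kappa(cm):
    """Accuracy and Kappa from a confusion matrix cm[i, j]
    (row i = true class, column j = predicted class)."""
    n = cm.sum()
    p0 = np.trace(cm) / n                 # classification accuracy P_0
    a = cm.sum(axis=1)                    # row sums a_i
    b = cm.sum(axis=0)                    # column sums b_i
    pe = float((a * b).sum()) / n ** 2    # chance term P_e
    return p0, (p0 - pe) / (1 - pe)

# Balanced, perfect four-class result: Kappa = 1
acc, kappa = acc_and_kappa(np.diag([25, 25, 25, 25]))

# A model that always predicts class 0: nontrivial accuracy but Kappa = 0
acc_biased, kappa_biased = acc_and_kappa(np.array([[50, 0], [50, 0]]))
```

The second example shows why Kappa is reported alongside Acc: a classifier biased toward one class keeps an accuracy of 0.5 on this two-class set, while its Kappa collapses to 0.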
When the method is used for the performance test, a time window of [−0.5 s, 4 s] (0 s being task-event onset) extracts the EEG signal of each trial in the test set; at a sampling frequency of 250 Hz, each trial's EEG signal contains 1125 sampling points. The divided data are input into the trained electroencephalogram double-branch convolutional neural network model for the system performance test, yielding the motion intention recognition accuracy Acc and the Kappa value, which are compared with the most advanced existing methods. The existing transfer learning methods compared are:
Transfer component analysis (transfer component analysis, TCA): a traditional domain adaptation method that computes a mapping matrix by minimizing the MMD between the source-domain and target-domain features, so that the projected features have similar distributions;
unsupervised domain adaptation (Unsupervised Domain Adaptation, UDA): the method reduces the distribution difference between the source and target domains by aligning subspaces. The method applies the motor imagery intention classifier learned from the source domain to the target domain, and can be used when the target domain has no label or few labels;
domain adaptive neural network (Domain Adaptation Neural Network, DANN): the method is inspired by generating an countermeasure network, and a domain discriminator is provided for reducing the difference of projection distribution of a target domain and a source domain;
depth adaptation network (Deep Adaptation Network, DAN): the method uses MK-MMD to replace MMD, thereby avoiding the influence of improper kernel function selection. At the same time, the distribution is adjusted by calculating the domain loss of the multilayer structure.
To verify the validity of the proposed method, performance tests were performed on the two datasets BCI IV 2a and BCI IV 2b. Tables 1 and 2 give detailed comparisons of the results of the method of the invention and of the other methods on the BCI IV 2a and BCI IV 2b datasets, respectively. As shown in Tables 1 and 2, the Acc and κ of the proposed method reach 81.3% and 0.781 on BCI IV 2a, and 85.6% and 0.738 on BCI IV 2b. Compared with the other methods, this is an improvement of at least 4.8% in Acc and 0.053 in κ on the BCI IV 2a dataset, and of at least 2.9% in Acc and 0.058 in κ on the BCI IV 2b dataset. In particular, on the BCI IV 2a dataset the four-class accuracy of subject A7 reaches 97.8%, and on the BCI IV 2b dataset subject B3 achieves an accuracy improvement of at least 25.5% over the other methods.
Table 1 Performance comparison of the motor imagery intention decoding method of the double-branch convolutional neural network based on adaptive transfer learning
Table 2 Classification performance of different transfer-learning methods on the BCI Competition IV 2b dataset
The motor imagery intention decoding method of the double-branch convolutional neural network based on adaptive transfer learning provided by the invention has been described in detail above, but the scope of the invention is not limited thereto. Various modifications of the embodiments described above fall within the scope of the invention, provided they do not depart from the scope of protection defined by the appended claims.

Claims (10)

1. An electroencephalogram double-branch convolutional neural network modeling method for motor imagery intention decoding of a double-branch convolutional neural network based on adaptive transfer learning, characterized by comprising the following steps:
a) Using a Pytorch deep learning framework, constructing an electroencephalogram deep and shallow dual-branch feature extractor, wherein the electroencephalogram deep and shallow dual-branch feature extractor comprises:
a1 A shallow feature extraction branch for extracting shallow feature information in an electroencephalogram signal, comprising:
firstly, an input sample x is fed sequentially into two consecutive convolutional layers, of which the first is a temporal convolution with kernel size 1×10, stride 1×1, and 40 output feature maps;
then, the second layer is a spatial convolution with kernel size E×1, stride 1×1, and 40 output feature maps; the two consecutive convolutions map the raw signal to temporal feature maps of different scales, and the spatial convolution linearly superimposes the signals of different acquisition channels to obtain global channel information, yielding a feature map of size 40×1×(T−10);
finally, the feature map is fed sequentially into a BN layer, a square non-linear layer, a max-pooling layer with pooling kernel size 1×75 and pooling stride 1×15, and a logarithmic non-linear activation layer, to obtain the shallow features of the electroencephalogram signal;
a2 A deep feature extraction branch for extracting deep feature information in an electroencephalogram signal, comprising:
firstly, the sample x is fed into a two-layer convolutional network consisting of a temporal convolution and a spatial convolution, with kernel sizes 1×10 and E×1 respectively, both with stride 1×1 and 25 output feature maps;
then, the feature maps are fed sequentially into combinations of 3 convolutional layers, 1 BN layer and 1 max-pooling layer, where the three convolutions all have kernel size 1×10 and output 50, 100 and 200 feature maps respectively, and the pooling layers have kernel size and stride 1×4, to obtain the deep features of the electroencephalogram signal;
b) Constructing a motor imagery intention classifier for inferring motor imagery intention from the extracted shallow features and deep features of the electroencephalogram signal, the motor imagery intention classifier comprising:
a dropout layer is added before the features obtained by the electroencephalogram deep and shallow double-branch feature extractor are fed into the motor imagery intention classifier, to avoid over-fitting of the model; the weight input size of the first fully connected layer varies with the features extracted from the different datasets, and the weight output size of the second fully connected layer varies with the number of classes of the four-class or two-class task.
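A minimal PyTorch sketch of the two branches and the classifier may clarify the claimed structure. The kernel sizes, strides and feature-map counts follow the claim; the ELU activations, the dropout rate of 0.5, the hidden width of 128 and the use of `LazyLinear` (to absorb the dataset-dependent input size) are assumptions not stated in the patent:

```python
import torch
import torch.nn as nn

class ShallowBranch(nn.Module):
    """Shallow branch: temporal conv (1x10, 40 maps), spatial conv (Ex1,
    40 maps), BN, square non-linearity, max-pool (1x75, stride 1x15), log."""
    def __init__(self, n_channels):
        super().__init__()
        self.temporal = nn.Conv2d(1, 40, kernel_size=(1, 10))
        self.spatial = nn.Conv2d(40, 40, kernel_size=(n_channels, 1))
        self.bn = nn.BatchNorm2d(40)
        self.pool = nn.MaxPool2d(kernel_size=(1, 75), stride=(1, 15))

    def forward(self, x):                       # x: (B, 1, E, T)
        h = self.spatial(self.temporal(x))
        h = torch.square(self.bn(h))            # square non-linearity
        h = torch.log(torch.clamp(self.pool(h), min=1e-6))  # log activation
        return torch.flatten(h, 1)

class DeepBranch(nn.Module):
    """Deep branch: temporal/spatial stem (25 maps) and three conv-BN-pool
    stages with 50/100/200 maps, conv kernel 1x10, pooling 1x4."""
    def __init__(self, n_channels):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 25, kernel_size=(1, 10)),
            nn.Conv2d(25, 25, kernel_size=(n_channels, 1)))
        blocks = []
        for c_in, c_out in [(25, 50), (50, 100), (100, 200)]:
            blocks += [nn.Conv2d(c_in, c_out, kernel_size=(1, 10)),
                       nn.BatchNorm2d(c_out), nn.ELU(),
                       nn.MaxPool2d(kernel_size=(1, 4), stride=(1, 4))]
        self.blocks = nn.Sequential(*blocks)

    def forward(self, x):
        return torch.flatten(self.blocks(self.stem(x)), 1)

class IntentClassifier(nn.Module):
    """Dropout followed by two fully connected layers; LazyLinear infers
    the dataset-dependent input size at the first forward pass."""
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Dropout(0.5), nn.LazyLinear(128),
                                 nn.ELU(), nn.Linear(128, n_classes))

    def forward(self, f):
        return self.net(f)

# hypothetical 22-channel, 1125-point input and a four-class task
shallow, deep, clf = ShallowBranch(22), DeepBranch(22), IntentClassifier(4)
x = torch.randn(2, 1, 22, 1125)
logits = clf(torch.cat([shallow(x), deep(x)], dim=1))
```

With this input size both branches happen to emit 2800-dimensional features, so the concatenated vector fed to the classifier has 5600 dimensions.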
2. The modeling method of an electroencephalogram double-branch convolutional neural network according to claim 1, characterized by further comprising:
step S3: the two preprocessed training sets are respectively input into the constructed double-branch convolutional neural network, and the three steps of source-domain pre-training, module adaptive migration and target-domain model parameter fine-tuning are performed in sequence to generate the final trained model, specifically comprising:
step S3.1: source-domain pre-training: the double-branch convolutional neural network is trained on the source-domain data, which has a large data volume, so that the electroencephalogram deep and shallow double branches extract the most discriminative features from the raw data; a domain-adaptation method based on aligning distribution differences aligns the feature distributions of the target domain and the source domain, ensuring that the feature-extraction network extracts the most discriminative electroencephalogram features on the target domain, wherein the loss function used to update the parameters of the double-branch convolutional neural network and of the motor imagery intention classifier is:
L = L_cls + λ_d · L_d
where L_cls represents the classification loss, λ_d the weight of the domain loss, and L_d the domain loss function, wherein,
the classification loss ensures that the model can extract information relevant to the classification task; it is trained on both the source domain and the target domain, and during domain adaptation it ensures that the model extracts the most discriminative components of the source-domain features; the cross-entropy loss function is chosen to compute the classification loss, with the following calculation formula:
in the formula, y represents the task label; for the motor imagery task, if the input signal is left-hand motor imagery data the true label is y = 0; if it is right-hand motor imagery data, y = 1; if it is both-feet motor imagery data, y = 2; and if it is tongue motor imagery data, y = 3;
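As an illustration of the classification loss with the label convention above, a short PyTorch sketch; the batch size and the random logits are stand-ins, not patent data:

```python
import torch
import torch.nn.functional as F

# label convention of the claim: 0 = left hand, 1 = right hand,
# 2 = both feet, 3 = tongue
logits = torch.randn(4, 4, requires_grad=True)  # (batch, classes), stand-in
y = torch.tensor([0, 1, 2, 3])                  # true labels

cls_loss = F.cross_entropy(logits, y)  # cross-entropy classification loss
cls_loss.backward()                    # gradients flow back to the extractor
```

In the full method this loss would be combined with the weighted domain loss before the backward pass.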
domain loss function: to ensure that each feature-extraction branch can capture the features common to the source domain and the target domain, the electroencephalogram deep and shallow double-branch feature extractor computes the distribution difference between G_m(x_s) and G_m(x_t) for each feature independently; by optimizing the domain loss function, the distribution difference between G_m(x_s) and G_m(x_t) is minimized on each representation, so that each feature-extraction branch can capture the common features of the source domain and the target domain, wherein the domain loss function is:
in the formula, L_d(·,·) is used to compute the distance between two features and can evaluate the similarity between the two feature-distribution probability functions; its calculation formula is:
the multi-kernel maximum mean discrepancy (MK-MMD) is used to avoid constraint failure caused by a wrongly chosen kernel function; when computing the MMD, a multi-kernel K replaces the kernel function in formula (4), the multi-kernel K being defined as:
in the formula, each kernel function k_u is assigned a weight β_u, and the weights are normalized so that the differences between different distributions are measured accurately; the Gaussian radial basis function is chosen as the mapping kernel function, with the following calculation formula:
in the formula, ||F_s − F_t||² computes the squared Euclidean distance between two vectors, and σ_u represents the Gaussian kernel size; different Gaussian radial basis functions are obtained by setting different Gaussian kernel sizes;
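The MK-MMD with Gaussian radial basis kernels described above can be sketched as follows; the particular kernel sizes σ_u and the equal normalized weights β_u = 1/U are illustrative assumptions, and the biased pairwise estimator is one common choice:

```python
import torch

def gaussian_kernel(x, y, sigma):
    """k_u(a, b) = exp(-||a - b||^2 / (2 * sigma_u^2)) for all sample pairs."""
    return torch.exp(-torch.cdist(x, y).pow(2) / (2 * sigma ** 2))

def mk_mmd(fs, ft, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Multi-kernel MMD^2 between source features fs and target features ft,
    with normalized kernel weights beta_u = 1 / len(sigmas)."""
    beta = 1.0 / len(sigmas)
    mmd2 = fs.new_zeros(())
    for sigma in sigmas:
        k_ss = gaussian_kernel(fs, fs, sigma).mean()
        k_tt = gaussian_kernel(ft, ft, sigma).mean()
        k_st = gaussian_kernel(fs, ft, sigma).mean()
        mmd2 = mmd2 + beta * (k_ss + k_tt - 2.0 * k_st)
    return mmd2

torch.manual_seed(0)
fs = torch.randn(64, 16)        # source-domain branch features (stand-in)
ft = torch.randn(64, 16) + 5.0  # clearly shifted target-domain features
```

For identical feature sets the estimator is zero, while the shifted target features above yield a clearly positive discrepancy, which is what the domain loss penalizes.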
step S3.2: module adaptive migration is performed to obtain the features that can be adaptively migrated to the target domain, comprising:
firstly, based on the pre-trained electroencephalogram deep and shallow double-branch feature extractor, the electroencephalogram deep features and the electroencephalogram shallow features are extracted from the source domain and the target domain respectively, and the migration scores of the deep and shallow features are computed with the following calculation formula:
wherein x_t(c1) denotes a target-domain sample with label c1, N_c1 the number of target-domain samples with label c1, and ||·||_1 the L1 norm used to measure the distance between feature distributions; the central feature f_ce(·) of each label's feature distribution represents the overall behaviour of that label's feature distribution,
secondly, the feature-extraction branches of the electroencephalogram deep and shallow double-branch feature extractor whose parameters should be frozen and those whose parameters should be fine-tuned are obtained through a Gaussian mixture model (GMM), whose calculation formula is:
wherein μ_c and Σ_c represent the mean and the covariance matrix of cluster c respectively, and p(c) represents the probability of belonging to class c; after GMM clustering, the M feature-extraction branches of the electroencephalogram deep and shallow double-branch feature extractor are grouped into two classes according to their migration scores, and the class of feature extractors with the smaller μ_c will be frozen;
again, the personalized migration strategy I = [I_1, I_2] is obtained adaptively from the distributions of the source-domain features and the target-domain features, where I_1 and I_2 indicate whether the parameters of the corresponding feature-extraction branch are frozen: a value of 0 means that the parameters of the branch with that index are not retained and must be fine-tuned on the target domain; a value of 1 means that the parameters of the branch with that index are frozen and retained;
Finally, according to the value of each element in the individuation migration strategy I, extracting branches from each characteristic in the electroencephalogram deep and shallow layer double-branch characteristic extractor, and freezing parameters of the electroencephalogram deep and shallow layer double-branch characteristic extractor screened out by the individuation migration strategy by adopting a module migration method, wherein the definition formula of the module migration method is as follows:
wherein F is t(m) The output characteristics are represented by a set of values,for fine-tuned feature branching, G m Representing feature extraction branches in an electroencephalogram deep and shallow dual-branch feature extractor with parameters frozen without trimming, I m Represents G m A flag for judging whether the parameters of the feature extraction branch need to be frozen or fine-tuned, when I m When=1, output characteristic F t(m) Feature extractor G for parameter freezing m The common characteristics of the source domains are output, if the G is needed m Further fine tuning and optimizing parameters of (a) on a target domain, and outputting characteristics F t(m) For feature extractor after trimming->The output private characteristics of the target domain;
step S3.3: target-domain model parameter fine-tuning: in a supervised classification training process on the target domain, the feature-extraction branches left unfrozen in step S3.2 are fine-tuned and the motor imagery intention classifier is trained to fit the feature distribution of the target domain; the parameters of the unfrozen feature-extraction branches and of the motor imagery intention classifier are updated by computing the cross-entropy classification loss, with the following calculation formula:
where ŷ represents the classification result given by the network, y the true classification result, and log the base-2 logarithm.
3. The modeling method of the electroencephalogram double-branch convolutional neural network according to claim 2, wherein the two preprocessed training sets are obtained through the following preprocessing steps:
step S1: the pre-acquired public motor imagery datasets BCI IV 2a and BCI IV 2b are preprocessed, source-domain data and target-domain data are established for each of the two datasets, all of the source-domain data together with part of the target-domain data are taken as the training set, and the remaining part of the target-domain data is taken as the test set.
4. The modeling method of the electroencephalogram double-branch convolutional neural network according to claim 3, wherein:
the step S1 includes:
s1.1: baseline calibration, comprising: to avoid the baseline drift that easily occurs during long-duration acquisition of electroencephalogram signals, an empirical mode decomposition method is used to calibrate the baseline, while a direct-removal method is used to remove noise and artifacts from the electroencephalogram signals;
s1.2: filtering, comprising: to remove high-frequency noise interference and DC interference from the electroencephalogram signals, low-pass filtering is used to remove high-frequency noise above 64 Hz, and the 50 Hz power-line interference is removed at the same time;
S1.3: sliding-window data division, comprising: so that the electroencephalogram double-branch convolutional neural network can take continuous electroencephalogram data as input, a sliding window is used to cut out the 1.5 s-6 s data segment of each motor imagery trial in the BCI IV 2a data as one four-class motor imagery electroencephalogram sample, and a sliding window is used to cut out the 2.5 s-7 s data segment of each motor imagery trial in the BCI IV 2b data as one two-class motor imagery electroencephalogram sample.
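An illustrative SciPy sketch of the filtering (S1.2) and segment cutting (S1.3); the filter order and the notch Q factor are assumptions not given in the patent, and the empirical-mode-decomposition baseline calibration of S1.1 is omitted:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 250  # sampling frequency (Hz)

def preprocess(raw):
    """Low-pass below 64 Hz, then notch out the 50 Hz mains interference."""
    b, a = butter(4, 64, btype="low", fs=FS)       # 4th order is an assumption
    x = filtfilt(b, a, raw, axis=-1)
    b, a = iirnotch(50, Q=30, fs=FS)               # Q = 30 is an assumption
    return filtfilt(b, a, x, axis=-1)

def slice_segment(trial, t0, t1):
    """Cut the [t0, t1] s segment of one motor-imagery trial."""
    return trial[:, int(t0 * FS):int(t1 * FS)]

trial = preprocess(np.random.randn(22, 8 * FS))    # hypothetical 8 s trial
sample_2a = slice_segment(trial, 1.5, 6.0)         # BCI IV 2a window
sample_2b_like = slice_segment(trial, 2.5, 7.0)    # BCI IV 2b window
```

Both windows span 4.5 s, i.e. 1125 samples at 250 Hz, consistent with the sample length used elsewhere in the document.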
5. A motor imagery intention decoding method of a double-branch convolutional neural network based on adaptive transfer learning is characterized by comprising the following steps:
step S1: the pre-acquired public motor imagery datasets BCI IV 2a and BCI IV 2b are preprocessed, source-domain data and target-domain data are established for each of the two datasets, all of the source-domain data together with part of the target-domain data are taken as the training set, and the remaining part of the target-domain data is taken as the test set;
step S2: performing a modeling method of the electroencephalogram double-branch convolutional neural network according to claim 1;
step S3: the two training sets preprocessed in step S1 are respectively input into the constructed model and trained by the adaptive transfer-learning method, and the model parameters are fine-tuned using the training data in the target domain to generate the final trained model;
Step S4: respectively inputting the two test sets preprocessed in the step S1 into a trained model to obtain the motor imagery intention decoding performance index,
wherein:
step S1 comprises baseline calibration, filtering and sliding-window data division, wherein the sliding window divides the continuous electroencephalogram signal into non-overlapping signal segments of length 5 s;
the motor imagery electroencephalogram dataset of the i-th subject is denoted D_i = {(x_1, y_1), ..., (x_N, y_N)}, wherein:
N represents the number of electroencephalogram signal segments of the subject,
the continuous electroencephalogram signal is cut, according to a fixed duration, into input samples of equal size x ∈ R^(E×T), where E is the number of electroencephalogram acquisition channels and T is the number of sampling points contained in the current sample,
y_i is the label of each segment, with 0 representing a left-hand motor imagery intention, 1 a right-hand motor imagery intention, 2 a tongue motor imagery intention, and 3 a both-feet motor imagery intention, wherein the BCI IV 2a dataset is a four-class task, i.e. left hand, right hand, tongue and both feet, and the BCI IV 2b dataset is a two-class task, i.e. left hand and right hand.
6. The motor imagery decoding method of a double-branch convolutional neural network based on adaptive transfer learning of claim 5, wherein step S3 comprises:
the two training sets preprocessed in step S1 are respectively input into the constructed double-branch convolutional neural network, and the three steps of source-domain pre-training, module adaptive migration and target-domain model parameter fine-tuning are performed in sequence to generate the final trained model, specifically comprising:
step S3.1: source-domain pre-training: the double-branch convolutional neural network is trained on the source-domain data, which has a large data volume, so that the electroencephalogram deep and shallow double branches extract the most discriminative features from the raw data; a domain-adaptation method based on aligning distribution differences aligns the feature distributions of the target domain and the source domain, ensuring that the feature-extraction network extracts the most discriminative electroencephalogram features on the target domain, wherein the loss function used to update the parameters of the double-branch convolutional neural network and of the motor imagery intention classifier is:
L = L_cls + λ_d · L_d
where L_cls represents the classification loss, λ_d the weight of the domain loss, and L_d the domain loss function, wherein,
the classification loss ensures that the model can extract information relevant to the classification task; it is trained on both the source domain and the target domain, and during domain adaptation it ensures that the model extracts the most discriminative components of the source-domain features; the cross-entropy loss function is chosen to compute the classification loss, with the following calculation formula:
in the formula, y represents the task label; for the motor imagery task, if the input signal is left-hand motor imagery data the true label is y = 0; if it is right-hand motor imagery data, y = 1; if it is both-feet motor imagery data, y = 2; and if it is tongue motor imagery data, y = 3;
domain loss function: to ensure that each feature-extraction branch can capture the features common to the source domain and the target domain, the electroencephalogram deep and shallow double-branch feature extractor computes the distribution difference between G_m(x_s) and G_m(x_t) for each feature independently; by optimizing the domain loss function, the distribution difference between G_m(x_s) and G_m(x_t) is minimized on each representation, so that each feature-extraction branch can capture the common features of the source domain and the target domain, wherein the domain loss function is:
in the formula, L_d(·,·) is used to compute the distance between two features and can evaluate the similarity between the two feature-distribution probability functions; its calculation formula is:
to avoid constraint failure caused by choosing a wrong kernel function, the multi-kernel maximum mean discrepancy (MK-MMD) is used; when computing the MMD, a multi-kernel K replaces the kernel function in formula (4), the multi-kernel K being defined as:
in the formula, each kernel function k_u is assigned a weight β_u, and the weights are normalized so that the differences between different distributions are measured accurately; the Gaussian radial basis function is chosen as the mapping kernel function, with the following calculation formula:
in the formula, ||F_s − F_t||² computes the squared Euclidean distance between two vectors, and σ_u represents the Gaussian kernel size and controls the local influence range of the Gaussian radial basis function; different Gaussian radial basis functions are obtained by setting different Gaussian kernel sizes;
step S3.2: module adaptive migration is performed to find the features suitable for migration to the target domain, comprising:
firstly, based on the pre-trained electroencephalogram deep and shallow double-branch feature extractor, the electroencephalogram deep features and the electroencephalogram shallow features are extracted from the source domain and the target domain respectively, and the migration scores of the deep and shallow features are computed with the following calculation formula:
wherein x_t(c1) denotes a target-domain sample with label c1, N_c1 the number of target-domain samples with label c1, and ||·||_1 the L1 norm used to measure the distance between feature distributions; the central feature f_ce(·) of each label's feature distribution is selected to represent the overall behaviour of that label's feature distribution,
secondly, the feature-extraction branches of the electroencephalogram deep and shallow double-branch feature extractor whose parameters should be frozen and those whose parameters should be fine-tuned are obtained through a GMM clustering method, with the following GMM clustering calculation formula:
wherein μ_c and Σ_c represent the mean and the covariance matrix of cluster c respectively, and p(c) represents the probability of belonging to class c; after GMM clustering, the M feature-extraction branches of the electroencephalogram deep and shallow double-branch feature extractor are grouped into two classes according to their migration scores, and the class of feature extractors with the smaller μ_c will be frozen;
thirdly, the personalized migration strategy I = [I_1, I_2] of each individual is obtained adaptively from the distributions of the source-domain features and the target-domain features, where I_1 and I_2 indicate whether the parameters of the corresponding feature-extraction branch are frozen: a value of 0 means that the parameters of the branch with that index are not retained and must be fine-tuned on the target domain; a value of 1 means that the parameters of the branch with that index are frozen and retained;
Finally, according to the value of each element in the individuation migration strategy I, extracting branches from each characteristic in the electroencephalogram deep and shallow layer double-branch characteristic extractor, and freezing parameters of the electroencephalogram deep and shallow layer double-branch characteristic extractor screened out by the individuation migration strategy by adopting a module migration method, wherein the definition formula of the module migration method is as follows:
wherein F is t(m) The output characteristics are represented by a set of values,for fine-tuned feature branching, G m Representing feature extraction branches in an electroencephalogram deep and shallow dual-branch feature extractor with parameters frozen without trimming, I m Represents G m A flag for judging whether the parameters of the feature extraction branch need to be frozen or fine-tuned, when I m When=1, output characteristic F t(m) Feature extractor G for parameter freezing m The common characteristics of the source domains are output, if the G is needed m Further fine tuning and optimizing parameters of (a) on a target domain, and outputting characteristics F t(m) For feature extractor after trimming->The output private characteristics of the target domain;
step S3.3: target-domain model parameter fine-tuning: in a supervised classification training process on the target domain, the feature-extraction branches left unfrozen in step S3.2 are fine-tuned and the motor imagery intention classifier is trained to fit the feature distribution of the target domain; the parameters of the unfrozen feature-extraction branches and of the motor imagery intention classifier are updated by computing the cross-entropy classification loss, with the following calculation formula:
where ŷ represents the classification result given by the network, y the true classification result, and log the base-2 logarithm.
7. The motor imagery decoding method of the double-branch convolutional neural network based on adaptive transfer learning of claim 6, wherein:
the step S4 includes:
the motor imagery task dataset is treated as a classification task with equal numbers of samples per class, and the recognition accuracy Acc is used to describe the classification performance of the double-branch convolutional neural network model, with the following calculation formula:
where N_true represents the number of correct classifications and N_total represents the total number of samples,
since the dataset used is a four-class motor imagery task, the bias of the double-branch convolutional neural network model is taken as an evaluation index, and the Kappa coefficient κ is used to measure whether the model's classification is biased toward a certain class; the Kappa coefficient is computed from the confusion matrix and takes values between −1 and 1, with the following calculation formula:
p in the formula 0 For classification accuracy, P e To classify penalty terms with bias, the formula is:
wherein c represents the total number of classes, and a_i and b_i represent the sums of the i-th row and the i-th column of the confusion matrix respectively; thus, according to the Kappa calculation formula, when the classification of the double-branch convolutional neural network model is biased toward a certain class, i.e. the confusion matrix is unbalanced, the value of Kappa decreases.
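The Kappa computation above can be checked with a short NumPy sketch; the confusion matrices are invented examples, one balanced and one biased toward class 0:

```python
import numpy as np

def kappa_from_confusion(cm):
    """Cohen's kappa: (P0 - Pe) / (1 - Pe), with P0 = trace(cm) / N and
    Pe = sum_i(a_i * b_i) / N^2, where a_i and b_i are the i-th row and
    column sums of the confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p0 = np.trace(cm) / n                                  # accuracy
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2  # bias penalty
    return (p0 - pe) / (1 - pe)

balanced = np.diag([25, 25, 25, 25])      # perfect, unbiased classifier
biased = np.array([[25, 0, 0, 0],
                   [10, 15, 0, 0],
                   [10, 0, 15, 0],
                   [10, 0, 0, 15]])       # predictions leak toward class 0
```

The balanced matrix yields κ = 1, while the biased matrix yields a lower κ (0.6 here) even though its accuracy is still 70%, illustrating why the patent uses κ alongside Acc.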
8. The method for decoding motor imagery of a double-branch convolutional neural network based on adaptive transfer learning according to claim 7, wherein the method is characterized by:
in the step S4:
when the motor imagery intention decoding performance indices are computed, a time window from 0.5 s before the occurrence of the motor imagery task event to 4 s after it is selected to extract the electroencephalogram signal of each trial in the test set; at a sampling frequency of 250 Hz, the electroencephalogram signal of each trial contains 1125 sampling points; the divided data are input into the trained electroencephalogram double-branch convolutional neural network model, and the intention-decoding accuracy, Kappa value and decoding time are used to evaluate the electroencephalogram decoding performance of the model.
9. The method for decoding motor imagery of the double-branch convolutional neural network based on adaptive transfer learning according to claim 5, wherein the method is characterized by:
the step S1 includes:
s1.1: baseline calibration, comprising: to avoid the baseline drift that easily occurs during long-duration acquisition of electroencephalogram signals, an empirical mode decomposition method is used to calibrate the baseline, while a direct-removal method is used to remove noise and artifacts from the electroencephalogram signals;
s1.2: filtering, comprising: to remove high-frequency noise interference and DC interference from the electroencephalogram signals, low-pass filtering is used to remove high-frequency noise above 64 Hz, and the 50 Hz power-line interference is removed at the same time;
S1.3: sliding-window data division, comprising: so that the electroencephalogram double-branch convolutional neural network can take continuous electroencephalogram data as input, a sliding window is used to cut out the 1.5 s-6 s data segment of each motor imagery trial in the BCI IV 2a data as one four-class motor imagery electroencephalogram sample, and a sliding window is used to cut out the 2.5 s-7 s data segment of each motor imagery trial in the BCI IV 2b data as one two-class motor imagery electroencephalogram sample.
10. A computer readable storage medium storing a computer executable program enabling a processor to perform the method according to one of claims 1-9.
CN202311236959.0A 2023-09-23 2023-09-23 Double-branch convolutional neural network motor imagery intention decoding method based on self-adaptive transfer learning Pending CN117193537A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311236959.0A CN117193537A (en) 2023-09-23 2023-09-23 Double-branch convolutional neural network motor imagery intention decoding method based on self-adaptive transfer learning


Publications (1)

Publication Number Publication Date
CN117193537A true CN117193537A (en) 2023-12-08

Family

ID=88997812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311236959.0A Pending CN117193537A (en) 2023-09-23 2023-09-23 Double-branch convolutional neural network motor imagery intention decoding method based on self-adaptive transfer learning

Country Status (1)

Country Link
CN (1) CN117193537A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117435916A (en) * 2023-12-18 2024-01-23 四川云实信息技术有限公司 Self-adaptive migration learning method in aerial photo AI interpretation
CN117435916B (en) * 2023-12-18 2024-03-12 四川云实信息技术有限公司 Self-adaptive migration learning method in aerial photo AI interpretation

Similar Documents

Publication Publication Date Title
CN113627518B (en) Method for realizing neural network brain electricity emotion recognition model by utilizing transfer learning
CN109036553B (en) Disease prediction method based on automatic extraction of medical expert knowledge
CN110693493B (en) Epilepsia electroencephalogram prediction feature extraction method based on convolution and recurrent neural network combined time multiscale
Perna Convolutional neural networks learning from respiratory data
KR20170082440A (en) Method and apparatus for electrocardiogram authentication
CN110680313B (en) Epileptic period classification method based on pulse group intelligent algorithm and combined with STFT-PSD and PCA
Wang et al. Weakly supervised lesion detection from fundus images
CN110522412B (en) Method for classifying electroencephalogram signals based on multi-scale brain function network
CN110289081B (en) Epilepsia detection method based on deep network stack model self-adaptive weighting feature fusion
CN110309758B (en) Electrocardiosignal feature extraction method and device, computer equipment and storage medium
CN112329609A (en) Feature fusion transfer learning arrhythmia classification system based on 2D heart beat
Rashid et al. Artificial immune system–negative selection classification algorithm (NSCA) for four class electroencephalogram (EEG) signals
CN117193537A (en) Double-branch convolutional neural network motor imagery intention decoding method based on self-adaptive transfer learning
CN117009780A (en) Space-time frequency domain effective channel attention motor imagery brain electrolysis code method based on contrast learning
De et al. An adaptive vector quantization approach for image segmentation based on SOM network
Thenmozhi et al. Feature selection using extreme gradient boosting Bayesian optimization to upgrade the classification performance of motor imagery signals for BCI
Ganeshbabu Glaucoma image classification using discrete orthogonal stockwell transform
Hwaidi et al. A noise removal approach from eeg recordings based on variational autoencoders
CN114027786B (en) Sleep breathing disorder detection method and system based on self-supervision type memory network
CN112259228B (en) Depression screening method by dynamic attention network non-negative matrix factorization
CN115017960B (en) Electroencephalogram signal classification method based on space-time combined MLP network and application
CN116831594A (en) Epileptic electroencephalogram classification method based on iterative graph convolution neural network
CN117503157A (en) Electroencephalogram signal emotion recognition method based on SGCRNN model
CN116522106A (en) Motor imagery electroencephalogram signal classification method based on transfer learning parallel multi-scale filter bank time domain convolution
CN113066544B (en) FVEP characteristic point detection method based on CAA-Net and LightGBM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination