CN118228129B - Motor imagery electroencephalogram signal classification method based on deep migration learning - Google Patents

Motor imagery electroencephalogram signal classification method based on deep transfer learning

Info

Publication number
CN118228129B
CN118228129B CN202410635143.3A
Authority
CN
China
Prior art keywords
data
model
electroencephalogram
network
deep
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410635143.3A
Other languages
Chinese (zh)
Other versions
CN118228129A (en)
Inventor
张秀梅
刘方达
李慧
夏常磊
崔维波
周凯龙
张泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University of Technology
Original Assignee
Changchun University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun University of Technology filed Critical Changchun University of Technology
Priority to CN202410635143.3A priority Critical patent/CN118228129B/en
Publication of CN118228129A publication Critical patent/CN118228129A/en
Application granted granted Critical
Publication of CN118228129B publication Critical patent/CN118228129B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/2431 Multiple classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/096 Transfer learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08 Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12 Classification; Matching
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention provides a motor imagery electroencephalogram signal classification method based on deep transfer learning, relating to the field of brain-computer interface data classification with neural networks and machine learning. The method comprises the following steps: S1, acquiring electroencephalogram signals and preprocessing the electroencephalogram data; S2, constructing an EEGNet-Attention-ResNet classification model; S3, sharing the shallow-network parameters of the source domain and the target domain through Euclidean alignment, and achieving domain adaptation of the deep network across subjects by measuring distribution differences with the maximum mean discrepancy (MMD). Compared with the prior art, the method performs model adaptation by fine-tuning and sharing shallow-network parameters and by adapting the deep network, which alleviates the low classification accuracy caused by insufficient data, improves cross-subject classification accuracy of electroencephalogram signals, completes classification tasks for different subjects well, and can be widely applied in fields such as medical health.

Description

Motor imagery electroencephalogram signal classification method based on deep transfer learning
Technical Field
The invention belongs to the field of brain-computer interface data classification with neural networks and machine learning, and particularly relates to a motor imagery electroencephalogram signal classification method based on deep transfer learning.
Background
The brain-computer interface offers a new mode of human-machine communication: it converts brain intentions into actual commands that control external equipment. Motor imagery, the mental rehearsal of a movement without executing it, is one of the classical brain-computer interface paradigms. Through motor imagery or a motor intention, brain activity can be converted into control signals, which can be captured and identified from the electroencephalogram (EEG). Current research shows that when people imagine moving their own body, the brain signals change markedly under the influence of the motor imagery task, so decoding motor imagery EEG signals has potential application value.
Current motor-imagery-based brain-computer interface research faces three major challenges: a low signal-to-noise ratio, the inherent non-stationarity of recorded EEG signals, and the variability among different subjects. Many machine-learning-based methods have been proposed for these three challenges, but machine learning works well only with large amounts of data from which to learn the data distribution. In brain-computer interface applications, however, re-collecting the required training data is time consuming and cumbersome, so conventional machine learning methods alone are far from sufficient to address inter-individual variability with few training samples. Transfer learning is a branch of machine learning that focuses on storing the knowledge obtained when solving one problem and applying it to a different but related problem.
In image processing and natural language processing, and even in brain-computer interface research, transfer learning has demonstrated substantial value and influence. Transfer learning can map features from different fields into a unified feature space, increase the amount of usable data, overcome the problem of limited data volume, and reduce distribution differences. It can mitigate the over-fitting caused by insufficient data and improve the generalization ability of an EEG recognition model. Although many mature algorithms can decode motor imagery EEG signals accurately, their classification accuracy still has room for improvement. For patients who need such systems in medical rehabilitation, classification accuracy varies significantly between individuals, which limits the maturity and range of application of these methods.
Disclosure of Invention
Aiming at the problems of long training time and poor cross-subject classification in existing motor imagery EEG classification, the invention provides a motor imagery electroencephalogram signal classification method based on deep transfer learning. A deep transfer learning algorithm is designed on top of an EEGNet-Attention-ResNet model, so that a transfer module can be integrated into the EEGNet-Attention-ResNet model. This helps the target domain train a reliable classification model using source-domain information, reducing the number of training samples required from target-domain subjects and shortening their training time.
In order to achieve the above purpose, the present invention provides the following technical solution: a motor imagery electroencephalogram signal classification method based on deep transfer learning, comprising the following steps:
Step S1: acquire a data set of motor imagery electroencephalogram signals and preprocess it.
The preprocessing module filters the electroencephalogram signals with a band-pass filter and removes artifacts to obtain a preprocessed data set, which is divided into a training set and a testing set based on the source-domain subject data.
Step S11: assume that EEG data from different subjects are treated as different domains. The labeled source domain is denoted $D_s$ and the unlabeled target domain is denoted $D_t$; each sample $X \in \mathbb{R}^{n \times m}$ represents multi-channel EEG data with n channels and m sample points, and $y_i^s$ denotes the source label of the i-th sample. Based on a deep transfer learning model, knowledge can be extracted from the source domain $D_s$ to strengthen prediction and classification on the target domain $D_t$ and to reduce the distribution shift between the data domains.
Step S12: preprocess the electroencephalogram data set from step S11 with a band-pass filter to remove artifacts from the EEG signals and obtain preprocessed electroencephalogram data.
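As a purely illustrative sketch of this preprocessing step (the patent does not specify the pass band, filter order, sampling rate, or implementation; the 4-40 Hz band and the SciPy Butterworth filter below are assumptions), band-pass filtering of EEG trials might look like this:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_filter(eeg: np.ndarray, fs: float = 250.0,
                    low: float = 4.0, high: float = 40.0, order: int = 4) -> np.ndarray:
    """Zero-phase band-pass filter applied to each channel of one EEG trial.

    eeg : array of shape (n_channels, n_samples)
    fs  : sampling rate in Hz (assumed value)
    """
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    # filtfilt gives zero-phase filtering, so motor-imagery rhythms are not shifted in time
    return filtfilt(b, a, eeg, axis=-1)

# Example: filter a batch of trials shaped (n_trials, n_channels, n_samples)
trials = np.random.randn(10, 22, 1000)          # placeholder data
filtered = np.stack([bandpass_filter(t) for t in trials])
```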
Step S13: divide the electroencephalogram data set preprocessed in step S12 into a training set and a testing set.
Step S2: train the classification model with the EEGNet-Attention-ResNet algorithm. The invention extracts features from the preprocessed source-domain motor imagery EEG data, processes the features through a fully connected layer, and finally obtains and outputs the classification result. The classification model adds spatial and channel attention mechanisms and a ResNet residual network on top of the EEGNet neural network, namely the EEGNet-Attention-ResNet model; the EEGNet-based model is trained on the training set and evaluated on the testing set, yielding the convolutional neural network model.
Step S21: a dual self-attention mechanism, namely spatial and channel attention, is added on top of the EEGNet neural network layers. Because EEG signals carry abundant information across space and EEG channels, extracting more of this useful information can greatly improve classification accuracy.
Step S211: the EEG features extracted by the EEGNet network are taken as input, the importance of the channels in the EEG signal is learned by the ECA module, and the learned channel weights are multiplied with the input features to obtain the channel attention features. The ECA channel attention module first aggregates the convolutional features with a global average pooling layer to obtain a feature vector, then adaptively determines the kernel size K and uses a one-dimensional convolution to learn the channel weights, completing cross-channel information interaction. The module uses a local cross-channel interaction strategy without dimensionality reduction, which avoids the adverse effects that dimensionality reduction would introduce.
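The following is a minimal PyTorch sketch of an ECA-style channel attention block of the kind described in step S211; the module name, the kernel-size heuristic, and the tensor layout are assumptions rather than details taken from the patent:

```python
import math
import torch
import torch.nn as nn

class ECAAttention(nn.Module):
    """Efficient Channel Attention: global average pooling + 1-D conv across channels."""
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Adaptively choose the 1-D kernel size k from the channel count (common ECA heuristic)
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W) feature map from the EEGNet stage
        y = self.pool(x)                                    # (B, C, 1, 1)
        y = y.squeeze(-1).transpose(1, 2)                   # (B, 1, C)
        y = self.conv(y)                                    # local cross-channel interaction, no dimensionality reduction
        y = self.sigmoid(y).transpose(1, 2).unsqueeze(-1)   # (B, C, 1, 1)
        return x * y                                        # re-weight input features by channel importance
```

The one-dimensional convolution acts directly on the pooled channel descriptor, so channel relationships are modeled locally without squeezing the channel dimension.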
Step S212: the channel attention output features are taken as input, the importance of spatial positions is learned by the CBAM module, and the learned spatial position weights are multiplied with the channel attention features to obtain the final result. The CBAM attention module performs maximum pooling and average pooling on the feature information, concatenates the resulting feature maps and applies a convolution, normalizes the weights with a Sigmoid function, and finally multiplies the weights with the input feature map to obtain the final result. By learning spatial attention weights, the network focuses on important spatial feature information, suppresses redundant and non-critical information, and improves the performance and generalization ability of the whole network.
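A matching sketch of the CBAM-style spatial attention of step S212 (the 7x7 convolution kernel is an assumed, commonly used choice, not specified in the patent):

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """CBAM spatial attention: pool over channels, convolve, then re-weight positions."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: channel-attention output of shape (batch, channels, H, W)
        max_map, _ = torch.max(x, dim=1, keepdim=True)            # (B, 1, H, W)
        avg_map = torch.mean(x, dim=1, keepdim=True)              # (B, 1, H, W)
        attn = self.conv(torch.cat([max_map, avg_map], dim=1))    # (B, 1, H, W)
        return x * self.sigmoid(attn)                             # emphasise important spatial positions
```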
The weights of the dual self-attention mechanism are feature-adaptive; they strengthen the relevance among channels and the importance of spatial positions, suppress non-critical information, and improve the representation of the EEG data and the perception ability of the network.
Step S22: a ResNet residual network is added on the basis of step S21. As the number of network layers grows, the recognition rate can drop sharply, the consumption of computing resources increases, and gradients may vanish or explode. By introducing the ResNet residual network with shortcut connections, the network only needs to learn the residual mapping between the feature map extracted by the attention mechanism and the output, rather than the complete mapping, which alleviates gradient vanishing and network degradation.
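A short sketch of the kind of residual block with a shortcut connection meant here; the channel count, kernel size, and activation are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic ResNet-style block: learn the residual mapping F(x) and add the shortcut x."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size, padding=pad, bias=False),
            nn.BatchNorm2d(channels),
            nn.ELU(),
            nn.Conv2d(channels, channels, kernel_size, padding=pad, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Shortcut connection: output = activation(F(x) + x), so only the residual must be learned
        return self.act(self.body(x) + x)
```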
Step S23: extract features of the EEG signals with the model and classify them through the fully connected layer.
Step S24: finally, update the network weights with the Adam optimization algorithm.
Step S3: construct the transfer learning model.
Model adaptation is performed and a deep transfer learning model is constructed, so that the model learned from source-domain EEG signals can be applied to the target-domain motor imagery EEG signals, completing accurate feature extraction and classification of the motor imagery EEG signals.
Step S31: first, input the preprocessed channel data of the source domain and the target domain together with the target-domain EEG signal data.
Step S32: in the shallower network layers, EEG features are usually general across domains. Alignment of these general features is achieved with a Euclidean alignment method: a feature alignment module, namely the Euclidean alignment module, is added after the temporal and spatial filters of EEGNet, so that the feature extraction model trained in step S2 aligns the data and shares the shallow-layer parameters of the source-domain data with those of the target-domain data. The Euclidean alignment is computed as follows:
$$\bar{R}=\frac{1}{n}\sum_{i=1}^{n} X_i X_i^{\mathsf{T}}, \qquad \tilde{X}_i=\bar{R}^{-1/2} X_i$$
where $\bar{R}$ denotes the mean of the n covariance matrices, $X_i X_i^{\mathsf{T}}$ is the covariance of an EEG signal sample segment $X_i$, $X_i^{\mathsf{T}}$ denotes the transpose of the EEG sample data, n denotes the total number of signal samples, and $\tilde{X}_i$ denotes the sample representation after center alignment.
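A minimal NumPy/SciPy sketch of this Euclidean alignment computation (the function name and the use of scipy.linalg.fractional_matrix_power are illustrative choices, not prescribed by the patent):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def euclidean_alignment(trials: np.ndarray) -> np.ndarray:
    """Align EEG trials of one subject so their mean covariance becomes the identity.

    trials : array of shape (n_trials, n_channels, n_samples)
    returns: aligned trials of the same shape
    """
    # Mean of the n per-trial covariance matrices X_i X_i^T
    covs = np.array([x @ x.T for x in trials])
    r_bar = covs.mean(axis=0)
    # Whitening matrix R_bar^{-1/2}
    r_inv_sqrt = fractional_matrix_power(r_bar, -0.5)
    # Center-aligned samples: X_tilde_i = R_bar^{-1/2} X_i
    return np.array([r_inv_sqrt @ x for x in trials])
```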
Step S33: as the network depth increases, the feature representations of different subjects become more specific, so the invention adds a domain adaptation module to achieve finer-grained domain adaptation of the output features of the feature extractor.
An adaptation layer is added between the output features of the feature extractors for the source-domain and target-domain data, and a maximum mean discrepancy (MMD) metric is used at this layer to measure the difference between the two feature outputs. The multi-kernel maximum mean discrepancy (MK-MMD) maps the source-domain and target-domain data into a reproducing kernel Hilbert space through several Gaussian kernel functions and measures the distance between the two distributions p and q in that space. The kernel family K defined by multiple kernels is described by the following formula:
$$\mathcal{K}\triangleq\Big\{k=\sum_{u=1}^{m}\beta_u k_u : \sum_{u=1}^{m}\beta_u=1,\ \beta_u\ge 0,\ \forall u\Big\}$$
where $\beta_u$ is the weight contributed by each Gaussian kernel (kernels that contribute more receive larger weights and kernels that contribute less receive smaller weights), $k_u$ is the u-th Gaussian kernel, m is the number of Gaussian kernels, and k is the combination of the different kernels $\{k_u\}$; this kernel is added to the network loss for continued training. MK-MMD can be expressed by the following formula:
$$d_k^2(p,q)=\big\|E_p[\phi(x^s)]-E_q[\phi(x^t)]\big\|_{\mathcal{H}_k}^2$$
where $\|\cdot\|_{\mathcal{H}_k}$ denotes the distance in the reproducing kernel Hilbert space $\mathcal{H}_k$, $\phi(\cdot)$ denotes the mapping of the source-domain data $D_s$ and the target-domain data $D_t$ into the reproducing kernel Hilbert space, and $E_p$, $E_q$ denote the mathematical expectations of the feature outputs of the source-domain data and the target-domain data, respectively.
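A compact PyTorch sketch of such a multi-kernel MMD estimate; the number of kernels, the bandwidth schedule, and the equal kernel weights (beta_u = 1/m) are assumptions made for illustration:

```python
import torch

def mk_mmd(source: torch.Tensor, target: torch.Tensor,
           num_kernels: int = 5, kernel_mul: float = 2.0) -> torch.Tensor:
    """Multi-kernel MMD between source and target feature batches of shape (n, d)."""
    total = torch.cat([source, target], dim=0)
    # Pairwise squared Euclidean distances between all feature vectors
    dists = torch.cdist(total, total, p=2.0) ** 2
    # Base bandwidth from the mean distance, then a geometric family of Gaussian bandwidths
    bandwidth = dists.detach().mean()
    bandwidths = [bandwidth * (kernel_mul ** (i - num_kernels // 2)) for i in range(num_kernels)]
    # Equal-weight kernel combination: k = (1/m) * sum_u k_u
    kernels = sum(torch.exp(-dists / bw) for bw in bandwidths) / num_kernels
    n = source.size(0)
    k_ss = kernels[:n, :n]      # source-source
    k_tt = kernels[n:, n:]      # target-target
    k_st = kernels[:n, n:]      # source-target
    return k_ss.mean() + k_tt.mean() - 2 * k_st.mean()
```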
The whole deep transfer learning model consists of two main parts: on the one hand, a Euclidean alignment method is added to the shallow network for parameter sharing; on the other hand, domain adaptation is achieved in the deep network using the maximum mean discrepancy (MMD), thereby minimizing domain shift.
Step S34: finally, update the network weights with the Adam optimization algorithm.
The invention has the following beneficial effects:
The invention constructs an EEGNet-Attention-ResNet model in which EEGNet, the attention mechanisms and ResNet are arranged in series. EEGNet inherits the advantages of CNNs while overcoming the limitation that a single CNN cannot handle signals from different experimental paradigms, so more accurate frequency-domain features can be obtained. By introducing the dual attention mechanism and the ResNet residual network, features are extracted and fused so that the EEG classification result fully accounts for both time-domain and space-domain factors and is more accurate. A deep transfer learning module is introduced, and model adaptation is performed by fine-tuning and sharing parameters in the shallow network and by domain adaptation in the deep network, so the constructed deep transfer learning model can complete classification tasks for different subjects well. This furthers the understanding of how the human brain operates and has important practical significance in medical health and biomedical engineering.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a flow chart of a method of classifying MI-EEG signals according to the invention.
Fig. 2 is a diagram of an attention module architecture according to the present invention.
Detailed Description
The present invention will be further described with reference to the drawings and the specific embodiments thereof in order to make the objects, technical solutions and advantages of the present invention more apparent.
Referring to fig. 1, which is the overall flow chart of the invention, the invention provides a motor imagery electroencephalogram signal classification method based on deep transfer learning. The overall flow is: data acquisition and preprocessing; training the classification model with EEGNet, the attention mechanism, and the ResNet residual network; and transfer learning of the target-domain data based on Euclidean alignment and the maximum mean discrepancy (MMD) metric.
Referring to fig. 2, in the dual self-attention mechanism of the invention a channel attention module and a spatial attention module learn the important information of channels and spatial positions respectively and suppress non-critical information, improving the network's perception of critical targets and the classification accuracy.
As shown in fig. 1, a motor imagery electroencephalogram signal classification method based on deep transfer learning comprises the following steps:
step S1: a motor imagery data set is acquired and the data set is preprocessed.
Step S11: assume that EEG data from different subjects are treated as different domains. The labeled source domain is denoted $D_s$ and the unlabeled target domain is denoted $D_t$; each sample $X \in \mathbb{R}^{n \times m}$ represents multi-channel EEG data with n channels and m sample points, and $y_i^s$ denotes the source label of the i-th sample. Based on a deep transfer learning model, knowledge can be extracted from the source domain $D_s$ to strengthen prediction and classification on the target domain $D_t$ and to reduce the distribution shift between the data domains.
Step S12: preprocess the electroencephalogram data set from step S11 with a band-pass filter to remove artifacts from the EEG signals and obtain preprocessed electroencephalogram data.
Step S13: divide the electroencephalogram data set preprocessed in step S12 into a training set and a testing set.
Step S2: feature extraction and classification.
The model is based on EEGNet; EEGNet inherits the advantages of CNNs while overcoming the limitation that a single CNN cannot handle signals from different experimental paradigms, so more accurate frequency-domain features can be obtained. By introducing a dual attention mechanism, comprising a channel attention mechanism and a spatial attention mechanism, together with a ResNet residual network, a feature extraction and classification model for EEG signals is built. The built model consists of a source-domain data input layer and a feature extraction and classification output layer, and is constructed according to the following steps:
Step S21: a dual attention mechanism, namely spatial and channel attention, is added on top of the EEGNet neural network layers, as shown in figure 2. Because EEG signals carry abundant information across channels and space, extracting more of this useful information can greatly improve classification accuracy.
Step S211: first, the EEG features extracted by the EEGNet network are taken as input, the importance of the channels in the EEG signal is learned by the ECA module, and the learned channel weights are multiplied with the input features to obtain the channel attention features. The ECA channel attention module first aggregates the convolutional features with a global average pooling layer to obtain a feature vector, then adaptively determines the kernel size K and uses a one-dimensional convolution to learn the channel weights, completing cross-channel information interaction. Meanwhile, the module uses a local cross-channel interaction strategy without dimensionality reduction, effectively avoiding its adverse effects.
Step S212: then, the channel attention output features are taken as input and sent to the CBAM module to learn the importance of spatial positions, and the learned spatial position weights are multiplied with the channel attention features to obtain the final result. The CBAM attention module first performs maximum pooling and average pooling on the feature information, then concatenates the resulting feature maps and applies a convolution, normalizes the weights with a Sigmoid function, and finally multiplies the weights with the input feature map to obtain the final result. By learning spatial attention weights, the network focuses on important feature information, suppresses redundant and non-critical information, and improves the performance and generalization ability of the whole network.
The weights of the dual attention mechanism are feature-adaptive; they strengthen the relevance among channels and the importance of spatial positions, suppress non-critical information, and improve the representation of the EEG data and the perception ability of the network.
Step S22: a ResNet residual network is added on the basis of step S21. As the number of network layers grows, the recognition rate can drop sharply, the consumption of computing resources increases, and gradients may vanish or explode. By introducing the ResNet residual network with shortcut connections, the network only needs to learn the residual mapping between the feature map extracted by the attention mechanism and the output rather than the complete mapping, which alleviates gradient vanishing and network degradation.
Step S23: the model extracts features of the EEG signals, which are classified through the fully connected layer.
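To illustrate how steps S21-S23 fit together, the sketch below composes a simplified temporal/spatial convolution stem, used here only as a stand-in for the EEGNet backbone rather than its exact architecture, with the ECAAttention, SpatialAttention, and ResidualBlock sketches given earlier; all layer sizes and hyper-parameters are assumptions:

```python
import torch
import torch.nn as nn

class EEGAttentionResNet(nn.Module):
    """Simplified EEGNet-Attention-ResNet-style classifier (illustrative only)."""
    def __init__(self, n_channels: int = 22, n_classes: int = 4, n_filters: int = 16):
        super().__init__()
        self.stem = nn.Sequential(                        # stand-in for the EEGNet temporal + spatial filters
            nn.Conv2d(1, n_filters, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(n_filters),
            nn.Conv2d(n_filters, n_filters, (n_channels, 1), groups=n_filters, bias=False),
            nn.BatchNorm2d(n_filters),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
        )
        self.channel_attn = ECAAttention(n_filters)       # step S211
        self.spatial_attn = SpatialAttention()            # step S212
        self.res_block = ResidualBlock(n_filters)         # step S22
        self.classifier = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_classes))  # step S23

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_channels, n_samples)
        feats = self.stem(x)
        feats = self.spatial_attn(self.channel_attn(feats))
        feats = self.res_block(feats)
        return self.classifier(feats)
```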
Step S24: the network weights are updated with the Adam optimization algorithm.
Step S25: taking the above steps as one round of cross-validation, repeat steps S21-S24.
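A hedged sketch of the per-fold supervised training implied by steps S23-S25; the optimizer hyper-parameters, loss function, number of epochs, and the make_folds helper are hypothetical placeholders, not details from the patent:

```python
import torch
import torch.nn as nn

def train_one_fold(model, train_loader, epochs: int = 100, lr: float = 1e-3):
    """One cross-validation fold: supervised training of the classification model with Adam."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)   # step S24: Adam weight updates
    model.train()
    for _ in range(epochs):
        for eeg, labels in train_loader:                       # eeg: (B, 1, channels, samples)
            optimizer.zero_grad()
            loss = criterion(model(eeg), labels)
            loss.backward()
            optimizer.step()
    return model

# Step S25: repeat over folds (make_folds is a hypothetical helper returning DataLoaders)
# for train_loader, test_loader in make_folds(dataset, n_folds=5):
#     model = EEGAttentionResNet()
#     train_one_fold(model, train_loader)
```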
Step S3: model adaptation, namely constructing a deep transfer learning model so that the classification model for source-domain EEG signals can be applied to the target-domain EEG signals, specifically comprising the following steps:
Step S31: first, the preprocessed channel data of the source domain and the target domain are input.
Step S32: in the shallower network layers, features are usually general across domains. Alignment of these general features is achieved with a Euclidean alignment method: a feature alignment module, namely the Euclidean alignment module, is added after the second convolution filter of the EEGNet network, so that the model trained in step S2 aligns the data and shares the shallow-layer parameters of the source-domain data with those of the target-domain data. The Euclidean alignment is computed as follows:
$$\bar{R}=\frac{1}{n}\sum_{i=1}^{n} X_i X_i^{\mathsf{T}}, \qquad \tilde{X}_i=\bar{R}^{-1/2} X_i$$
where $\bar{R}$ denotes the mean of the n covariance matrices, $X_i X_i^{\mathsf{T}}$ is the covariance of an EEG signal sample segment $X_i$, $X_i^{\mathsf{T}}$ denotes the transpose of the EEG sample data, n denotes the total number of signal samples, and $\tilde{X}_i$ denotes the sample representation after center alignment.
Step S33: as the network depth increases, the feature representations of different subjects become more specific, so the invention applies finer-grained domain adaptation to the output features of the feature extractor.
An adaptation layer is added between the output features of the feature extractors for the source-domain and target-domain data, and a maximum mean discrepancy (MMD) metric is used at this layer to measure the difference between the two feature outputs. The multi-kernel maximum mean discrepancy (MK-MMD) maps the source-domain and target-domain data into a reproducing kernel Hilbert space through several Gaussian kernel functions and measures the distance between the two distributions p and q in that space. The kernel family K defined by multiple kernels is described by the following formula:
$$\mathcal{K}\triangleq\Big\{k=\sum_{u=1}^{m}\beta_u k_u : \sum_{u=1}^{m}\beta_u=1,\ \beta_u\ge 0,\ \forall u\Big\}$$
where $\beta_u$ is the weight contributed by each Gaussian kernel (kernels that contribute more receive larger weights and kernels that contribute less receive smaller weights), $k_u$ is the u-th Gaussian kernel, m is the number of Gaussian kernels, and k is the combination of the different kernels $\{k_u\}$; this kernel is added to the network loss for continued training. MK-MMD can be expressed by the following formula:
$$d_k^2(p,q)=\big\|E_p[\phi(x^s)]-E_q[\phi(x^t)]\big\|_{\mathcal{H}_k}^2$$
where $\|\cdot\|_{\mathcal{H}_k}$ denotes the distance in the reproducing kernel Hilbert space $\mathcal{H}_k$, $\phi(\cdot)$ denotes the mapping of the source-domain data $D_s$ and the target-domain data $D_t$ into the reproducing kernel Hilbert space, and $E_p$, $E_q$ denote the mathematical expectations of the feature outputs of the source-domain data and the target-domain data, respectively.
The whole deep transfer learning model mainly comprises two parts: the Euclidean alignment method is used in the shallow network for parameter sharing, and the maximum mean discrepancy (MMD) is used in the deep network to achieve domain adaptation, thereby minimizing domain shift.
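The combined objective described here can be sketched as a cross-entropy loss on labelled source data plus an MK-MMD penalty between the adaptation-layer features of the source and target batches; the trade-off weight lambda and the return_features interface of the model are assumptions, and mk_mmd refers to the sketch given above:

```python
import torch
import torch.nn as nn

def transfer_step(model, optimizer, source_eeg, source_labels, target_eeg, lam: float = 1.0):
    """One deep-transfer update: source cross-entropy + MK-MMD between deep features."""
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    # Hypothetical interface: the model exposes adaptation-layer features alongside the logits
    src_logits, src_feats = model(source_eeg, return_features=True)
    _, tgt_feats = model(target_eeg, return_features=True)
    loss = criterion(src_logits, source_labels) + lam * mk_mmd(src_feats, tgt_feats)
    loss.backward()
    optimizer.step()        # step S34: Adam weight update
    return loss.item()
```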
Step S34: finally, the network weights are updated with the Adam optimization algorithm.
Step S35: taking the above steps as one round of cross-validation, repeat steps S31-S34.

Claims (2)

1. A motor imagery electroencephalogram signal classification method based on deep transfer learning, characterized by comprising the following steps:
step S1: acquiring brain electrical signals and preprocessing brain electrical signal data;
the step S1 of collecting the brain electrical signals and preprocessing the brain electrical signal data comprises the following steps:
step S11: acquiring a motor imagery electroencephalogram data set, and dividing electroencephalogram data of different subjects into a source domain and a target domain;
Step S12: preprocessing the electroencephalogram data set from step S11 with a band-pass filter to remove artifacts from the EEG signals and obtain preprocessed electroencephalogram data;
Step S13: dividing the electroencephalogram data set preprocessed in the step S12 into a training set and a testing set;
Step S2: taking the EEGNet network as the basic framework and adding a dual attention mechanism, namely spatial and channel attention models, together with a ResNet residual network, to construct a motor imagery electroencephalogram signal classification network; the network extracts features from the source-domain data, classifies them through a fully connected layer, updates the weights with the Adam optimization algorithm, and obtains and outputs the model of the classification result;
Step S21: on the basis of the EEGNet neural network, adding channel and spatial attention mechanisms in series, namely an ECA module and a CBAM module, to enhance the network model's ability to extract the relevant important channels and spatial information of the electroencephalogram signals;
Step S22: on the basis of step S21, adding a ResNet residual network in series to build the EEGNet-Attention-ResNet model;
Step S23: extracting features of the preprocessed source-domain electroencephalogram data through the EEGNet-Attention-ResNet model;
Step S24: classifying the electroencephalogram features extracted in step S23 through the fully connected layer;
step S25: updating the weight through an Adam optimization algorithm;
step S3: constructing a model-adapted deep transfer learning model;
step S31: constructing a shallow model adaptation module and aligning the general features of the electroencephalogram signals of different subjects based on Euclidean alignment;
Step S32: constructing a deep model adaptation module, measuring the difference between the output features of the source-domain and target-domain data with the maximum mean discrepancy (MMD) metric, and achieving domain adaptation of the deep features of different subjects.
2. The motor imagery electroencephalogram signal classification method based on deep transfer learning according to claim 1, characterized in that the model-adapted deep transfer learning model in step S3 is constructed according to the following steps:
step S31: using the EEGNet-Attention-ResNet model, constructing a shallow model adaptation module, aligning the general features of the electroencephalogram signals of different subjects based on Euclidean alignment, performing model adaptation, constructing the deep transfer learning model, and sharing the parameters of the general feature layers;
The Euclidean alignment module is added after the temporal and spatial convolution filters, and the shallow-layer parameters of the electroencephalogram signal classification model trained in step S23 of claim 1 are shared; the Euclidean alignment is computed as follows:
$$\bar{R}=\frac{1}{n}\sum_{i=1}^{n} X_i X_i^{\mathsf{T}}, \qquad \tilde{X}_i=\bar{R}^{-1/2} X_i$$
where $\bar{R}$ denotes the mean of the n covariance matrices, $X_i X_i^{\mathsf{T}}$ is the covariance of an EEG signal sample segment $X_i$, $X_i^{\mathsf{T}}$ denotes the transpose of the EEG sample data, n denotes the total number of signal samples, and $\tilde{X}_i$ denotes the sample representation after center alignment;
step S32: using the EEGNet-Attention-ResNet model, constructing a deep model adaptation module, measuring the difference between the output features of the source-domain and target-domain data with the maximum mean discrepancy (MMD) metric, and achieving domain adaptation of the deep features of different subjects;
Based on the maximum mean discrepancy (MMD) metric function, a domain adaptation module is added after the feature extractor built in step S23 of claim 1 to measure the difference between the output features of the source-domain and target-domain data; the MMD maps the electroencephalogram data into a reproducing kernel Hilbert space through several Gaussian kernel functions and measures, in that space, the distance between the two feature-output distributions p and q, which represent the feature outputs of the source-domain data and the target-domain data respectively; the kernel function K defined by multiple kernels is expressed as follows:
$$\mathcal{K}\triangleq\Big\{k=\sum_{u=1}^{m}\beta_u k_u : \sum_{u=1}^{m}\beta_u=1,\ \beta_u\ge 0,\ \forall u\Big\}$$
where $\beta_u$ is the weight contributed by each Gaussian kernel (kernels that contribute more receive larger weights and kernels that contribute less receive smaller weights), $k_u$ is the u-th Gaussian kernel, m is the number of Gaussian kernels, and k is the combination of the different kernels $\{k_u\}$; this kernel is added to the network loss for continued training; MK-MMD is expressed by the following formula:
$$d_k^2(p,q)=\big\|E_p[\phi(x^s)]-E_q[\phi(x^t)]\big\|_{\mathcal{H}_k}^2$$
where $\|\cdot\|_{\mathcal{H}_k}$ denotes the distance in the reproducing kernel Hilbert space $\mathcal{H}_k$, $\phi(\cdot)$ denotes the mapping of the feature outputs of the source-domain data $D_s$ and the target-domain data $D_t$ into the reproducing kernel Hilbert space, and $E_p$, $E_q$ denote the mathematical expectations of the feature outputs of the source-domain data and the target-domain data, respectively.
CN202410635143.3A 2024-05-22 2024-05-22 Motor imagery electroencephalogram signal classification method based on deep migration learning Active CN118228129B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410635143.3A CN118228129B (en) 2024-05-22 2024-05-22 Motor imagery electroencephalogram signal classification method based on deep migration learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410635143.3A CN118228129B (en) 2024-05-22 2024-05-22 Motor imagery electroencephalogram signal classification method based on deep migration learning

Publications (2)

Publication Number Publication Date
CN118228129A CN118228129A (en) 2024-06-21
CN118228129B true CN118228129B (en) 2024-07-16

Family

ID=91501173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410635143.3A Active CN118228129B (en) 2024-05-22 2024-05-22 Motor imagery electroencephalogram signal classification method based on deep migration learning

Country Status (1)

Country Link
CN (1) CN118228129B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113303814A (en) * 2021-06-13 2021-08-27 大连理工大学 Single-channel ear electroencephalogram automatic sleep staging method based on deep transfer learning
CN115105076A (en) * 2022-05-20 2022-09-27 中国科学院自动化研究所 Electroencephalogram emotion recognition method and system based on dynamic convolution residual multi-source migration

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113627518B (en) * 2021-08-07 2023-08-08 福州大学 Method for realizing neural network brain electricity emotion recognition model by utilizing transfer learning
CN117332300A (en) * 2023-10-16 2024-01-02 安徽大学 Motor imagery electroencephalogram classification method based on self-attention improved domain adaptation network
CN117520891A (en) * 2023-11-08 2024-02-06 山东大学 Motor imagery electroencephalogram signal classification method and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113303814A (en) * 2021-06-13 2021-08-27 大连理工大学 Single-channel ear electroencephalogram automatic sleep staging method based on deep transfer learning
CN115105076A (en) * 2022-05-20 2022-09-27 中国科学院自动化研究所 Electroencephalogram emotion recognition method and system based on dynamic convolution residual multi-source migration

Also Published As

Publication number Publication date
CN118228129A (en) 2024-06-21

Similar Documents

Publication Publication Date Title
Aznan et al. Simulating brain signals: Creating synthetic eeg data via neural-based generative models for improved ssvep classification
CN113693613B (en) Electroencephalogram signal classification method, electroencephalogram signal classification device, computer equipment and storage medium
CN109375776B (en) Electroencephalogram action intention recognition method based on multi-task RNN model
CN114027786B (en) Sleep breathing disorder detection method and system based on self-supervision type memory network
Gao et al. Convolutional neural network and riemannian geometry hybrid approach for motor imagery classification
CN117150346A (en) EEG-based motor imagery electroencephalogram classification method, device, equipment and medium
CN117574059A (en) High-resolution brain-electrical-signal deep neural network compression method and brain-computer interface system
CN116522106A (en) Motor imagery electroencephalogram signal classification method based on transfer learning parallel multi-scale filter bank time domain convolution
CN112307996B (en) Fingertip electrocardio identity recognition device and method
CN114707539A (en) Hand joint angle estimation method, hand joint angle estimation device, storage medium and equipment
CN113128384B (en) Brain-computer interface software key technical method of cerebral apoplexy rehabilitation system based on deep learning
Khalkhali et al. Low latency real-time seizure detection using transfer deep learning
Nguyen et al. A novel surface electromyographic gesture recognition using discrete cosine transform-based attention network
CN117473303A (en) Personalized dynamic intention feature extraction method and related device based on electroencephalogram signals
CN118228129B (en) Motor imagery electroencephalogram signal classification method based on deep migration learning
CN117034030A (en) Electroencephalo-gram data alignment algorithm based on positive and negative two-way information fusion
CN117150292A (en) Incremental learning-based gesture recognition model training method
CN116236209A (en) Method for recognizing motor imagery electroencephalogram characteristics of dynamics change under single-side upper limb motion state
CN114428555B (en) Electroencephalogram movement intention recognition method and system based on cortex source signals
Singh et al. Motor imagery classification based on subject to subject transfer in Riemannian manifold
CN116343324A (en) sEMG processing method and system for fusing MEMD and Hilbert space filling curves
Ngo et al. EEG Signal-Based Eye Blink Classifier Using Convolutional Neural Network for BCI Systems
Wang et al. Multi-channel LFP recording data compression scheme using Cooperative PCA and Kalman Filter
CN118626940B (en) Bimodal signal fusion method based on self-adaptive space-time convolution attention network
CN117390543B (en) FA-CNN-based motor imagery electroencephalogram signal processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant