CN114757273A - Electroencephalogram signal classification method based on collaborative contrast regularization average teacher model - Google Patents

Electroencephalogram signal classification method based on collaborative contrast regularization average teacher model

Info

Publication number
CN114757273A
CN114757273A
Authority
CN
China
Prior art keywords
eeg
network
eeg data
collaborative
contrast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210362532.4A
Other languages
Chinese (zh)
Inventor
杭文龙
李增光
殷明波
梁爽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Tech University
Original Assignee
Nanjing Tech University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Tech University filed Critical Nanjing Tech University
Priority to CN202210362532.4A priority Critical patent/CN114757273A/en
Publication of CN114757273A publication Critical patent/CN114757273A/en
Pending legal-status Critical Current

Classifications

    • G06F18/2415 — Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus false rejection rate
    • G06F18/213 — Feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods
    • G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/217 — Validation; performance evaluation; active pattern learning techniques
    • A61B5/369 — Electroencephalography [EEG]
    • A61B5/7207 — Signal processing for noise prevention, reduction or removal of noise induced by motion artifacts
    • A61B5/725 — Details of waveform analysis using specific filters, e.g. Kalman or adaptive filters
    • A61B5/7257 — Details of waveform analysis using Fourier transforms
    • A61B5/7267 — Classification of physiological signals or data involving training the classification device
    • G06N3/045 — Combinations of networks
    • G06N3/047 — Probabilistic or stochastic networks
    • G06N3/084 — Backpropagation, e.g. using gradient descent

Abstract

The invention provides an electroencephalogram (EEG) signal classification method based on a collaborative contrast regularization average teacher model, which comprises the following steps. First, the EEG signal is preprocessed, and a two-stage EEG data augmentation method generates EEG data under a new view, including EEG data under another view in the time-frequency domain. The student and teacher networks of the average teacher model then learn EEG data features from the different views. A consistency loss between the student and teacher networks encourages consistent predictions for the EEG data of different views, a collaborative contrast loss encourages consistent EEG data structure across views, and these losses are finally weighted and summed with the cross-entropy loss to optimize the network parameters. The proposed two-stage EEG data augmentation method captures time-domain and time-frequency-domain variation factors simultaneously, and the proposed collaborative contrast regularization average teacher model learns both data-level information within each view and structure-consistency information across views, improving the robustness, discriminability, and generalization of the model.

Description

Electroencephalogram signal classification method based on collaborative contrast regularization average teacher model
Technical Field
The invention relates to an electroencephalogram signal identification method based on a collaborative contrast regularization average teacher model, and belongs to the field of electroencephalogram signal identification.
Background
A Brain-Computer Interface (BCI) creates a direct connection between a human or animal brain and an external device, allowing information to be exchanged between the brain and the device. BCI systems are now widely used to help patients with stroke, spinal cord injury, and similar conditions control external devices and improve their quality of life. Electroencephalography (EEG) is a bioelectrical phenomenon arising from ionic information transfer between populations of brain cells, and is a general reflection of the electrophysiological activity of neurons at the surface of the cerebral cortex or scalp. Compared with other types of brain signals, EEG is convenient to acquire and has high spatio-temporal resolution, giving it unique application value in BCI systems. Given large amounts of EEG training data, deep learning methods have greatly improved EEG recognition performance. However, EEG in some BCI paradigms (such as motor imagery EEG) commonly suffers from low stationarity and low signal-to-noise ratio, and large training sets are difficult to obtain because data acquisition is time-consuming and labor-intensive. EEG data augmentation methods can alleviate this shortage of training data. EEG contains rich temporal, spectral, and spatial information and can be represented in multiple domains (such as the time domain and the time-frequency domain). Existing EEG data augmentation methods, however, usually generate new augmented data in a single domain, ignoring the variation factors of EEG data across multiple domains; the resulting augmented EEG data is of limited quality, which poses a challenge to building a robust EEG recognition model.
In addition, most current deep learning methods consider only data-level information of the EEG under different views and neglect the view-invariant structural information of the EEG data, making it difficult to break through the performance bottleneck of EEG recognition models.
Aiming at these problems and building on existing deep learning methods, the invention first improves the EEG data augmentation mechanism by establishing a novel two-stage EEG data augmentation mechanism that produces augmented representations of the EEG data in multiple domains, so that the generated EEG data captures the variation factors of multiple domains simultaneously and the generalization performance of existing models is enhanced. The invention then constructs a collaborative constraint across multi-view EEG data structures: contrast regularization learns intra-class similarity and inter-class difference information at the EEG data structure level, while the collaborative constraint encourages consistency of the EEG data structure across views, further improving the model's EEG classification performance.
Disclosure of Invention
The invention provides an EEG signal identification method based on a collaborative contrast regularization average teacher model. A two-stage EEG data augmentation method generates EEG data that captures the variation factors of multiple domains simultaneously. On this basis, an average teacher network model containing a student network and a teacher network learns deep EEG features under different views: a consistency loss function encourages the student and teacher networks to make consistent predictions for the EEG data of different views, and a collaborative contrast loss function encourages consistency of the EEG data structure across views. Finally, the cross-entropy loss function, the consistency loss function, and the collaborative contrast loss function are weighted and summed to form the final loss function used to optimize the network parameters.
The invention adopts the following technical scheme for solving the problems:
an electroencephalogram signal identification method based on a collaborative contrast regularization average teacher model comprises the following steps:
step 1: EEG data is acquired.
Step 2: Preprocess the EEG data, including band-pass filtering and artifact removal, and select a specific time period and frequency band to obtain the EEG dataset $X = \{(x_n, y_n)\}_{n=1}^{N}$, where $x_n \in \mathbb{R}^{c \times d}$ denotes the $n$-th EEG sample, $c$ and $d$ denote the number of electrode channels and of time sampling points of the EEG data, and $y_n \in \{0, 1, 2, 3\}$ is the label of the $n$-th EEG sample.
Step 3: Segment and recombine the EEG data with different data augmentation modes to obtain augmented EEG data under different views, $\tilde{X}^{(1)}$ and $\tilde{X}^{(2)}$, where $\tilde{X}^{(2)}$ denotes the EEG data obtained with the two-stage augmentation mode.
Step 4: Using a dedicated EEG decoding network as the backbone of the average teacher model, feed the augmented EEG data $\tilde{X}^{(2)}$ and the augmented EEG data $\tilde{X}^{(1)}$ into the student network and the teacher network of the average teacher model, respectively.
Step 5: Compute the classification loss from the true labels and the probability output obtained by passing the student network's output from step 4 through the softmax function.
Step 6: Pass the outputs of the student and teacher networks from step 4 through the softmax function to obtain their respective probability outputs, and compute the consistency loss between the student network and the teacher network.
Step 7: Reduce the dimensionality of the multi-view EEG features output in step 4 with a projection network, then compute the collaborative contrast loss between the dimension-reduced features of the two views.
Step 8: Form the final loss function as a weighted sum of the classification loss, the consistency loss, and the collaborative contrast loss, and optimize the whole network with the back-propagation algorithm.
Preferably, the proposed EEG data augmentation method comprises two stages: first, a linear interpolation technique is introduced into the segmentation-and-recombination method in the EEG time domain; then, the time-domain augmented EEG data is further converted to its time-frequency representation by the short-time Fourier transform, the EEG signal is segmented and recombined in the time-frequency domain, and the result is converted back into a time-domain EEG signal by the inverse short-time Fourier transform and used as the augmented EEG data.
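The time-frequency stage above relies on the short-time Fourier transform and its inverse acting as a near-lossless round trip, so that segment recombination in the time-frequency domain yields a valid time-domain EEG signal. A minimal sketch of that round trip, assuming SciPy's `stft`/`istft`; the 250 Hz sampling rate and window length are illustrative assumptions, not values from the patent:

```python
# Sketch: round-trip one EEG channel through the time-frequency domain,
# as the two-stage augmentation does before/after segment recombination.
import numpy as np
from scipy.signal import stft, istft

fs = 250                       # assumed EEG sampling rate (Hz)
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)  # one synthetic EEG channel, 4 s at 250 Hz

# Forward STFT: time-domain signal -> time-frequency representation Zxx
f, t, Zxx = stft(x, fs=fs, nperseg=64)

# (Here the augmentation would cut Zxx into non-overlapping segments and
# recombine same-position segments across same-label trials.)

# Inverse STFT: back to a time-domain signal usable as augmented data
_, x_rec = istft(Zxx, fs=fs, nperseg=64)

print(np.max(np.abs(x - x_rec[:len(x)])))  # reconstruction error is tiny
```

With the default Hann window and 50% overlap the constant-overlap-add condition holds, so the inverse transform recovers the original samples to near machine precision.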
Preferably, the collaborative contrast regularization average teacher model takes a collaborative contrast regularization mechanism as its core module, specifically expressed as:

$$\mathcal{L}_{cc} = \frac{1}{2}\Big(\ell\big(z^{s}, z^{t}\big) + \ell\big(z^{t}, z^{s}\big)\Big)$$

where

$$\ell\big(z, \tilde{z}\big) = \sum_{i} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp\big(z_i \cdot \tilde{z}_p / \tau\big)}{\sum_{a \in A(i)} \exp\big(z_i \cdot \tilde{z}_a / \tau\big)}$$

Here $z^{s}$ and $z^{t}$ respectively denote the features of the EEG data under the two augmentation modes learned by the student and teacher networks, after passing through the projection network and a normalization operation; $A(i)$ denotes all samples except the $i$-th; $P(i) \equiv \{p \in A(i) : y_p = y_i\}$ denotes the set of all samples sharing the label $y_i$; and $\tau$ denotes a scale parameter.
Beneficial effects:
1. The invention introduces a two-stage EEG data augmentation method. In the first stage, a linear interpolation technique in the time domain of the EEG signal overcomes the high-frequency noise that the traditional segmentation-and-recombination method introduces into the EEG data; in the second stage, the augmented EEG data from the first stage is further augmented by segmentation and recombination in the time-frequency domain of the EEG signal. The finally generated EEG data captures time-domain and time-frequency-domain variation factors simultaneously, enhancing the robustness and generalization capability of the model.
2. At present, an average teacher model can learn only data-level information of the EEG. The invention introduces a collaborative contrast regularization loss into the average teacher model: on one hand, the contrast regularization constraint gives the normalized embeddings of the EEG data better intra-class compactness and inter-class separability, so that EEG data structure information is learned; on the other hand, a collaborative learning mechanism is designed that learns structure-consistency information of the EEG data across views, improving the discriminative learning capability of the model.
Drawings
FIG. 1 is a network frame diagram of an electroencephalogram signal identification method based on a collaborative contrast regularization average teacher model.
Detailed Description
The present invention is further explained below with reference to examples. The main implementation flow is as follows; the related network framework is shown in FIG. 1.
Step 1: EEG data is acquired.
Step 2: preprocessing EEG data, including band-pass filtering and artifact removing, selecting specific time period and frequency band to obtain EEG data
Figure BSA0000270493290000036
Wherein
Figure BSA0000270493290000037
The number of electrode channels and the number of time sampling points, y, representing the nth EEG data, c, d representing the EEG data, respectivelynE {0, 1, 2, 3} is the label to which the nth EEG data corresponds.
Step 3: Segment and recombine the EEG data according to two different data augmentation modes to obtain augmented EEG data under different views, $\tilde{X}^{(1)}$ and $\tilde{X}^{(2)}$. The specific EEG data augmentation is as follows:
1) The preprocessed EEG data is transformed from its time-domain representation to a time-frequency representation using the short-time Fourier transform. Each time-frequency EEG sample is divided into several non-overlapping segments; then, according to the label of the current EEG sample, other EEG samples with the same label are randomly selected, and a time-frequency representation of an augmented sample is obtained by randomly combining same-position segments of different samples. The newly generated EEG samples are converted back to a time-domain representation using the inverse short-time Fourier transform, forming augmented data corresponding to the EEG data, denoted here as $\tilde{X}^{(1)}$.
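The segmentation-and-recombination idea in 1) can be sketched as follows, applied directly to arrays for brevity (the patent applies it to the time-frequency representation); the trial count, shapes, and segment count `J` are illustrative assumptions:

```python
# Sketch of segmentation-and-recombination augmentation: each trial is
# rebuilt from same-position segments drawn at random from same-label trials.
import numpy as np

def segment_recombine(X, y, J=4, rng=None):
    """Recombine same-position segments across trials that share a label."""
    rng = np.random.default_rng() if rng is None else rng
    N, c, d = X.shape
    seg_len = d // J
    X_aug = np.empty_like(X)
    for i in range(N):
        same = np.flatnonzero(y == y[i])            # candidate same-label donors
        for j in range(J):
            donor = rng.choice(same)                # random same-label trial
            end = d if j == J - 1 else (j + 1) * seg_len
            s = slice(j * seg_len, end)
            X_aug[i, :, s] = X[donor, :, s]         # copy the j-th segment
    return X_aug

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3, 100))   # 8 trials, 3 channels, 100 samples
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
X_aug = segment_recombine(X, y, J=4, rng=rng)
print(X_aug.shape)  # -> (8, 3, 100), same shape as the input trials
```

Because every segment is copied from a trial with the same label, the class label of each augmented trial is preserved by construction.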
2) The preprocessed EEG data is divided into $J$ non-overlapping segments, and two segments are randomly selected from EEG samples with the same label for interpolation fitting, specifically:

$$\hat{x}_{ij} = \alpha\, x_{ij} + (1 - \alpha)\, x_{R_{ij},\, j}$$

where $x_{ij}$ denotes the $j$-th segment of the current EEG sample, $x_{R_{ij},\, j}$ denotes the $j$-th segment of a randomly selected EEG sample with the same label, $R_{ij}$ denotes a random integer drawn from $[1, N]$, and $\alpha$ is the interpolation fitting coefficient, set to 0.999. The new data $\hat{x}_i$ formed after interpolation fitting is then augmented again using method 1) above, yielding the two-stage augmented EEG data, denoted here as $\tilde{X}^{(2)}$.
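The interpolation-fitting step in 2) can be sketched as below. For brevity a single donor trial stands in for the per-segment random donors $R_{ij}$ (a simplification), and the shapes are illustrative assumptions; $\alpha = 0.999$ is the value stated above:

```python
# Sketch of the interpolation-fitting step: each segment of the current trial
# is mixed with the same-position segment of a same-label donor trial.
import numpy as np

def interpolate_segments(x_cur, x_donor, J=4, alpha=0.999):
    """Segment-wise mix: x_hat_j = alpha * x_cur_j + (1 - alpha) * x_donor_j."""
    d = x_cur.shape[-1]
    seg_len = d // J
    x_hat = np.empty_like(x_cur)
    for j in range(J):
        end = d if j == J - 1 else (j + 1) * seg_len
        s = slice(j * seg_len, end)
        x_hat[..., s] = alpha * x_cur[..., s] + (1 - alpha) * x_donor[..., s]
    return x_hat

rng = np.random.default_rng(0)
x_cur = rng.standard_normal((3, 100))    # current trial: 3 channels x 100 samples
x_donor = rng.standard_normal((3, 100))  # a randomly chosen same-label trial
x_hat = interpolate_segments(x_cur, x_donor)
# With alpha = 0.999 the augmented trial stays very close to the original,
# which is why this stage avoids injecting high-frequency noise.
```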
Step 4: Using the EEG decoding network Shallow ConvNet as the backbone of the average teacher model, feed the augmented EEG data $\tilde{X}^{(2)}$ and the augmented EEG data $\tilde{X}^{(1)}$ into the student network and the teacher network of the average teacher model, respectively.
Step 5: Compute the classification loss from the true labels and the probability output obtained by passing the student network's output through the softmax function:

$$\mathcal{L}_{ce} = \frac{1}{B} \sum_{i=1}^{B} \mathrm{CE}\big(f_s(\tilde{x}_i^{(2)}),\, y_i\big)$$

where $\mathcal{L}_{ce}$ denotes the classification loss, $\tilde{x}_i^{(1)}$ and $\tilde{x}_i^{(2)}$ denote the augmented EEG data obtained with the two methods of step 3, $y_i$ denotes the true label of the $i$-th sample, and $f_s(\cdot)$ denotes the prediction (softmax probability output) of the student network.
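The classification loss of step 5 is standard cross-entropy over the student's softmax output; a minimal sketch on synthetic logits (the four classes mirror the labels $y_n \in \{0,1,2,3\}$ above):

```python
# Sketch of the classification loss: cross-entropy between the student
# network's softmax output and the true labels. Logits are synthetic.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def classification_loss(logits, labels):
    p = softmax(logits)
    # mean negative log-probability of the true class
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

logits = np.array([[4.0, 0.0, 0.0, 0.0],
                   [0.0, 4.0, 0.0, 0.0]])  # student outputs for 2 trials
labels = np.array([0, 1])                  # true labels
print(round(classification_loss(logits, labels), 4))  # -> 0.0535
```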
Step 6: Pass the outputs of the student and teacher networks from step 4 through the softmax function to obtain their respective probability outputs, and compute the consistency loss between the student network and the teacher network:

$$\mathcal{L}_{con} = \frac{1}{B} \sum_{i=1}^{B} \big\| f_s(\tilde{x}_i^{(2)}) - f_t(\tilde{x}_i^{(1)}) \big\|_2^{2}$$

where $\mathcal{L}_{con}$ denotes the consistency loss, $\tilde{x}_i^{(1)}$ and $\tilde{x}_i^{(2)}$ denote the augmented EEG data obtained with the two methods of step 3, and $f_s(\cdot)$ and $f_t(\cdot)$ respectively denote the predictions of the student network and the teacher network.
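The consistency loss of step 6 can be sketched as the mean squared difference between the two softmax outputs; the logits below are synthetic stand-ins for the student and teacher network outputs:

```python
# Sketch of the consistency loss: mean squared difference between the
# student's and teacher's softmax predictions on their augmented views.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def consistency_loss(student_logits, teacher_logits):
    ps, pt = softmax(student_logits), softmax(teacher_logits)
    return np.mean(np.sum((ps - pt) ** 2, axis=1))

s = np.array([[2.0, 0.0, 0.0, 0.0]])  # student logits for one trial
t = np.array([[2.0, 0.0, 0.0, 0.0]])  # teacher logits for the same trial
print(consistency_loss(s, t))  # -> 0.0 (identical predictions)
```

The loss is zero exactly when the two probability outputs agree, which is what drives the student and teacher toward consistent predictions across views.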
Step 7: Use two projection networks with the same structure to reduce the dimensionality of the multi-view EEG features learned in step 4 and extract deeper EEG features, then compute the collaborative contrast loss from the outputs of the two projection networks:

$$\mathcal{L}_{cc} = \frac{1}{2}\Big(\ell\big(z^{s}, z^{t}\big) + \ell\big(z^{t}, z^{s}\big)\Big)$$

$$\ell\big(z, \tilde{z}\big) = \sum_{i=1}^{B} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp\big(z_i \cdot \tilde{z}_p / \tau\big)}{\sum_{a \in A(i)} \exp\big(z_i \cdot \tilde{z}_a / \tau\big)}$$

where $z^{s}$ and $z^{t}$ denote the EEG features of the augmented EEG data in the current processing batch (containing $B$ EEG samples) learned by the student network and the teacher network, after passing through the projection network and being normalized; $A(i)$ denotes all samples of the current processing batch except the $i$-th; $P(i) \equiv \{p \in A(i) : y_p = y_i\}$ denotes the set of all positive samples sharing the label $y_i$; and $\tau$ denotes a scale parameter, set here to 0.07.
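One direction of the collaborative contrast loss of step 7 (student anchors scored against teacher projections) can be sketched as a supervised contrastive loss; the symmetric teacher-to-student direction is analogous. The embeddings below are synthetic, L2-normalized stand-ins for the projection-network outputs, with $\tau = 0.07$ as stated:

```python
# Sketch of one direction of the collaborative contrast loss: a supervised
# contrastive loss where positives are other samples with the same label.
import numpy as np

def sup_contrast(z_anchor, z_other, labels, tau=0.07):
    """Anchors z_anchor scored against z_other; positives share the label."""
    B = len(labels)
    sim = z_anchor @ z_other.T / tau                  # pairwise similarities
    loss, count = 0.0, 0
    for i in range(B):
        a_idx = [a for a in range(B) if a != i]       # A(i): all but i
        p_idx = [p for p in a_idx if labels[p] == labels[i]]  # P(i): positives
        if not p_idx:
            continue
        log_den = np.log(np.sum(np.exp(sim[i, a_idx])))
        loss += -np.mean([sim[i, p] - log_den for p in p_idx])
        count += 1
    return loss / count

rng = np.random.default_rng(0)
z_s = rng.standard_normal((6, 8)); z_s /= np.linalg.norm(z_s, axis=1, keepdims=True)
z_t = rng.standard_normal((6, 8)); z_t /= np.linalg.norm(z_t, axis=1, keepdims=True)
labels = np.array([0, 0, 0, 1, 1, 1])
print(sup_contrast(z_s, z_t, labels) > 0)  # loss is positive for random embeddings
```

Minimizing this term pulls same-label projections from the two views together and pushes different-label ones apart, which is the intra-class compactness / inter-class separability described above.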
Step 8: Form the final loss function as a weighted sum of the classification loss, the consistency loss, and the collaborative contrast loss:

$$\mathcal{L} = \mathcal{L}_{ce} + \lambda_1 \mathcal{L}_{con} + \lambda_2 \mathcal{L}_{cc}$$

where $\mathcal{L}_{ce}$, $\mathcal{L}_{con}$, and $\mathcal{L}_{cc}$ denote the classification, consistency, and collaborative contrast losses, and $\lambda_1$ and $\lambda_2$ denote the coefficients of the consistency loss function and the collaborative contrast loss function, set here to 1 and 0.001, respectively. The student network is optimized with the back-propagation algorithm, yielding the student network parameters $\theta_s$ after the $t$-th update; the teacher network parameters $\theta_t$ depend on the student network parameters $\theta_s$ and are updated by the exponential moving average method:

$$\theta_t \leftarrow \beta \cdot \theta_t + (1 - \beta) \cdot \theta_s \qquad (9)$$

where $\beta$ denotes the update parameter, set here to 0.999.
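The exponential-moving-average teacher update of step 8 can be sketched on toy parameter vectors, with $\beta = 0.999$ as stated; after $n$ updates toward a fixed student, the teacher parameter equals $1 - \beta^{n}$ of the gap closed:

```python
# Sketch of the exponential-moving-average teacher update:
# teacher <- beta * teacher + (1 - beta) * student, beta = 0.999.
import numpy as np

def ema_update(theta_t, theta_s, beta=0.999):
    return beta * theta_t + (1 - beta) * theta_s

theta_t = np.zeros(3)   # toy teacher parameters
theta_s = np.ones(3)    # toy student parameters after gradient steps
for _ in range(1000):   # repeated updates drift the teacher toward the student
    theta_t = ema_update(theta_t, theta_s)
print(theta_t[0])       # equals 1 - 0.999**1000, roughly 0.63
```

The large $\beta$ makes the teacher a slowly moving temporal average of the student, which is what stabilizes the consistency targets.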
The above description is only an embodiment of the present invention, and is not intended to limit the scope of the present invention, and all equivalent structures or equivalent flow transformations made by using the contents of the specification and the drawings, or directly or indirectly applied to the related technical fields, are included in the scope of the present invention.

Claims (7)

1. An electroencephalogram signal classification method based on a collaborative contrast regularization average teacher model is characterized by comprising the following steps:
step 1: acquiring EEG data;
step 2: preprocessing the EEG data, including band-pass filtering and artifact removal, and selecting a specific time period and frequency band to obtain the original EEG dataset $X = \{(x_n, y_n)\}_{n=1}^{N}$, wherein $x_n \in \mathbb{R}^{c \times d}$ denotes the $n$-th EEG sample, $c$ and $d$ respectively denote the number of electrode channels and of time sampling points of the EEG signal, and $y_n \in \{0, 1, 2, 3\}$ is the label of the $n$-th EEG sample;
step 3: segmenting and recombining the EEG data with different data augmentation modes to obtain augmented EEG data under different views, $\tilde{X}^{(1)}$ and $\tilde{X}^{(2)}$, wherein $\tilde{X}^{(2)}$ denotes the EEG data obtained with the two-stage augmentation mode;
step 4: using a dedicated EEG decoding network as the backbone of the average teacher model, respectively inputting the augmented EEG data $\tilde{X}^{(2)}$ and the augmented EEG data $\tilde{X}^{(1)}$ into the student network and the teacher network of the average teacher model;
step 5: computing the classification loss from the true labels and the probability output obtained by passing the student network's output from step 4 through the softmax function;
step 6: passing the outputs of the student and teacher networks of step 4 through the softmax function to obtain their respective probability outputs, and computing the consistency loss between the student network and the teacher network;
step 7: reducing the dimensionality of the multi-view EEG features output in step 4 with a projection network, and then computing the collaborative contrast loss between the dimension-reduced features of the two views;
step 8: forming the final loss function as a weighted sum of the classification loss, the consistency loss, and the collaborative contrast loss, and optimizing the whole network with a back-propagation algorithm.
2. The electroencephalogram signal classification method based on the collaborative contrast regularization average teacher model according to claim 1, wherein in step 3 the two augmentation modes mainly comprise the following:
1) transforming the preprocessed EEG data from its time-domain representation to a time-frequency representation using the short-time Fourier transform; dividing each time-frequency EEG sample into several non-overlapping segments, then randomly selecting other EEG samples with the same label according to the label of the current EEG sample, and obtaining a time-frequency representation of an augmented sample by randomly combining same-position segments of different samples; converting the newly generated EEG samples back to a time-domain representation using the inverse short-time Fourier transform, forming augmented data corresponding to the EEG data, denoted here as $\tilde{X}^{(1)}$;
2) dividing the preprocessed EEG data into $J$ non-overlapping segments and randomly selecting two segments from EEG samples with the same label for interpolation fitting, specifically:

$$\hat{x}_{ij} = \alpha\, x_{ij} + (1 - \alpha)\, x_{R_{ij},\, j}$$

wherein $x_{ij}$ denotes the $j$-th segment of the current EEG sample, $x_{R_{ij},\, j}$ denotes the $j$-th segment of a randomly selected EEG sample with the same label, $R_{ij}$ denotes a random integer drawn from $[1, N]$, and $\alpha$ is the interpolation fitting coefficient, set to 0.999; the new data $\hat{x}_i$ formed after interpolation fitting is augmented again with method 1) above, yielding the two-stage augmented data, denoted here as $\tilde{X}^{(2)}$.
3. The EEG classification method based on collaborative contrast regularization average teacher model according to claim 1, wherein in step 4, EEG decoding network Shallow ConvNet is used as the backbone network of the average teacher model.
4. The electroencephalogram signal classification method based on the collaborative contrast regularization average teacher model according to claim 1, wherein in step 5 the classification loss is computed from the true labels and the probability output obtained by passing the student network's output through the softmax function, specifically:

$$\mathcal{L}_{ce} = \frac{1}{B} \sum_{i=1}^{B} \mathrm{CE}\big(f_s(\tilde{x}_i^{(2)}),\, y_i\big)$$

wherein $\mathcal{L}_{ce}$ denotes the classification loss, $\tilde{x}_i^{(1)}$ and $\tilde{x}_i^{(2)}$ denote the augmented EEG data obtained with the two methods of step 3, $y_i$ denotes the true label of the $i$-th sample, and $f_s(\cdot)$ denotes the prediction of the student network.
5. The electroencephalogram signal classification method based on the collaborative contrast regularization average teacher model according to claim 1, wherein in step 6 the respective probability outputs obtained by passing the outputs of the student network and the teacher network of step 4 through the softmax function are used to compute the consistency loss between them, specifically:

$$\mathcal{L}_{con} = \frac{1}{B} \sum_{i=1}^{B} \big\| f_s(\tilde{x}_i^{(2)}) - f_t(\tilde{x}_i^{(1)}) \big\|_2^{2}$$

wherein $\mathcal{L}_{con}$ denotes the consistency loss, $\tilde{x}_i^{(1)}$ and $\tilde{x}_i^{(2)}$ denote the augmented EEG data obtained with the two methods of step 3, and $f_s(\cdot)$ and $f_t(\cdot)$ respectively denote the predictions of the student network and the teacher network.
6. The electroencephalogram signal classification method based on the collaborative contrast regularization average teacher model according to claim 1, wherein in step 7 the collaborative contrast loss is computed from the outputs of the two projection networks, specifically:

$$\mathcal{L}_{cc} = \frac{1}{2}\Big(\ell\big(z^{s}, z^{t}\big) + \ell\big(z^{t}, z^{s}\big)\Big)$$

$$\ell\big(z, \tilde{z}\big) = \sum_{i=1}^{B} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp\big(z_i \cdot \tilde{z}_p / \tau\big)}{\sum_{a \in A(i)} \exp\big(z_i \cdot \tilde{z}_a / \tau\big)}$$

wherein $z^{s}$ and $z^{t}$ denote the EEG features of the augmented EEG data in the current processing batch (containing $B$ EEG samples) learned by the student network and the teacher network, after passing through the projection network and being normalized; $A(i)$ denotes all samples of the current processing batch except the $i$-th; $P(i) \equiv \{p \in A(i) : y_p = y_i\}$ denotes the set of all positive samples sharing the label $y_i$; and $\tau$ denotes a scale parameter, set here to 0.07.
7. The electroencephalogram signal classification method based on the collaborative contrast regularized average teacher model according to claim 1, characterized in that in step 8, the classification loss, the consistency loss and the collaborative contrast loss are weighted and summed to form the final loss function, and the whole network is optimized with the back-propagation algorithm, specifically:

$$\mathcal{L}=\mathcal{L}_{cls}+\lambda_{1}\mathcal{L}_{con}+\lambda_{2}\mathcal{L}_{cc}$$

where $\lambda_{1}$ and $\lambda_{2}$ respectively denote the coefficients of the consistency loss function and the collaborative contrast loss function, which take the values 1 and 0.001 in the present invention. The student network is optimized with the back-propagation algorithm to obtain its parameters $\theta_{s}$ at the t-th update, while the teacher network parameters $\theta_{t}$ depend on the student network parameters $\theta_{s}$ and are updated by the exponential moving average method, with the formula:

θt=β·θt+(1-β)·θs (7)

where β denotes the update coefficient, which takes the value 0.999 in the present invention.
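The weighted total loss and the exponential-moving-average teacher update of equation (7) can be sketched as follows. This is a minimal illustration using the coefficient values stated in the claim (λ1 = 1, λ2 = 0.001, β = 0.999); the parameter dictionaries are a stand-in for real network weights.

```python
import numpy as np

def total_loss(l_cls, l_con, l_cc, lam1=1.0, lam2=0.001):
    # Weighted sum of classification, consistency and collaborative
    # contrast losses, as in the claimed final loss function.
    return l_cls + lam1 * l_con + lam2 * l_cc

def ema_update(theta_t, theta_s, beta=0.999):
    """Teacher update of equation (7): each teacher parameter moves
    toward the corresponding student parameter by a factor (1 - beta)."""
    return {k: beta * theta_t[k] + (1.0 - beta) * theta_s[k] for k in theta_t}
```

Only the student is updated by back-propagation; the teacher follows as a slow-moving average, which is what makes its predictions a stable consistency target.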
CN202210362532.4A 2022-04-07 2022-04-07 Electroencephalogram signal classification method based on collaborative contrast regularization average teacher model Pending CN114757273A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210362532.4A CN114757273A (en) 2022-04-07 2022-04-07 Electroencephalogram signal classification method based on collaborative contrast regularization average teacher model

Publications (1)

Publication Number Publication Date
CN114757273A true CN114757273A (en) 2022-07-15

Family

ID=82329734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210362532.4A Pending CN114757273A (en) 2022-04-07 2022-04-07 Electroencephalogram signal classification method based on collaborative contrast regularization average teacher model

Country Status (1)

Country Link
CN (1) CN114757273A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116258730A (en) * 2023-05-16 2023-06-13 先进计算与关键软件(信创)海河实验室 Semi-supervised medical image segmentation method based on consistency loss function
CN116258730B (en) * 2023-05-16 2023-08-11 先进计算与关键软件(信创)海河实验室 Semi-supervised medical image segmentation method based on consistency loss function

Similar Documents

Publication Publication Date Title
CN112307958B (en) Micro-expression recognition method based on space-time appearance motion attention network
CN112766355B (en) Electroencephalogram signal emotion recognition method under label noise
CN110288018A (en) A kind of WiFi personal identification method merging deep learning model
Wen et al. Image recovery via transform learning and low-rank modeling: The power of complementary regularizers
CN112800998B (en) Multi-mode emotion recognition method and system integrating attention mechanism and DMCCA
CN112381008B (en) Electroencephalogram emotion recognition method based on parallel sequence channel mapping network
CN113749657B (en) Brain electricity emotion recognition method based on multi-task capsule
CN114176607B (en) Electroencephalogram signal classification method based on vision transducer
CN113673434B (en) Electroencephalogram emotion recognition method based on efficient convolutional neural network and contrast learning
CN111461201A (en) Sensor data classification method based on phase space reconstruction
CN111931656B (en) User independent motor imagery classification model training method based on transfer learning
CN114757273A (en) Electroencephalogram signal classification method based on collaborative contrast regularization average teacher model
CN111242028A (en) Remote sensing image ground object segmentation method based on U-Net
CN117150346A (en) EEG-based motor imagery electroencephalogram classification method, device, equipment and medium
CN117132849A (en) Cerebral apoplexy hemorrhage transformation prediction method based on CT flat-scan image and graph neural network
CN115017960B (en) Electroencephalogram signal classification method based on space-time combined MLP network and application
CN116821764A (en) Knowledge distillation-based multi-source domain adaptive EEG emotion state classification method
CN113516101B (en) Electroencephalogram signal emotion recognition method based on network structure search
CN114283301A (en) Self-adaptive medical image classification method and system based on Transformer
CN115470863A (en) Domain generalized electroencephalogram signal classification method based on double supervision
CN117523626A (en) Pseudo RGB-D face recognition method
CN113553917A (en) Office equipment identification method based on pulse transfer learning
CN114358195A (en) Traditional Chinese medicine complex constitution identification method based on improved VGG16 network
Zhang et al. Hierarchical model compression via shape-edge representation of feature maps—an enlightenment from the primate visual system
CN114492560A (en) Electroencephalogram emotion classification method based on transfer learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination