CN114757273A - Electroencephalogram signal classification method based on collaborative contrast regularization average teacher model - Google Patents
- Publication number
- CN114757273A (application CN202210362532.4A)
- Authority
- CN
- China
- Prior art keywords
- eeg
- network
- eeg data
- collaborative
- contrast
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/2415 — Classification techniques based on parametric or probabilistic models
- A61B5/369 — Electroencephalography [EEG]
- A61B5/7207 — Signal processing for removal of noise induced by motion artifacts
- A61B5/725 — Waveform analysis using specific filters, e.g. Kalman or adaptive filters
- A61B5/7257 — Waveform analysis using Fourier transforms
- A61B5/7267 — Classification of physiological signals involving training the classification device
- G06F18/213 — Feature extraction, e.g. by transforming the feature space
- G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/217 — Validation; performance evaluation; active pattern learning techniques
- G06N3/045 — Combinations of networks
- G06N3/047 — Probabilistic or stochastic networks
- G06N3/084 — Backpropagation, e.g. using gradient descent
Abstract
The invention provides an electroencephalogram signal classification method based on a collaborative contrast regularization average teacher model, comprising the following steps: first, the EEG (electroencephalogram) signal is preprocessed, and a two-stage EEG data augmentation method generates EEG data under a new view in the time domain and EEG data under another view in the time-frequency domain; the student and teacher networks in the average teacher model then learn the EEG data features of the different views; a consistency loss between the student and teacher networks encourages consistent predictions for the EEG data of different views, a collaborative contrast loss encourages consistency of the EEG data structure across views, and these losses are finally weighted and summed with a cross-entropy loss to optimize the network parameters. The proposed two-stage EEG data augmentation method captures time-domain and time-frequency-domain variation factors simultaneously, and the proposed collaborative contrast regularization average teacher model learns both data-level information within each view and structure-consistency information across views, improving the robustness, discriminability and generalization of the model.
Description
Technical Field
The invention relates to an electroencephalogram signal identification method based on a collaborative contrast regularization average teacher model, and belongs to the field of electroencephalogram signal identification.
Background
A Brain-Computer Interface (BCI) establishes a direct connection between a human or animal and an external device, allowing information to be exchanged between the brain and the device. BCI systems are now widely used to help patients with stroke, spinal cord injury and other conditions control external devices and improve their quality of life. Electroencephalography (EEG) is a bioelectrical phenomenon produced by information transfer, in the form of ion currents, between populations of brain cells; it is an overall reflection of the electrophysiological activity of neurons at the surface of the cerebral cortex or scalp. Compared with other types of brain signals, EEG is convenient to acquire and offers high temporal resolution, giving it unique application value in BCI systems. Supported by large amounts of EEG training data, deep learning methods have greatly improved EEG recognition performance. However, EEG in some BCI paradigms (such as motor imagery EEG) suffers from low stationarity and low signal-to-noise ratio, and acquiring large amounts of training data is time-consuming and labor-intensive. EEG data augmentation methods can alleviate this shortage of training data. EEG contains rich temporal, spectral and spatial information and can be represented in multiple domains (such as the time domain and the time-frequency domain); however, existing EEG augmentation methods usually generate new data in a single domain, ignoring the variation factors of EEG data across domains, so the quality of the augmented EEG data is limited, which makes it difficult to build robust EEG recognition models.
In addition, most current deep learning methods consider only data-level information of EEG under different views, neglecting the view-invariant structure-level information of the EEG data, and therefore struggle to break through the performance bottleneck of EEG recognition models.
To address these problems, the invention builds on existing deep learning methods. It first improves the EEG data augmentation mechanism, establishing a novel two-stage EEG data augmentation mechanism that produces augmented representations of EEG data in multiple domains, so that the generated EEG data captures the variation factors of several domains simultaneously, enhancing the generalization performance of existing models. The invention then constructs a collaborative constraint among multi-view EEG data structures: contrast regularization learns the intra-class similarity and inter-class difference information at the EEG data-structure level, while the collaborative constraint encourages consistency of the EEG data structure across views, further improving the model's classification performance on EEG data.
Disclosure of Invention
The invention provides an EEG signal identification method based on a collaborative contrast regularization average teacher model. A two-stage EEG data augmentation method generates EEG data that captures the variation factors of multiple domains simultaneously. On this basis, an average teacher network model containing student and teacher networks learns EEG depth features under different views; a consistency loss function encourages the student and teacher networks to make consistent predictions for EEG data of different views, and a collaborative contrast loss function encourages consistency of the EEG data structure across views. Finally, the cross-entropy, consistency and collaborative contrast loss functions are weighted and summed into a final loss function used to optimize the network parameters.
The invention adopts the following technical scheme for solving the problems:
an electroencephalogram signal identification method based on a collaborative contrast regularization average teacher model comprises the following steps:
step 1: EEG data is acquired.
Step 2: Preprocess the EEG data, including band-pass filtering and artifact removal, and select a specific time period and frequency band to obtain EEG data {(x_n, y_n)}_{n=1}^{N}, where x_n ∈ R^(c×d) denotes the n-th EEG sample, c and d denote the number of electrode channels and the number of time sampling points of the EEG data, and y_n ∈ {0, 1, 2, 3} is the label of the n-th EEG sample.
Step 3: Segment and recombine the EEG data through different data augmentation modes to obtain augmented EEG data under different views, x_n^(1) and x_n^(2), where x_n^(2) is the EEG data obtained by the two-stage augmentation mode.
Step 4: Use a dedicated EEG decoding network as the backbone of the average teacher model, and input the augmented EEG data x_n^(1) and x_n^(2) into the student network and the teacher network of the average teacher model, respectively.
Step 5: Compute the classification loss from the real labels and the probability output obtained by passing the student network output through the softmax function.
Step 6: Pass the outputs of the student network and the teacher network of step 4 through the softmax function to obtain their respective probability outputs, and compute the consistency loss between the student network and the teacher network.
Step 7: Reduce the dimension of the multi-view EEG features output in step 4 with a projection network, then compute the collaborative contrast loss among the dimension-reduced features of each view.
Step 8: Form the final loss function as a weighted sum of the classification loss, the consistency loss and the collaborative contrast loss, and optimize the whole network with the back-propagation algorithm.
Preferably, the proposed EEG data augmentation method comprises two stages: first, a linear interpolation technique is introduced into a segmentation-and-recombination method in the EEG time domain; then the time-domain augmented data is converted to its time-frequency representation by the short-time Fourier transform, segmented and recombined in the time-frequency domain, and converted back into a time-domain EEG signal by the inverse short-time Fourier transform, which serves as the final augmented EEG data.
Preferably, the collaborative contrast regularization average teacher model takes a collaborative contrast regularization mechanism as its core module, expressed as:
L_ccr = Σ_i (−1/|P(i)|) Σ_{p∈P(i)} log [ exp(z_i·z_p/τ) / Σ_{a∈A(i)} exp(z_i·z_a/τ) ]
where z_i and z_p denote the results obtained by passing the EEG data features learned by the student (teacher) network under the different augmentation modes through the projection network and a normalization operation, A(i) denotes all samples except the i-th sample, P(i) ≡ {p ∈ A(i) : y_p = y_i} denotes the set of all samples whose label y_p is the same as y_i, and τ denotes a scale parameter.
Beneficial effects:
1. The invention introduces a two-stage EEG data augmentation method. In the first stage, a linear interpolation technique in the time domain of the EEG signal avoids the high-frequency noise that the traditional segmentation-and-recombination method introduces into the EEG data; in the second stage, the augmented EEG data from the first stage is further augmented by segmentation and recombination in the time-frequency domain of the EEG signal. The finally generated EEG data captures time-domain and time-frequency-domain variation factors simultaneously, enhancing the robustness and generalization capability of the model.
2. The current average teacher model can learn only data-level information of the EEG. The method introduces a collaborative contrast regularization loss into the average teacher model: on one hand, the contrast regularization constraint gives the normalized embedding of the EEG data better intra-class compactness and inter-class separability, so that EEG data-structure information is learned; on the other hand, a collaborative learning mechanism learns the consistency information of the EEG data structure across different views, improving the discriminative learning capability of the model.
Drawings
FIG. 1 is a network frame diagram of an electroencephalogram signal identification method based on a collaborative contrast regularization average teacher model.
Detailed Description
Please refer to fig. 1:
the present invention will be further explained with reference to examples.
The main implementation flow of the invention is as follows, and the related network framework is shown in figure 1.
Step 1: EEG data is acquired.
Step 2: preprocessing EEG data, including band-pass filtering and artifact removing, selecting specific time period and frequency band to obtain EEG dataWhereinThe number of electrode channels and the number of time sampling points, y, representing the nth EEG data, c, d representing the EEG data, respectivelynE {0, 1, 2, 3} is the label to which the nth EEG data corresponds.
Step 3: Segment and recombine the EEG data according to two different data augmentation modes to obtain augmented EEG data x_n^(1) and x_n^(2) under different views. The specific EEG data augmentation is as follows:
1) Transform the preprocessed EEG data from the time-domain representation to a time-frequency representation using the short-time Fourier transform. Divide each time-frequency EEG sample into several non-overlapping segments; according to the label of the current EEG sample, randomly select other EEG samples with the same label, and obtain the time-frequency representation of an augmented sample by randomly combining the segments at the same positions of the different samples. Convert the newly generated EEG sample back to the time-domain representation with the inverse short-time Fourier transform, forming the augmented data corresponding to the EEG data, denoted here as x_n^(1).
2) Divide the preprocessed EEG data into J non-overlapping segments, and randomly select segments from EEG samples with the same label for interpolation fitting, specifically:
x̂_{i,j} = α·x_{i,j} + (1 − α)·x_{R_{ij},j},  j = 1, …, J
where x_{i,j} denotes the j-th segment of the current EEG sample, x_{R_{ij},j} denotes the corresponding segment of a randomly selected EEG sample with the same label, R_{ij} denotes an arbitrary integer randomly selected from [1, N], and α is the interpolation fitting coefficient, taken as 0.999. The new data x̂_i formed after interpolation fitting is then processed again with method 1) above, yielding the two-stage augmented EEG data, denoted here as x_n^(2).
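The two augmentation modes above can be sketched as follows. All function names are ours, the STFT round-trip of mode 1) is omitted for brevity (the recombination logic is the same in either representation), and the toy constant-valued arrays merely make the segment bookkeeping visible:

```python
import numpy as np

rng = np.random.default_rng(0)

def interp_fit(x, pool, J, alpha=0.999):
    """Mode 2, first stage (sketch): split x into J non-overlapping
    segments and blend each with the same segment of a random
    same-label sample, x_new = alpha*x + (1-alpha)*x_r
    (alpha = 0.999 per the text)."""
    bounds = np.linspace(0, x.shape[-1], J + 1).astype(int)
    out = x.copy()
    for j in range(J):
        xr = pool[rng.integers(len(pool))]
        s = slice(bounds[j], bounds[j + 1])
        out[..., s] = alpha * x[..., s] + (1 - alpha) * xr[..., s]
    return out

def segment_recombine(samples, J):
    """Mode 1 (sketch): rebuild a sample by drawing, for each of J
    segment positions, the segment of a random same-label donor.
    The patent applies this to the STFT of the signal and inverts it
    afterwards; that round-trip is omitted here."""
    bounds = np.linspace(0, samples[0].shape[-1], J + 1).astype(int)
    out = np.empty_like(samples[0])
    for j in range(J):
        donor = samples[rng.integers(len(samples))]
        s = slice(bounds[j], bounds[j + 1])
        out[..., s] = donor[..., s]
    return out

# Toy same-label pool: constant-valued (4 channels x 20 samples) arrays.
pool = [np.full((4, 20), float(i)) for i in range(3)]
stage1 = interp_fit(pool[0], pool[1:], J=5)   # stays close to pool[0]
stage2 = segment_recombine(pool, J=4)         # patchwork of donors
```

With α = 0.999 the interpolated sample stays almost identical to the original, which is consistent with the text's goal of augmenting without injecting strong distortion.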
Step 4: Use the EEG decoding network Shallow ConvNet as the backbone of the average teacher model, and input the augmented EEG data x_n^(1) and x_n^(2) into the student network and the teacher network of the average teacher model, respectively.
Step 5: Compute the classification loss from the real labels and the probability output obtained by passing the student network output through the softmax function, with the formula:
L_cls = (1/2B) Σ_{i=1}^{B} [ ℓ_ce(f_s(x_i^(1)), y_i) + ℓ_ce(f_s(x_i^(2)), y_i) ]
where L_cls denotes the classification loss, ℓ_ce denotes the cross-entropy, x_i^(1) and x_i^(2) respectively denote the augmented EEG data obtained with the two methods in step 3, y_i denotes the real label of the sample, and f_s(·) denotes the prediction result of the student network.
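Step 5 can be sketched as ordinary softmax cross-entropy on the student outputs (a generic reconstruction; the helper names and toy logits are ours):

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def classification_loss(student_logits, labels):
    """Step 5 sketch: mean negative log-probability of the true class
    under the student network's softmax output."""
    p = softmax(student_logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

# Confident, correct student logits for two 4-class samples.
logits = np.array([[9.0, 0.0, 0.0, 0.0],
                   [0.0, 9.0, 0.0, 0.0]])
loss = classification_loss(logits, np.array([0, 1]))
```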
Step 6: Compute the consistency loss between the student network and the teacher network using the respective probability outputs obtained by passing their outputs in step 4 through the softmax function, with the formula:
L_con = (1/B) Σ_{i=1}^{B} ‖ f_s(x_i^(1)) − f_t(x_i^(2)) ‖²
where L_con denotes the consistency loss, x_i^(1) and x_i^(2) respectively denote the augmented EEG data obtained with the two methods in step 3, and f_s(·) and f_t(·) respectively denote the prediction results of the student network and the teacher network.
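Step 6 can be sketched as a mean-squared error between the two softmax outputs, the usual consistency loss in mean-teacher training (the exact form is not spelled out in this text; the names are ours):

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def consistency_loss(student_logits, teacher_logits):
    """Step 6 sketch: MSE between student and teacher softmax
    probabilities; zero when the two networks agree exactly."""
    return np.mean((softmax(student_logits) - softmax(teacher_logits)) ** 2)

a = np.array([[1.0, 2.0, 3.0, 4.0]])
loss_same = consistency_loss(a, a)                   # identical predictions
loss_diff = consistency_loss(a, a[:, ::-1].copy())   # disagreeing predictions
```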
Step 7: Use projection networks with the same structure to reduce the dimension of the multi-view EEG features learned in step 4 and extract deeper EEG features, then use the outputs of the two projection networks to compute the collaborative contrast loss between them, with the formula:
L_ccr = Σ_{i=1}^{2B} (−1/|P(i)|) Σ_{p∈P(i)} log [ exp(z_i·z_p/τ) / Σ_{a∈A(i)} exp(z_i·z_a/τ) ]
where z_i and z_p denote the normalized projection-network outputs of the EEG features learned by the student network and the teacher network from the augmented EEG data x^(1) and x^(2) of the current processing batch (containing B EEG samples), A(i) denotes all samples of the current processing batch except the i-th sample, P(i) ≡ {p ∈ A(i) : y_p = y_i} denotes the set of all positive samples with the same label y_p = y_i, and τ denotes a scale parameter, taken as 0.07 in the invention.
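The collaborative contrast loss of step 7 can be sketched as a supervised contrastive loss over the concatenated student and teacher projections, following the A(i)/P(i)/τ description above with τ = 0.07. This is a sketch under that assumption; the function name and toy embeddings are ours:

```python
import numpy as np

def collaborative_contrast_loss(z_s, z_t, labels, tau=0.07):
    """Step 7 sketch: supervised contrastive loss over the normalised
    projections of the student view (z_s) and teacher view (z_t).
    A(i) = all other samples, P(i) = same-label samples in A(i)."""
    z = np.concatenate([z_s, z_t])        # (2B, d), rows assumed unit-norm
    y = np.concatenate([labels, labels])
    sim = z @ z.T / tau                   # scaled cosine similarities
    n, total = len(z), 0.0
    for i in range(n):
        a_idx = [j for j in range(n) if j != i]        # A(i)
        p_idx = [j for j in a_idx if y[j] == y[i]]     # P(i)
        log_denom = np.log(np.sum(np.exp(sim[i, a_idx])))
        total += -np.mean([sim[i, p] - log_denom for p in p_idx])
    return total / n

# Two classes; embeddings already perfectly aligned across views,
# so the loss should be close to its minimum.
z_s = np.array([[1.0, 0.0], [0.0, 1.0]])
z_t = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = collaborative_contrast_loss(z_s, z_t, np.array([0, 1]))
```

Pulling same-label projections of both views together while pushing other samples away is what yields the intra-class compactness and cross-view structure consistency claimed in the text.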
Step 8: Weight and sum the classification loss, the consistency loss and the collaborative contrast loss to form the final loss function:
L = L_cls + λ_1·L_con + λ_2·L_ccr
where λ_1 and λ_2 respectively denote the coefficients of the consistency loss function and the collaborative contrast loss function, taken as 1 and 0.001 in the invention. The student network is optimized with the back-propagation algorithm, yielding the student network parameters θ_s after the t-th update; the teacher network parameters θ_t are updated from the student network parameters θ_s by the exponential moving average method, with the formula:
θ_t = β·θ_t + (1 − β)·θ_s (9)
where β denotes the update parameter, taken as 0.999 in the invention.
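The weighted total loss and the exponential-moving-average teacher update of step 8 can be sketched directly (λ₁ = 1, λ₂ = 0.001 and β = 0.999 follow the values stated in the text; the function names are ours):

```python
import numpy as np

def ema_update(theta_teacher, theta_student, beta=0.999):
    """Step 8, eq. (9): teacher parameters track the student by
    exponential moving average, theta_t = beta*theta_t + (1-beta)*theta_s."""
    return beta * theta_teacher + (1 - beta) * theta_student

def total_loss(l_cls, l_con, l_ccr, lam1=1.0, lam2=0.001):
    """Step 8 weighted sum: L = L_cls + lam1*L_con + lam2*L_ccr."""
    return l_cls + lam1 * l_con + lam2 * l_ccr

# One EMA step: teacher at 0, student at 1 -> teacher moves by 0.001.
theta_t = ema_update(np.zeros(3), np.ones(3))
loss = total_loss(1.0, 2.0, 3.0)
```

Because β is close to 1, the teacher is a slowly moving average of student snapshots, which is what makes its predictions a stable target for the consistency loss.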
The above description is only an embodiment of the present invention, and is not intended to limit the scope of the present invention, and all equivalent structures or equivalent flow transformations made by using the contents of the specification and the drawings, or directly or indirectly applied to the related technical fields, are included in the scope of the present invention.
Claims (7)
1. An electroencephalogram signal classification method based on a collaborative contrast regularization average teacher model is characterized by comprising the following steps:
step 1: acquiring EEG data;
step 2: preprocessing the EEG data, including band-pass filtering and artifact removal, and selecting a specific time period and frequency band to obtain original EEG data {(x_n, y_n)}_{n=1}^{N}, wherein x_n ∈ R^(c×d) represents the n-th EEG sample, c and d respectively represent the number of electrode channels and the number of time sampling points of the EEG signal, and y_n ∈ {0, 1, 2, 3} is the label of the n-th EEG sample;
step 3: segmenting and recombining the EEG data using different data augmentation modes to obtain augmented EEG data x_n^(1) and x_n^(2) under different views, wherein x_n^(2) is the EEG data obtained by the two-stage augmentation mode;
step 4: using a dedicated EEG decoding network as the backbone network of the average teacher model, and respectively inputting the augmented EEG data x_n^(1) and x_n^(2) into the student network and the teacher network of the average teacher model;
step 5: calculating the classification loss using the real labels and the probability output obtained by passing the output of the student network in step 4 through the softmax function;
step 6: obtaining respective probability outputs by passing the outputs of the student network and the teacher network in step 4 through the softmax function, and calculating the consistency loss between the student network and the teacher network;
step 7: reducing the dimension of the multi-view EEG features output in step 4 with a projection network, and then calculating the collaborative contrast loss among the dimension-reduced features of each view;
step 8: forming a final loss function by weighted summation of the classification loss, the consistency loss and the collaborative contrast loss, and optimizing the whole network with a back-propagation algorithm.
2. The electroencephalogram signal classification method based on the collaborative contrast regularization average teacher model according to claim 1, characterized in that in step 3, two augmentation modes mainly include the following:
1) transforming the preprocessed EEG data from the time-domain representation to a time-frequency representation using the short-time Fourier transform; dividing each time-frequency EEG sample into several non-overlapping segments by segmentation and recombination, then randomly selecting other EEG samples with the same label according to the label of the current EEG sample, and obtaining the time-frequency representation of an augmented sample by randomly combining the segments at the same positions of different samples; reconverting the newly generated EEG sample to the time-domain representation using the inverse short-time Fourier transform, forming the augmented data corresponding to the EEG data, denoted here as x_n^(1);
2) dividing the preprocessed EEG data into J non-overlapping segments, and randomly selecting segments from EEG samples with the same label for interpolation fitting, specifically:
x̂_{i,j} = α·x_{i,j} + (1 − α)·x_{R_{ij},j},  j = 1, …, J
wherein x_{i,j} represents the j-th segment of the current EEG sample, x_{R_{ij},j} represents the corresponding segment of a randomly selected EEG sample with the same label, R_{ij} represents a random integer selected from [1, N], and α is the interpolation fitting coefficient, taken as 0.999 in the invention; the new data x̂_i formed after interpolation fitting is processed again with the method of 1) above, obtaining the two-stage augmented data, denoted here as x_n^(2).
3. The EEG classification method based on collaborative contrast regularization average teacher model according to claim 1, wherein in step 4, EEG decoding network Shallow ConvNet is used as the backbone network of the average teacher model.
4. The electroencephalogram signal classification method based on the collaborative contrast regularization average teacher model according to claim 1, characterized in that in step 5, the classification loss is calculated from the real labels and the probability output obtained by passing the output of the student network through the softmax function, specifically:
L_cls = (1/2B) Σ_{i=1}^{B} [ ℓ_ce(f_s(x_i^(1)), y_i) + ℓ_ce(f_s(x_i^(2)), y_i) ]
wherein L_cls represents the classification loss, ℓ_ce represents the cross-entropy, x_i^(1) and x_i^(2) respectively represent the augmented EEG data obtained with the two methods in step 3, y_i represents the real label of the sample, and f_s(·) represents the prediction result of the student network.
5. The electroencephalogram signal classification method based on the collaborative contrast regularization average teacher model according to claim 1, characterized in that in step 6, the consistency loss is calculated using the respective probability outputs obtained by passing the outputs of the student network and the teacher network in step 4 through the softmax function, specifically:
L_con = (1/B) Σ_{i=1}^{B} ‖ f_s(x_i^(1)) − f_t(x_i^(2)) ‖²
wherein L_con represents the consistency loss, x_i^(1) and x_i^(2) respectively represent the augmented EEG data obtained with the two methods in step 3, and f_s(·) and f_t(·) respectively represent the prediction results of the student network and the teacher network.
6. The electroencephalogram signal classification method based on the collaborative contrast regularization average teacher model according to claim 1, characterized in that in step 7, the collaborative contrast loss is calculated using the outputs of the two projection networks, specifically:
L_ccr = Σ_{i=1}^{2B} (−1/|P(i)|) Σ_{p∈P(i)} log [ exp(z_i·z_p/τ) / Σ_{a∈A(i)} exp(z_i·z_a/τ) ]
wherein z_i and z_p represent the normalized projection-network outputs of the EEG features learned by the student network and the teacher network from the augmented EEG data x^(1) and x^(2) of the current processing batch (containing B EEG samples), A(i) represents all samples of the current processing batch except the i-th sample, P(i) ≡ {p ∈ A(i) : y_p = y_i} represents the set of all positive samples with the same label y_p = y_i, and τ represents a scale parameter, taken as 0.07 in the invention.
7. The electroencephalogram signal classification method based on the collaborative contrast regularization average teacher model according to claim 1, characterized in that in step 8, the classification loss, the consistency loss and the collaborative contrast loss are weighted and summed to form a final loss function, and the whole network is optimized with a back-propagation algorithm, specifically:
L = L_cls + λ_1·L_con + λ_2·L_ccr
wherein λ_1 and λ_2 respectively represent the coefficients of the consistency loss function and the collaborative contrast loss function, taken as 1 and 0.001 in the invention; the student network is optimized with the back-propagation algorithm, yielding the student network parameters θ_s after the t-th update, and the teacher network parameters θ_t are updated from the student network parameters θ_s by the exponential moving average method, with the formula:
θ_t = β·θ_t + (1 − β)·θ_s (7)
wherein β represents the update parameter, taken as 0.999 in the invention.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210362532.4A CN114757273A (en) | 2022-04-07 | 2022-04-07 | Electroencephalogram signal classification method based on collaborative contrast regularization average teacher model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114757273A (en) | 2022-07-15
Family
ID=82329734
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210362532.4A Pending CN114757273A (en) | 2022-04-07 | 2022-04-07 | Electroencephalogram signal classification method based on collaborative contrast regularization average teacher model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114757273A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116258730A (en) * | 2023-05-16 | 2023-06-13 | 先进计算与关键软件(信创)海河实验室 | Semi-supervised medical image segmentation method based on consistency loss function |
CN116258730B (en) * | 2023-05-16 | 2023-08-11 | 先进计算与关键软件(信创)海河实验室 | Semi-supervised medical image segmentation method based on consistency loss function |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112307958B (en) | Micro-expression recognition method based on space-time appearance motion attention network | |
CN112766355B (en) | Electroencephalogram signal emotion recognition method under label noise | |
CN110288018A (en) | A kind of WiFi personal identification method merging deep learning model | |
Wen et al. | Image recovery via transform learning and low-rank modeling: The power of complementary regularizers | |
CN112800998B (en) | Multi-mode emotion recognition method and system integrating attention mechanism and DMCCA | |
CN112381008B (en) | Electroencephalogram emotion recognition method based on parallel sequence channel mapping network | |
CN113749657B (en) | Brain electricity emotion recognition method based on multi-task capsule | |
CN114176607B (en) | Electroencephalogram signal classification method based on vision transducer | |
CN113673434B (en) | Electroencephalogram emotion recognition method based on efficient convolutional neural network and contrast learning | |
CN111461201A (en) | Sensor data classification method based on phase space reconstruction | |
CN111931656B (en) | User independent motor imagery classification model training method based on transfer learning | |
CN114757273A (en) | Electroencephalogram signal classification method based on collaborative contrast regularization average teacher model | |
CN111242028A (en) | Remote sensing image ground object segmentation method based on U-Net | |
CN117150346A (en) | EEG-based motor imagery electroencephalogram classification method, device, equipment and medium | |
CN117132849A (en) | Cerebral apoplexy hemorrhage transformation prediction method based on CT flat-scan image and graph neural network | |
CN115017960B (en) | Electroencephalogram signal classification method based on space-time combined MLP network and application | |
CN116821764A (en) | Knowledge distillation-based multi-source domain adaptive EEG emotion state classification method | |
CN113516101B (en) | Electroencephalogram signal emotion recognition method based on network structure search | |
CN114283301A (en) | Self-adaptive medical image classification method and system based on Transformer | |
CN115470863A (en) | Domain generalized electroencephalogram signal classification method based on double supervision | |
CN117523626A (en) | Pseudo RGB-D face recognition method | |
CN113553917A (en) | Office equipment identification method based on pulse transfer learning | |
CN114358195A (en) | Traditional Chinese medicine complex constitution identification method based on improved VGG16 network | |
Zhang et al. | Hierarchical model compression via shape-edge representation of feature maps—an enlightenment from the primate visual system | |
CN114492560A (en) | Electroencephalogram emotion classification method based on transfer learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||