CN117056788A - EEG signal classification method and device based on supervised comparison learning - Google Patents

EEG signal classification method and device based on supervised contrastive learning

Info

Publication number
CN117056788A
Authority
CN
China
Prior art keywords
sample, eeg, samples, data, feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311315334.3A
Other languages
Chinese (zh)
Other versions
CN117056788B (en)
Inventor
冯琳清
田琪
吴雯
罗曼丽
朱琴
蔡涛
魏依娜
唐弢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202311315334.3A
Publication of CN117056788A
Application granted
Publication of CN117056788B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/0464 - Convolutional networks [CNN, ConvNet]
    • G06N 3/08 - Learning methods
    • G06N 3/09 - Supervised learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an EEG signal classification method and device based on supervised contrastive learning, comprising the following steps: inputting the EEG signal into a pre-trained feature extraction model to obtain EEG features; and classifying the EEG features with a classifier. The training process of the feature extraction model comprises: acquiring a labeled EEG dataset; intercepting the EEG data, where EEG data samples with the same label are positive samples and EEG data samples with different labels are negative samples; performing sliding-window data enhancement on the intercepted EEG data samples to obtain enhanced samples; taking the intercepted EEG data samples and the enhanced samples as a training set; and training the feature extraction model on the training set with a loss function that gradually increases the distance between negative-sample feature vectors, gradually decreases the distance between positive-sample feature vectors, and further gradually decreases the distance between the enhanced-sample feature vectors and the positive-sample feature vectors.

Description

EEG signal classification method and device based on supervised contrastive learning
Technical Field
The application relates to the field of electroencephalogram (EEG) signal processing, and in particular to an EEG signal classification technique based on supervised contrastive learning.
Background
EEG is a record of the brain's spontaneous bioelectric potentials, amplified and recorded from the scalp by precise electronic instruments; it captures the spontaneous, rhythmic electrical activity of brain cell populations through electrodes. As an important means of reflecting and revealing brain activity, EEG plays an important role in exploring brain structure and mechanisms. Clinically, it is widely applied to the diagnosis and treatment of brain diseases such as sleep disorders, epilepsy and senile dementia; in the research field, it is an important tool in brain-computer interface (Brain-Computer Interface) research, used to study brain activity mechanisms, construct brain-computer interaction robots and realize human-machine hybrid intelligent applications.
Different brain activities often produce corresponding EEG signals, and exploring the association between EEG signals and specific brain activities is a prerequisite for understanding the mechanisms of brain activity. Classifying EEG signals based on the features they express helps reveal these associations. Clinically, distinguishing the EEG signals of epileptic patients from normal EEG signals yields the characteristics of epileptic EEG, enabling diagnosis and monitoring of epilepsy and improving diagnostic and therapeutic outcomes for patients; in brain-computer interface research, distinguishing the EEG signals produced under specific brain behaviors helps decode brain activity mechanisms from EEG signals and realize brain-computer interaction. An accurate and efficient EEG signal classification method is therefore necessary for brain science research and applications.
The traditional EEG classification approach is to manually extract EEG signal features after filtering, segmentation and other preprocessing, and then classify those features with a classifier. This approach requires experts to select suitable features and classifiers by combining empirical knowledge with the characteristics of the application scenario; its classification performance depends on the expert's knowledge level, and it is difficult to generalize to other scenarios. With the continuous development of machine learning, more and more researchers apply related techniques to EEG classification: by constructing network models of different structures, the latent characteristics of large amounts of EEG data are mined to achieve accurate and efficient classification. Although some related methods have been proposed for EEG data classification, the prior art still has certain deficiencies. The EEG signal is a continuous time series, so errors in labeling or differences in the starting points of segmentation lead to differences in the resulting EEG segments. Even for EEG segments containing the same label, differences in the position of the labeled data within a sample affect the finally extracted features and, in turn, the final classification accuracy. There is thus still room to further improve the classification of EEG data and thereby facilitate the development of related applications and research.
Disclosure of Invention
Aiming at the defects of the prior art, the application provides an EEG signal classification method and device based on supervised contrastive learning, which fully utilize the label information present in EEG signals to realize accurate classification of EEG signal types.
In a first aspect, an embodiment of the present application provides a method for classifying EEG signals based on supervised contrast learning, the method specifically comprising:
inputting the EEG signals to a pre-trained feature extraction model for feature extraction to obtain EEG features;
classifying the extracted EEG features by a classifier;
the training process of the feature extraction model comprises the following steps:
acquiring a labeled EEG dataset;
intercepting the EEG data according to the label information of the EEG data, wherein EEG data samples with the same label are positive samples and EEG data samples with different labels are negative samples;
carrying out data enhancement on the intercepted EEG data sample based on a sliding window method to obtain an enhanced sample;
taking the intercepted EEG data samples and the enhanced samples as a training set; and training the feature extraction model with the training set under a loss function that gradually increases the distance between feature vectors corresponding to negative samples, gradually decreases the distance between feature vectors corresponding to positive samples, and further gradually decreases the distance between the feature vectors corresponding to the enhanced samples among the positive samples and the feature vectors corresponding to the other positive samples.
In a second aspect, embodiments of the present application provide an EEG signal classification apparatus based on supervised contrast learning, the apparatus comprising:
the input unit is used for inputting the EEG signals into a pre-trained feature extraction model to perform feature extraction so as to obtain EEG features;
a generation unit for classifying the extracted EEG features by a classifier;
the training process of the feature extraction model comprises the following steps:
acquiring a labeled EEG dataset;
intercepting the EEG data according to the label information of the EEG data, wherein EEG data samples with the same label are positive samples and EEG data samples with different labels are negative samples;
carrying out data enhancement on the intercepted EEG data sample based on a sliding window method to obtain an enhanced sample;
taking the intercepted EEG data samples and the enhanced samples as a training set; and training the feature extraction model with the training set under a loss function that gradually increases the distance between feature vectors corresponding to negative samples, gradually decreases the distance between feature vectors corresponding to positive samples, and further gradually decreases the distance between the feature vectors corresponding to the enhanced samples among the positive samples and the feature vectors corresponding to the other positive samples.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, the memory being coupled to the processor; the memory is used for storing program data, and the processor is used for executing the program data to realize the EEG signal classification method based on supervised contrast learning.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the above-described supervised contrast learning based EEG signal classification method.
Compared with the prior art, the application has the following beneficial technical effects:
1) The application provides an EEG signal classification method based on supervised contrastive learning that makes full use of the data labels already present in EEG data to realize effective classification of different types of EEG data. The method mines the latent differences between EEG data of different categories based on the data labels, reduces the dependence on scenario and expert knowledge, and can be applied in various EEG data classification scenarios.
2) The sliding-window data enhancement method increases the number of same-label samples and improves their diversity, reducing the feature-extraction differences caused by different positions of the labeled fragment within a sample, lessening their influence on the final classification result and improving classification accuracy.
3) A contrastive loss function is provided that distinguishes enhanced samples from ordinary same-label samples, comprehensively considering the influence during training of the enhanced samples, the same-label samples (i.e., positive samples) and the different-label samples (i.e., negative samples), so that the EEG features extracted by the feature extraction model can effectively distinguish EEG data with different labels.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort to a person skilled in the art.
FIG. 1 is a flow chart of an EEG signal classification method based on supervised contrast learning according to an embodiment of the present application;
FIG. 2 is a flowchart of the training steps of a feature extraction model according to an embodiment of the present application;
FIG. 3 is a training frame diagram of a feature extraction model provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a feature extraction model according to an embodiment of the present application;
FIG. 5 is a graph of the loss value variation during training according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an EEG signal classification device based on supervised contrast learning according to an embodiment of the present application;
fig. 7 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information without departing from the scope of the application. Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining".
The EEG signal classification method and device based on supervised contrastive learning provided by the application are explained in detail below with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides a method for classifying EEG signals based on supervised contrast learning, the method specifically comprising:
s1, inputting the EEG signals into a pre-trained feature extraction model to perform feature extraction, and obtaining EEG features.
As shown in fig. 2 and 3, the feature extraction model is obtained through training:
s101, acquiring a labeled EEG data set, and filtering and denoising.
Specifically, because EEG signals are affected by factors such as environmental noise and muscle movement artifacts during acquisition, filtering and denoising must be performed on the EEG signals according to the requirements of the actual application scenario, filtering out interference and retaining the effective signal components.
In an embodiment for extracting EEG sleep spindle waves, the main frequency band of sleep spindles is 11-16 Hz, so a band-pass filter with cut-off frequencies of 11 Hz and 16 Hz can be selected to filter the data and remove noise from the signal.
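This filtering step can be sketched as follows (a minimal illustration assuming SciPy; the filter order and the use of zero-phase filtering are implementation choices not specified in the patent):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_filter(eeg, fs=200.0, low=11.0, high=16.0, order=4):
    """Zero-phase band-pass filter keeping the sleep-spindle band (11-16 Hz)."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="bandpass")
    return filtfilt(b, a, eeg)  # forward-backward filtering avoids phase distortion

# Example: filter 10 s of synthetic single-channel EEG sampled at 200 Hz
raw = np.random.randn(2000)
filtered = bandpass_filter(raw)
```

Components outside the 11-16 Hz band are attenuated while the signal length is preserved.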
S102, intercepting the EEG data according to the label information of the EEG data. In the subsequent feature extraction model training, the EEG data samples with the same label are positive samples, and the EEG data samples with different labels are negative samples.
Wherein intercepting EEG data based on the tag information of the EEG data comprises: acquiring the length of the tag information, setting the interception length of the EEG data according to the sampling frequency of the EEG data, the length of the tag information and the length requirement of the input data for training the feature extraction model, and starting interception from each starting point of the tag information.
In an EEG spindle embodiment, the spindle label length varies from 0.5 to 1 second. The sampling frequency of the EEG data is 200 Hz, so 256 is chosen as the interception length, and a segment starting from each starting point of the label information is intercepted as a positive sample. Since the EEG data in this embodiment contain only one type of label, the subsequent application scenario is to identify labeled data within normal EEG signals. Therefore, this embodiment also randomly intercepts 256-sample segments from the regions of the EEG dataset not labeled by experts as negative samples, and the feature extraction model is ultimately trained to distinguish the two classes.
S103, performing data enhancement on each intercepted EEG data sample based on a sliding-window method to obtain a plurality of corresponding enhanced samples, where the label of each enhanced sample is consistent with that of its source sample; the intercepted EEG data samples and the enhanced samples form the training set for the subsequent training of the feature extraction model.
Specifically, data enhancement of the intercepted positive and negative samples based on the sliding-window method includes: let the length of the intercepted sample be M and the effective length of the label information be N, with M chosen in the range N < M < 2N; the window is slid multiple times with step size S, and enhanced samples are intercepted to increase the number of samples containing the same label fragment. The length of the sliding window equals the sample length, and S ranges from 0.125×(M-N) to 0.25×(M-N). The intercepted EEG data samples and the enhanced samples together form the training data of the feature extraction model.
It should be noted that the length of the EEG data samples used to train the feature extraction model is set to M, where M must be greater than the effective label length N so that an intercepted segment can contain a complete expert-labeled fragment; to reduce invalid interference information entering the model, M is selected in the range N < M < 2N.
As in one embodiment, the maximum length of the expert label is 200, the EEG interception length M is 256, and the step size is selected to be about 0.25 times the difference M-N, i.e. the sliding window performs the data enhancement operation with step size S=16. The intercepted positive and negative samples and the enhanced samples together form the training data of the feature extraction model.
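The sliding-window augmentation under these settings can be sketched as follows (function and variable names are illustrative; the generated windows each fully contain the labeled fragment, differing only in where the fragment sits within the window):

```python
import numpy as np

def sliding_window_augment(signal, label_start, label_len, M=256, S=16):
    """Generate enhanced samples: windows of length M that each fully
    contain the labeled fragment [label_start, label_start + label_len)."""
    # earliest window start whose window still covers the end of the label
    earliest = max(0, label_start + label_len - M)
    samples = []
    start = earliest
    # latest admissible start is the label start itself (window covers the label head)
    while start <= label_start and start + M <= len(signal):
        samples.append(signal[start:start + M])
        start += S
    return np.stack(samples)

sig = np.random.randn(1000)
# a labeled fragment of length 200 starting at sample 300
aug = sliding_window_augment(sig, label_start=300, label_len=200)
```

With M=256, N=200 and S=16 this yields four enhanced samples per labeled fragment, consistent with roughly (M-N)/S windows.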
The feature extraction model consists of a coding layer and a projection layer. The coding layer is composed of several convolution layers and pooling layers; it reduces the dimensionality of the input EEG signal, compressing it and extracting hidden EEG features. The projection layer is composed of several linear layers; its function is to project the extracted EEG features into the same feature space so that the distances between their feature vectors can be calculated.
First, a model structure of a feature extraction model is described: as shown in fig. 4, the feature extraction model includes a coding layer and a projection layer, where the coding layer performs a dimension reduction and feature extraction on an input EEG signal to obtain a first EEG feature vector; the projection layer projects the extracted first EEG feature vector to the same space to obtain a second EEG feature vector. The coding layer comprises a first convolution layer, a batch normalization layer (BN layer), a pooling layer and a second convolution layer which are sequentially connected; the projection layer comprises a first linear layer and a second linear layer which are sequentially connected.
Further, after an EEG signal sample of length 256 passes through the encoder, data features are extracted through convolution and pooling operations, and a feature vector of shape (4, 1, 30) is output. The projection layer then projects this feature vector into the feature space, and the final output is a feature vector of shape (1, 16). The detailed parameters of each layer of the model are shown in Table 1.
Table 1: Detailed parameter table of the encoder model
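Since the detailed parameter table is not reproduced in this text, the following PyTorch sketch shows one plausible layer configuration consistent with the stated shapes (256-sample input, a (4, 30) encoder output, a 16-dimensional projection); the kernel sizes, strides and hidden width are assumptions:

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Coding layer (conv -> BN -> pool -> conv) plus projection layer (two linear layers)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=7, stride=2),   # (B, 4, 125)
            nn.BatchNorm1d(4),
            nn.ReLU(),
            nn.MaxPool1d(2),                            # (B, 4, 62)
            nn.Conv1d(4, 4, kernel_size=3, stride=2),   # (B, 4, 30)
        )
        self.projection = nn.Sequential(
            nn.Flatten(),                               # (B, 120)
            nn.Linear(120, 64),
            nn.ReLU(),
            nn.Linear(64, 16),                          # (B, 16) feature vector
        )

    def forward(self, x):
        return self.projection(self.encoder(x))

model = FeatureExtractor()
z = model(torch.randn(2, 1, 256))   # two EEG samples of length 256
```

Any configuration whose encoder output is 4 channels of length 30 and whose projection ends in 16 dimensions would match the shapes described above.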
S104, randomly extracting a batch of samples in the training set, inputting the samples into the feature extraction model, and setting a loss function, wherein the loss function is used for gradually increasing the distance between feature vectors corresponding to negative samples, gradually decreasing the distance between feature vectors corresponding to positive samples, and further gradually decreasing the distance between the feature vectors corresponding to enhanced samples in the positive samples and the feature vectors corresponding to other positive samples.
Specifically, the step S104 includes:
the decimated training set data is input into the model (i.e., training data for one batch), which in this example is sized at 64.
The training data input to the feature extraction model are normalized; in this example, max-min normalization is used, with the conversion formula:
y = (x - min(x)) / (max(x) - min(x));
where min(x) and max(x) denote the minimum and maximum values in the training data set x, respectively.
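The max-min normalization formula can be sketched with NumPy:

```python
import numpy as np

def min_max_normalize(x):
    """Scale training data x to the range [0, 1]: y = (x - min(x)) / (max(x) - min(x))."""
    return (x - x.min()) / (x.max() - x.min())

batch = np.array([2.0, 4.0, 6.0, 10.0])
norm = min_max_normalize(batch)   # -> [0.0, 0.25, 0.5, 1.0]
```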
Each sample of the batch is input into the feature extraction model for feature extraction, and the feature vector z_i obtained after each sample i passes through the projection layer is calculated.
The loss function for an input sample i is expressed as follows:
L_i = -α · Σ_{e∈E} log[ exp(z_i·z_e/τ) / Σ_{a∈A} exp(z_i·z_a/τ) ] - β · Σ_{p∈P} log[ exp(z_i·z_p/τ) / Σ_{a∈A} exp(z_i·z_a/τ) ];
α = |E| / (|E| + |P|);
β = |P| / (|E| + |P|);
where the sample set I is the union of the intercepted samples and their corresponding enhanced samples; i is any sample in the set I input to the model; B is, for the input sample i, the set of all positive samples (samples with the same label as i); E is the set of enhanced samples generated from sample i; P is the set of all same-label samples other than the enhanced samples of i; A is the set of all negative samples (samples whose label differs from that of input sample i); e is any sample in the set E and |E| is the total number of enhanced samples; a is any sample in the set A; p is any sample in the set P and |P| is the total number of samples in P; z_i denotes the feature vector of sample i obtained by the feature extraction model, z_e the feature vector of an enhanced sample, z_a the feature vector of a negative sample, and z_p the feature vector of a positive sample other than the enhanced samples; τ denotes a hyper-parameter (temperature) of the feature extraction model; and α and β denote the influence factors of the enhanced samples and of the positive samples, respectively.
In the loss function, for the feature vector z_i of input sample i, z_i·z_e denotes the similarity between input sample i and its enhanced samples, z_i·z_p the similarity between i and the positive samples other than its own enhanced samples, and z_i·z_a the similarity between i and the negative samples. To optimize the model during training, the value of the loss function must be continually reduced. Reducing the loss requires increasing the numerator terms (i.e., the z_i·z_e and z_i·z_p parts) and decreasing the denominator term (i.e., the z_i·z_a part). The training process therefore gradually shortens the distance between input sample i and the positive samples while enlarging its distance from the negative samples. The weight coefficients α and β further adjust the influence on the loss of the samples enhanced from i and of the other positive samples, so that the feature extraction model distinguishes different classes of samples more accurately during training.
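The per-sample loss can be sketched in NumPy as follows. The exact form of the patented formula is not fully legible in this text, so the weighting of the two positive-sample groups by α and β here is an assumption; what the sketch preserves is the stated behavior: increasing similarity to enhanced and positive samples lowers the loss, and increasing similarity to negative samples raises it.

```python
import numpy as np

def sample_loss(z_i, Z_e, Z_p, Z_a, tau=0.1):
    """Supervised contrastive loss for one input sample i.

    z_i : (d,) feature vector of sample i
    Z_e : (|E|, d) feature vectors of i's enhanced samples
    Z_p : (|P|, d) feature vectors of other same-label (positive) samples
    Z_a : (|A|, d) feature vectors of negative samples
    """
    alpha = len(Z_e) / (len(Z_e) + len(Z_p))   # influence factor of enhanced samples
    beta = len(Z_p) / (len(Z_e) + len(Z_p))    # influence factor of other positives
    denom = np.sum(np.exp(Z_a @ z_i / tau))    # summed similarity to negatives
    loss_e = -np.sum(np.log(np.exp(Z_e @ z_i / tau) / denom))
    loss_p = -np.sum(np.log(np.exp(Z_p @ z_i / tau) / denom))
    return alpha * loss_e + beta * loss_p

rng = np.random.default_rng(0)
d = 16
z_i = rng.normal(size=d)
Z_e, Z_p, Z_a = (rng.normal(size=(2, d)), rng.normal(size=(3, d)),
                 rng.normal(size=(5, d)))
loss = sample_loss(z_i, Z_e, Z_p, Z_a)
```

Moving the positive feature vectors toward z_i strictly decreases this loss while moving them away increases it, matching the training behavior described above.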
Throughout the training process, the feature extraction model continuously updates its parameters by back propagation to ensure that suitable parameters are learned. As shown in fig. 5, training ends when the value of the loss function stabilizes, yielding a trained feature extraction model.
It should be noted that contrastive learning is a machine learning technique that, during model training, continuously narrows the distance between the feature vectors of similar samples while enlarging the distance between those of dissimilar samples, giving the model the ability to distinguish different samples. By effectively exploiting the label information of the data, contrastive learning can classify data more effectively. In the conventional contrastive loss calculation, enhanced samples are treated identically to other same-label samples, without distinguishing between them. In this example, the loss function separately accounts for the influence of the enhanced samples, the same-label samples (i.e., positive samples) and the different-label samples (i.e., negative samples) on the training of the final model, ultimately improving the model's classification accuracy.
S2, classifying the extracted EEG features through a classifier.
In this example, the classifier uses a support vector machine (SVM).
Specifically, an SVM classifier is created that is trained using the extracted EEG features and corresponding labels. After training is completed, the EEG data without labels can be classified using the SVM classifier.
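This classification stage can be sketched with scikit-learn; the synthetic 16-dimensional vectors below stand in for the EEG features produced by the feature extraction model:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Synthetic 16-dimensional "EEG features": two classes with shifted means
X_pos = rng.normal(loc=1.0, size=(50, 16))
X_neg = rng.normal(loc=-1.0, size=(50, 16))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 50 + [0] * 50)

clf = SVC(kernel="rbf")   # create the SVM classifier
clf.fit(X, y)             # train on extracted features and their labels
pred = clf.predict(X)     # classify (here: the training features themselves)
```

In practice, `clf.predict` would be applied to features extracted from unlabeled EEG data after training.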
As shown in fig. 6, the present application provides an EEG signal classification apparatus based on supervised contrast learning, the apparatus comprising:
and the input unit is used for inputting the EEG signals into a pre-trained feature extraction model to perform feature extraction so as to obtain EEG features.
And the generation unit is used for classifying the extracted EEG features through a classifier.
The training process of the feature extraction model comprises the following steps:
a labeled EEG dataset is acquired.
And intercepting the EEG data according to the label information of the EEG data, wherein EEG data samples with the same label are positive samples, and EEG data samples with different labels are negative samples.
And carrying out data enhancement on the intercepted EEG data sample based on a sliding window method to obtain an enhanced sample.
Taking the intercepted EEG data sample and the enhanced sample as a training set; and setting a loss function by utilizing a training set training feature extraction model, wherein the loss function is used for gradually increasing the distance between feature vectors corresponding to negative samples, gradually decreasing the distance between feature vectors corresponding to positive samples, and gradually decreasing the distance between the feature vectors corresponding to enhanced samples and the feature vectors corresponding to positive samples.
As shown in fig. 7, an embodiment of the present application provides an electronic device including a memory 101 for storing one or more programs and a processor 102; when the one or more programs are executed by the processor 102, the method of any one of the first aspects described above is implemented.
The electronic device further includes a communication interface 103; the memory 101, the processor 102 and the communication interface 103 are electrically connected to one another, directly or indirectly, to realize data transmission or interaction. For example, these components may be electrically connected to one another via one or more communication buses or signal lines. The memory 101 may be used to store software programs and modules, which the processor 102 executes to perform various functional applications and data processing. The communication interface 103 may be used for communication of signaling or data with other node devices.
The memory 101 may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), etc.
The processor 102 may be an integrated circuit chip with signal processing capability. The processor 102 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In the embodiments provided in the present application, it should be understood that the disclosed method and system may be implemented in other manners. The above-described method and system embodiments are merely illustrative; for example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods, systems, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems which perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
In another aspect, an embodiment of the application provides a computer-readable storage medium having stored thereon a computer program which, when executed by the processor 102, implements a method as in any of the first aspects described above. The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, or the part thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. The specification and examples are to be regarded in an illustrative manner only.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof.

Claims (10)

1. A method for classifying EEG signals based on supervised contrast learning, the method specifically comprising:
inputting the EEG signals to a pre-trained feature extraction model for feature extraction to obtain EEG features;
classifying the extracted EEG features by a classifier;
the training process of the feature extraction model comprises the following steps:
acquiring a labeled EEG dataset;
intercepting EEG data samples according to the label information of the EEG data, wherein EEG data samples with the same label are mutually positive samples, and EEG data samples with different labels are mutually negative samples;
carrying out data enhancement on the intercepted EEG data samples based on a sliding-window method to obtain enhanced samples;
taking the intercepted EEG data samples and the enhanced samples as a training set; training the feature extraction model with the training set and setting a loss function, wherein the loss function is used for gradually increasing the distance between feature vectors corresponding to negative samples, gradually decreasing the distance between feature vectors corresponding to positive samples, and further gradually decreasing the distance between the feature vectors corresponding to the enhanced samples among the positive samples and the feature vectors corresponding to the other positive samples.
2. The supervised contrast learning based EEG signal classification method according to claim 1, wherein intercepting EEG data according to the label information of the EEG data comprises:
acquiring the length of the label information, setting the interception length of the EEG data according to the sampling frequency of the EEG data, and starting interception from each starting point of the label information.
3. The supervised contrast learning based EEG signal classification method according to claim 1, wherein data enhancement of extracted positive and negative samples based on a sliding window method comprises:
the length of the extracted sample is M, the effective length of the label information is N, and M is selected in the range N < M < 2N; the sample is slid multiple times with a step length S by the sliding-window method, wherein the length of the sliding window is the same as the length of the sample; and the value of S ranges from 0.125×(M−N) to 0.25×(M−N).
4. The supervised contrast learning based EEG signal classification method according to claim 1, wherein the feature extraction model comprises a coding layer and a projection layer, wherein the coding layer performs a down-scaling feature extraction on the input EEG signal to obtain a first EEG feature vector; the projection layer projects the extracted first EEG feature vector to the same feature space to obtain a second EEG feature vector.
5. The supervised contrast learning based EEG signal classification method according to claim 4, wherein the coding layer comprises a first convolution layer, a batch normalization layer, a pooling layer, a second convolution layer connected in sequence; the projection layer comprises a first linear layer and a second linear layer which are sequentially connected.
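For illustration only, the coding layer of claims 4-5 (first convolution layer, batch normalization layer, pooling layer, second convolution layer) followed by the projection layer (two linear layers) might be sketched as a NumPy forward pass. All kernel sizes, channel counts, the ReLU nonlinearity, and the inference-style normalization are assumptions not specified in the claims:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1-D convolution. x: (C_in, L), w: (C_out, C_in, K), b: (C_out,)."""
    c_out, c_in, k = w.shape
    l_out = x.shape[1] - k + 1
    out = np.empty((c_out, l_out))
    for o in range(c_out):
        for t in range(l_out):
            out[o, t] = np.sum(w[o] * x[:, t:t + k]) + b[o]
    return out

def batchnorm1d(x, eps=1e-5):
    """Per-channel normalization over time (inference-style, unit scale)."""
    mu = x.mean(axis=1, keepdims=True)
    var = x.var(axis=1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def maxpool1d(x, k=2):
    l = x.shape[1] // k
    return x[:, :l * k].reshape(x.shape[0], l, k).max(axis=2)

def relu(x):
    return np.maximum(x, 0.0)

def feature_extractor(x, p):
    # Coding layer: conv -> batchnorm -> pool -> conv  (first EEG feature vector)
    h = relu(conv1d(x, p["w1"], p["b1"]))
    h = batchnorm1d(h)
    h = maxpool1d(h, 2)
    h = relu(conv1d(h, p["w2"], p["b2"]))
    z1 = h.reshape(-1)
    # Projection layer: two linear layers  (second EEG feature vector)
    return p["l2"] @ relu(p["l1"] @ z1)

# Toy shapes (assumed): 8-channel EEG, 128 time points
x = rng.standard_normal((8, 128))
w1, b1 = rng.standard_normal((16, 8, 7)) * 0.1, np.zeros(16)  # 16 filters, kernel 7
w2, b2 = rng.standard_normal((8, 16, 5)) * 0.1, np.zeros(8)   # 8 filters, kernel 5
flat = 8 * ((128 - 7 + 1) // 2 - 5 + 1)  # flattened size after the coding layer
l1 = rng.standard_normal((64, flat)) * 0.05
l2 = rng.standard_normal((32, 64)) * 0.05
z2 = feature_extractor(x, {"w1": w1, "b1": b1, "w2": w2, "b2": b2, "l1": l1, "l2": l2})
print(z2.shape)
```

The projection layer maps every encoded sample into the same 32-dimensional feature space, so that distances between the second feature vectors are directly comparable in the contrastive loss.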
6. The supervised contrast learning based EEG signal classification method according to claim 1, wherein the expression of the loss function is:
wherein the sample set I is the union of the extracted samples and the corresponding enhanced samples; i is any sample in the sample set I input to the model; for the sample i, B is the set of all its positive samples, E is the set of enhanced samples of sample i, P is the set of all samples with the same label except the enhanced samples of i, and A is the set of all negative samples; e is any sample in the set E, and |E| is the total number of enhanced samples; a is any sample in the set A; p is any sample in the set P, and |P| is the total number of samples in the set P; z_i represents the feature vector of sample i obtained by the feature extraction model, z_e is the feature vector of an enhanced sample obtained by the feature extraction model, z_a is the feature vector of a negative sample obtained by the feature extraction model, and z_p is the feature vector of a positive sample other than the enhanced samples obtained by the feature extraction model; τ represents a hyper-parameter of the feature extraction model, and α and β represent the influence factors of the enhanced samples and the positive samples, respectively.
7. The supervised contrast learning based EEG signal classification method according to claim 6, wherein the influence factor of the enhanced samples is the ratio of the number of enhanced samples to the total number of enhanced samples and positive samples; and the influence factor of the positive samples is the ratio of the number of positive samples to the total number of enhanced samples and positive samples.
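Since the expression of the loss function in claim 6 is not reproduced in the text above, the sketch below implements one plausible reading: a weighted variant of the standard supervised contrastive loss, with the positive terms split into enhanced and other same-label samples and weighted by the influence factors α and β of claim 7. The function name, the batch layout, and the exact denominator are assumptions, not the application's exact formula:

```python
import numpy as np

def weighted_supcon_loss(z, labels, enhanced_of, tau=0.1):
    """Weighted supervised contrastive loss.

    z:           (n, d) L2-normalized feature vectors
    labels:      (n,) integer class labels
    enhanced_of: (n,) index of the anchor a sample was enhanced from, or -1
    For each anchor i, positives split into its enhanced samples E and the
    other same-label samples P; per claim 7 the weights are
    alpha = |E| / (|E| + |P|) and beta = |P| / (|E| + |P|).
    """
    n = len(labels)
    sim = z @ z.T / tau  # tau: temperature hyper-parameter
    losses = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        denom = np.sum(np.exp(sim[i, others]))  # contrast against all others
        E = [j for j in others if enhanced_of[j] == i]
        P = [j for j in others if labels[j] == labels[i] and enhanced_of[j] != i]
        if not E and not P:
            continue
        alpha = len(E) / (len(E) + len(P))
        beta = len(P) / (len(E) + len(P))
        term = 0.0
        if E:
            term += alpha / len(E) * sum(np.log(np.exp(sim[i, e]) / denom) for e in E)
        if P:
            term += beta / len(P) * sum(np.log(np.exp(sim[i, p]) / denom) for p in P)
        losses.append(-term)
    return float(np.mean(losses))

# Toy batch: two originals (classes 0 and 1) and one enhanced copy of sample 0
rng = np.random.default_rng(1)
z = rng.standard_normal((3, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
loss = weighted_supcon_loss(z, labels=np.array([0, 1, 0]),
                            enhanced_of=np.array([-1, -1, 0]))
print(f"loss = {loss:.3f}")
```

Minimizing each negative log-ratio pulls the anchor toward its enhanced and same-label samples and pushes it away from the differently-labeled samples in the denominator, which matches the stated purpose of the loss in claim 1.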
8. An EEG signal classification apparatus based on supervised contrast learning, the apparatus comprising:
the input unit is used for inputting the EEG signals into a pre-trained feature extraction model to perform feature extraction so as to obtain EEG features;
a classification unit for classifying the extracted EEG features by a classifier;
the training process of the feature extraction model comprises the following steps:
acquiring a labeled EEG dataset;
intercepting EEG data samples according to the label information of the EEG data, wherein EEG data samples with the same label are mutually positive samples, and EEG data samples with different labels are mutually negative samples;
carrying out data enhancement on the intercepted EEG data samples based on a sliding-window method to obtain enhanced samples;
taking the intercepted EEG data samples and the enhanced samples as a training set; training the feature extraction model with the training set and setting a loss function, wherein the loss function is used for gradually increasing the distance between feature vectors corresponding to negative samples, gradually decreasing the distance between feature vectors corresponding to positive samples, and further gradually decreasing the distance between the feature vectors corresponding to the enhanced samples among the positive samples and the feature vectors corresponding to the other positive samples.
9. An electronic device comprising a memory and a processor, wherein the memory is coupled to the processor; wherein the memory is for storing program data and the processor is for executing the program data to implement the supervised contrast learning based EEG signal classification method of any of claims 1-7.
10. A computer readable storage medium having stored thereon a computer program, wherein the program when executed by a processor implements the supervised contrast learning based EEG signal classification method of any of claims 1-7.
CN202311315334.3A 2023-10-12 2023-10-12 EEG signal classification method and device based on supervised comparison learning Active CN117056788B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311315334.3A CN117056788B (en) 2023-10-12 2023-10-12 EEG signal classification method and device based on supervised comparison learning


Publications (2)

Publication Number Publication Date
CN117056788A true CN117056788A (en) 2023-11-14
CN117056788B CN117056788B (en) 2023-12-19

Family

ID=88653974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311315334.3A Active CN117056788B (en) 2023-10-12 2023-10-12 EEG signal classification method and device based on supervised comparison learning

Country Status (1)

Country Link
CN (1) CN117056788B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699960A (en) * 2021-01-11 2021-04-23 华侨大学 Semi-supervised classification method and equipment based on deep learning and storage medium
US20210319266A1 (en) * 2020-04-13 2021-10-14 Google Llc Systems and methods for contrastive learning of visual representations
CN115878984A (en) * 2022-12-26 2023-03-31 阿里云计算有限公司 Vibration signal processing method and device based on supervised contrast learning
CN116010840A (en) * 2022-11-18 2023-04-25 西安电子科技大学 Multi-source-domain sample re-weighted EEG signal cross-device decoding method
EP4235492A1 (en) * 2022-02-28 2023-08-30 Fujitsu Limited A computer-implemented method, data processing apparatus and computer program for object detection


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Meichun: "Class-collaborative semi-supervised learning for brain-computer interface systems", Science Technology and Engineering, no. 19, pages 88 - 92 *

Also Published As

Publication number Publication date
CN117056788B (en) 2023-12-19

Similar Documents

Publication Publication Date Title
Cui et al. Automatic sleep stage classification based on convolutional neural network and fine-grained segments
Abdelhameed et al. A deep learning approach for automatic seizure detection in children with epilepsy
Hartmann et al. Automatic a-phase detection of cyclic alternating patterns in sleep using dynamic temporal information
CN110801221B (en) Sleep apnea fragment detection equipment based on unsupervised feature learning
CN109934089B (en) Automatic multi-stage epilepsia electroencephalogram signal identification method based on supervised gradient raiser
US20190142291A1 (en) System and Method for Automatic Interpretation of EEG Signals Using a Deep Learning Statistical Model
CN110353673B (en) Electroencephalogram channel selection method based on standard mutual information
Bouton et al. Focal versus distributed temporal cortex activity for speech sound category assignment
Wang et al. CAB: classifying arrhythmias based on imbalanced sensor data
Yuan et al. Wave2vec: Learning deep representations for biosignals
Taqi et al. Classification and discrimination of focal and non-focal EEG signals based on deep neural network
Billeci et al. A combined independent source separation and quality index optimization method for fetal ECG extraction from abdominal maternal leads
CN111951958A (en) Pain data evaluation method based on self-coding and related components
Li et al. Patient-specific seizure prediction from electroencephalogram signal via multi-channel feedback capsule network
CN114595725B (en) Electroencephalogram signal classification method based on addition network and supervised contrast learning
CN113925459A (en) Sleep staging method based on electroencephalogram feature fusion
Asghar et al. Semi-skipping layered gated unit and efficient network: hybrid deep feature selection method for edge computing in EEG-based emotion classification
Kumari et al. Automated visual stimuli evoked multi-channel EEG signal classification using EEGCapsNet
Malik et al. Accurate classification of heart sound signals for cardiovascular disease diagnosis by wavelet analysis and convolutional neural network: preliminary results
Tigga et al. Efficacy of novel attention-based gated recurrent units transformer for depression detection using electroencephalogram signals
Alalayah et al. Effective early detection of epileptic seizures through EEG signals using classification algorithms based on t-distributed stochastic neighbor embedding and K-means
CN117056788B (en) EEG signal classification method and device based on supervised comparison learning
CN116664956A (en) Image recognition method and system based on multi-task automatic encoder
Hassan et al. Review of EEG Signals Classification Using Machine Learning and Deep-Learning Techniques
Bahador et al. Morphology-preserving reconstruction of times series with missing data for enhancing deep learning-based classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant