CN115358260A - Electroencephalogram sleep staging method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115358260A
CN115358260A (publication) · CN202210896352.4A (application)
Authority
CN
China
Prior art keywords
sleep
electroencephalogram
classifier
subject
sleep staging
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210896352.4A
Other languages
Chinese (zh)
Inventor
李景聪
吴潮煌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Normal University
Original Assignee
South China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Normal University
Priority to CN202210896352.4A
Publication of CN115358260A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods


Abstract

The invention relates to an electroencephalogram sleep staging method and device, electronic equipment, and a storage medium. The electroencephalogram sleep staging method comprises the following steps: acquiring electroencephalogram signals to be classified; inputting the electroencephalogram signals into a trained feature extraction network to obtain feature vectors corresponding to the electroencephalogram signals; and inputting the feature vectors into a trained classifier to obtain sleep stage classification results corresponding to the electroencephalogram signals, wherein the classifier is a prototype network improved with a meta-learning algorithm. Aiming at the problem of scarce patient samples, the electroencephalogram sleep staging method can make full use of information from non-target-domain data and transfer that information to the target domain, requiring only a small amount of sample data for each category.

Description

Electroencephalogram sleep staging method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of deep learning, and in particular to an electroencephalogram sleep staging method and device, electronic equipment, and a storage medium.
Background
Brain-computer interface technology is used to collect sleep electroencephalogram signals, and analyzing and processing the collected signals yields data related to sleep quality, which is of great significance. Sleep staging is the key task in the field of sleep analysis: through it, the progress of an individual's sleep becomes apparent, and sleep quality indexes such as deep sleep duration, total sleep time, and the proportion of each sleep stage become simple to calculate. Sleep staging is therefore a valuable aid to sleep quality assessment.
Most existing sleep staging methods apply traditional deep learning algorithms to classify a subject's sleep electroencephalogram signals. However, each time a new patient is encountered, a purely deep learning approach spends a large amount of time on upfront data analysis, and for the model to work on a target individual, a large amount of that individual's data must first be collected, consuming considerable time and effort.
Disclosure of Invention
Based on this, the present invention provides an electroencephalogram sleep staging method, apparatus, electronic device and storage medium which, aiming at the problem of scarce patient samples, can make full use of information from non-target-domain data and transfer that information to the target domain, requiring only a small amount of sample data for each category.
In a first aspect, the invention provides an electroencephalogram sleep staging method, which comprises the following steps:
acquiring electroencephalogram signals to be classified;
inputting the electroencephalogram signals into a trained feature extraction network to obtain feature vectors corresponding to the electroencephalogram signals;
inputting the feature vectors into a trained classifier to obtain sleep stage classification results corresponding to the electroencephalogram signals; wherein the classifier is a prototype network improved based on a meta-learning algorithm.
Further, the feature extraction network comprises 4 convolutional layers and 2 max-pooling layers; each convolutional layer sequentially performs a convolution operation, batch normalization, and linear rectification (ReLU) activation;
the convolution kernels extract time-domain and space-domain features, the max-pooling layers reduce the number of model parameters, and a dropout layer prevents the overfitting caused by having too few samples.
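The patent specifies the layer counts (4 convolutional, 2 max-pooling) but not the kernel, stride, or pooling sizes. The sketch below traces how the temporal dimension of one 30 s, 100 Hz epoch (3000 samples) shrinks through one plausible layer ordering; all kernel, stride, and pool values here are illustrative assumptions, not taken from the patent.

```python
def conv1d_out(n, kernel, stride=1, padding=0):
    """Output length of a 1-D convolution/pooling window (no dilation)."""
    return (n + 2 * padding - kernel) // stride + 1

def feature_extractor_out_len(n_samples, conv_kernels=(50, 8, 8, 8), pool_sizes=(8, 4)):
    """Trace the temporal length through 4 conv layers and 2 max-pooling
    layers, mirroring the conv-BN-ReLU blocks described in the text.
    Kernel/stride/pool sizes are assumptions for illustration only."""
    n = n_samples
    n = conv1d_out(n, conv_kernels[0], stride=6)            # conv1: large strided kernel
    n = conv1d_out(n, pool_sizes[0], stride=pool_sizes[0])  # max pool 1
    n = conv1d_out(n, conv_kernels[1])                      # conv2
    n = conv1d_out(n, conv_kernels[2])                      # conv3
    n = conv1d_out(n, pool_sizes[1], stride=pool_sizes[1])  # max pool 2
    n = conv1d_out(n, conv_kernels[3])                      # conv4
    return n
```

With these assumed hyperparameters, a 3000-sample epoch is reduced to a short feature sequence per filter, which is then flattened into the feature vector fed to the classifier.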
Further, the training step of the classifier comprises:
randomly extracting a subject from the training set, and randomly extracting data of the 5 sleep-stage classes from the subject to serve as the support set;
randomly extracting a subject from the training set, randomly extracting data of the 5 classes from the subject, and taking 5 samples of each class as the query set, thereby constructing a 5-way, K-shot experiment;
and constructing tasks multiple times to run experiments, performing gradient descent on the average loss to obtain the trained classifier.
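The episode-construction steps above can be sketched in code. The data container (`data_by_subject`, mapping subject id → sleep stage → epoch array) and the sampling details are illustrative assumptions; only the 5-way, K-shot structure comes from the text.

```python
import numpy as np

def sample_episode(data_by_subject, n_way=5, k_shot=2, n_query=5, rng=None):
    """Build one 5-way K-shot episode: one randomly drawn subject supplies
    the support set, another random draw supplies the query set, with
    k_shot (resp. n_query) epochs per sleep stage."""
    rng = rng or np.random.default_rng()
    ids = list(data_by_subject)
    sup = data_by_subject[ids[rng.integers(len(ids))]]   # support-set subject
    qry = data_by_subject[ids[rng.integers(len(ids))]]   # query-set subject
    xs, ys, xq, yq = [], [], [], []
    for label, stage in enumerate(sorted(sup)[:n_way]):
        i = rng.choice(len(sup[stage]), size=k_shot, replace=False)
        xs.append(sup[stage][i]); ys += [label] * k_shot
        j = rng.choice(len(qry[stage]), size=n_query, replace=False)
        xq.append(qry[stage][j]); yq += [label] * n_query
    return (np.concatenate(xs), np.array(ys),
            np.concatenate(xq), np.array(yq))
```

Averaging the loss over many such episodes before a gradient step gives the episodic training described in the last step.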
Further, the training of the classifier further comprises the following testing steps:
randomly extracting a subject from the test set, and randomly extracting data of the 5 classes from the subject to serve as the support set;
randomly extracting from the test set a subject that does not repeat the support-set subject, randomly extracting data of the 5 classes from that subject, and taking 5 samples of each class as the query set;
and constructing tasks multiple times for testing, taking the average accuracy as the final test precision.
Further, constructing the 5-way, K-shot experiment comprises:
acquiring a feature vector extracted by the trained feature extraction network;
averaging the feature vectors, and performing normalization processing;
and predicting a sleep staging result corresponding to the normalized feature vector by using softmax.
Further, the classifier includes a cosine similarity function and a Softmax distance function.
Further, the probability distribution value P_j is predicted using the following formulas:

$$P_j = \frac{\exp\left(\mathrm{sim}(u_j, q) + b\right)}{\sum_{k} \exp\left(\mathrm{sim}(u_k, q) + b\right)}$$

$$\mathrm{sim}(u, q) = \frac{u^{T} q}{\|u\|\,\|q\|}$$

where P_j is the output of the softmax classifier for class j, sim is the cosine similarity function, u represents a true value, q represents a predicted value, b is a bias error, and T denotes transposition.
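A minimal numerical sketch of this cosine-similarity softmax classifier follows; the scalar bias b and the prototype/query vector shapes are assumptions for illustration.

```python
import numpy as np

def cosine_sim(u, q):
    """sim(u, q) = u^T q / (||u|| * ||q||)."""
    return float(u @ q / (np.linalg.norm(u) * np.linalg.norm(q)))

def cosine_softmax(prototypes, q, b=0.0):
    """P_j = exp(sim(u_j, q) + b) / sum_k exp(sim(u_k, q) + b)."""
    s = np.array([cosine_sim(u, q) + b for u in prototypes])
    e = np.exp(s - s.max())  # shift for numerical stability
    return e / e.sum()
```

The returned vector sums to 1 and its argmax gives the predicted sleep stage for the query feature q.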
In a second aspect, the present invention further provides an electroencephalogram sleep staging device, including:
the electroencephalogram signal acquisition module is used for acquiring electroencephalogram signals to be classified;
the feature vector extraction module is used for inputting the electroencephalogram signals into a trained feature extraction network to obtain feature vectors corresponding to the electroencephalogram signals;
the sleep stage module is used for inputting the feature vectors into a trained classifier to obtain a sleep stage classification result corresponding to the electroencephalogram signal; wherein the classifier is a prototype network improved based on a meta-learning algorithm.
In a third aspect, the present invention also provides an electronic device, including:
at least one memory and at least one processor;
the memory for storing one or more programs;
the one or more programs, when executed by the at least one processor, cause the at least one processor to implement the steps of the electroencephalogram sleep staging method according to any one of the first aspect of the present invention.
In a fourth aspect, the present invention also provides a computer-readable storage medium,
the computer readable storage medium stores a computer program which, when executed by a processor, implements the steps of a method of brain electrical sleep staging according to any of the first aspects of the invention.
The electroencephalogram sleep staging method, device, electronic equipment, and storage medium provided by the invention use a prototype network so that, after the deep learning stage, the electroencephalograms of other subjects can be recognized across subjects; the network can recognize new classes never seen during training, realizing cross-subject electroencephalogram sleep staging.
Aiming at the problem of scarce patient samples, a prototype network (prototypical networks) can make full use of information from non-target-domain data and transfer that information to the target domain, requiring only a small amount of sample data for each category. The prototype network maps the sample data of each class into a space and extracts their "mean" to serve as the prototype of that class. Owing to inter-individual differences, different individuals exhibit different sleep electroencephalogram data.
The problem of how a prototype network predicts a new category is thus converted into the problem of how it predicts a new subject, realizing cross-subject sleep recognition; this saves time and effort and improves the accuracy of sleep staging.
For a better understanding and practice, the invention is described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a schematic diagram of a method for staging sleep of an electroencephalogram according to the present invention;
FIG. 2 is a schematic diagram of a feature extraction network used in one embodiment of the present invention;
FIG. 3 is a schematic diagram of a pre-training flow of a feature extraction network used in one embodiment of the present invention;
FIG. 4 is a detailed flow chart of the present invention when support set is (5-way 2-shot) training in one embodiment;
FIG. 5 is a flow chart illustrating training a classifier according to an embodiment of the present invention;
FIG. 6 is a flow chart illustrating the testing of classifiers in one embodiment of the invention;
FIG. 7 is a diagram illustrating the details of a classifier used in one embodiment of the present invention;
FIG. 8 is a graphical illustration of the correlation of accuracy to way number in a meta-learning algorithm used in one embodiment of the present invention;
FIG. 9 is a diagram illustrating a correlation between a shot value and an accuracy in a meta-learning algorithm used in an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electroencephalogram sleep staging device provided by the invention.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that the embodiments described are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the embodiments in the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the present application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the claims that follow. In the description of the present application, it is to be understood that the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not necessarily used to describe a particular order or sequence, nor are they to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as the case may be.
In addition, in the description of the present application, "a plurality" means two or more unless otherwise specified. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
To solve the problems in the background art, an embodiment of the present application provides an electroencephalogram sleep staging method; as shown in fig. 1, the method includes the following steps:
s01: and acquiring the electroencephalogram signals to be classified.
S02: and inputting the electroencephalogram signals into a trained feature extraction network to obtain feature vectors corresponding to the electroencephalogram signals.
In a preferred embodiment, as shown in fig. 2 and 3, the feature extraction network comprises 4 convolutional layers and 2 max pooling layers; each convolutional layer sequentially performs convolution operation, batch normalization and linear rectification function activation.
The convolution kernel is used for extracting time domain and space domain features, the maximum pooling layer is used for reducing model parameters, and the dropout layer is used for preventing overfitting caused by too few samples.
The training dataset for the feature extraction network uses a Sleep-EDF public dataset.
Specifically, the Sleep-EDF recordings were segmented into 30-second samples and manually annotated by experts according to the R&K manual. The model was evaluated and tested using the Fpz-Cz and Pz-Oz electroencephalogram channels provided in the PSG recordings, at a sampling rate of 100 Hz. Each recording begins and ends with a long awake period (W). Only the 30 minutes of wake immediately before and after sleep are retained: since the interest is in sleep staging, a large amount of sleep-independent wake (W) data is unnecessary, and this trimming avoids including a large number of wake epochs when selecting the sleep period.
For all datasets, epochs marked as movement or unknown at the beginning and end of each sleep recording were excluded, because they do not belong to the five sleep stages; this performs artifact removal. Datasets scored according to the R&K manual were converted to the AASM convention by merging the N3 and N4 stages into a single stage N3, to facilitate comparison between datasets [9].
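As a hedged sketch of the wake-trimming described above, the following keeps only a 30-minute margin (60 epochs of 30 s) of wake around the sleep period; the stage encoding and the helper name are assumptions for illustration.

```python
import numpy as np

def trim_wake(stages, wake_label=0, margin_epochs=60):
    """Return (start, end) epoch indices keeping 30 min (60 x 30 s epochs)
    of wake before sleep onset and after the final sleep epoch.
    `stages` is a per-epoch label array; wake encoded as `wake_label`."""
    sleep_idx = np.where(stages != wake_label)[0]   # epochs that are not wake
    start = max(int(sleep_idx[0]) - margin_epochs, 0)
    end = min(int(sleep_idx[-1]) + margin_epochs + 1, len(stages))
    return int(start), int(end)
```

Slicing the recording to `stages[start:end]` then discards the long wake stretches at both ends.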
The sleep electroencephalogram recordings of 153 healthy subjects (SC) are divided into a training set of 100 subjects, a test set of 50 subjects, and a validation set of 3 subjects. After artifact processing of the training and test sets, samples are randomly selected from the 100 training subjects and the 50 test subjects: each sleep sample is a 30 s time window (one epoch) from the Fpz-Cz and Pz-Oz channels, and 10 epochs are randomly selected from each of the W, N1, N2, N3 and R stages, so that the resulting training set has shape (10000, 30 × 100, 2) and the test set has shape (5000, 30 × 50, 2).
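The 30 s windowing at 100 Hz over the two channels (Fpz-Cz, Pz-Oz) can be sketched as follows; the array layout (samples × channels) is an assumption.

```python
import numpy as np

def segment_epochs(recording, fs=100, epoch_sec=30):
    """Cut a continuous 2-channel recording (shape [n_samples, 2]) into
    non-overlapping 30 s epochs of 30 * 100 = 3000 samples each, as the
    text describes; any trailing partial epoch is dropped."""
    epoch_len = fs * epoch_sec
    n_epochs = recording.shape[0] // epoch_len
    return recording[:n_epochs * epoch_len].reshape(n_epochs, epoch_len, -1)
```

Each returned epoch is one 3000-sample, 2-channel sample of the kind counted in the dataset shapes above.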
S03: inputting the feature vectors into a trained classifier to obtain sleep stage classification results corresponding to the electroencephalogram signals; wherein the classifier is a prototype network improved based on a meta-learning algorithm.
As shown in fig. 4-7, the training process of the classifier includes:
s11: randomly extracting a subject from the training set, and randomly extracting 5 classes of data in the sleep period from the subject, wherein the 5 classes are respectively used as support sets.
In a specific embodiment, 100 subjects of the training set and 50 subjects of the test set (15000, 30 x 150, 2) samples are included.
S12: randomly extracting a subject from the training set, randomly extracting 5 types of data from the subject, and taking 5 of each type as a query set to construct a 5-way and K-shot experiment.
Specifically, when the number of samples in each class is 2, a 5-way, 2-shot experiment is constructed.
S13: and constructing tasks for multiple times to carry out experiments, and taking the average loss to carry out gradient descent to obtain the trained classifier.
Here, feature vectors are extracted with the pre-trained neural network, averaged, and normalized; prediction analysis is performed using softmax. Tasks are constructed multiple times, and the mean loss is then used for gradient descent.
A testing phase is also included.
S14: randomly extracting a subject from the test set, and randomly extracting 5 types of data from the subject, wherein each type is used as a support set;
S15: randomly extracting from the test set a subject that does not repeat the support-set subject, randomly extracting data of the 5 classes from that subject, and taking 5 samples of each class as the query set;
s16: and (5) constructing tasks for multiple times to test, and taking the average accuracy as the final test result precision.
During testing, one subject is randomly selected from the test set, and data of the 5 classes are randomly selected from that subject as the support set (support_set); another subject, not repeating the support-set subject, is then randomly selected from the test set, data of the 5 classes are randomly extracted from it, and 5 samples of each class are taken as the query set (query_set). Tasks are constructed multiple times for testing, and the average accuracy is taken as the final test precision.
In one embodiment of the meta-learning algorithm, the way and shot values are limited by the number of classes and samples, with trends approximately as shown in figs. 8 and 9: prediction accuracy decreases as the way number increases, so accuracy can be improved by keeping the way number small; prediction accuracy increases as the shot value increases, so the shot value should be raised as far as computational conditions permit. Since electroencephalogram sleep staging has only five categories in total (the W, N1, N2, N3 and R stages), the way number is fixed at 5 while the shot number can be changed as required, so the accuracy of this few-shot prediction should be comparatively high.
Recent literature indicates that adding fine-tuning can significantly improve the accuracy of predicting the probability distribution value P_j. Based on this idea, a cosine similarity function (cosine similarity) and a softmax classifier (Softmax Classifier) are added during fine-tuning, giving

$$P_j = \frac{\exp\left(\mathrm{sim}(u_j, q) + b\right)}{\sum_{k} \exp\left(\mathrm{sim}(u_k, q) + b\right)}$$

where

$$\mathrm{sim}(u, q) = \frac{u^{T} q}{\|u\|\,\|q\|}$$

In the softmax classifier, sim is the cosine similarity function, u represents a true value, q represents a predicted value, b is a bias error, and T denotes transposition.
The softmax distance function uses the prototypical networks (Prototypical Networks) algorithm proposed by Snell et al. In few-shot learning, a small support set of N labeled examples $S = \{(x_1, y_1), \ldots, (x_N, y_N)\}$ is given, where $x_i \in \mathbb{R}^D$ is the D-dimensional feature vector of an example and $y_i \in \{1, \ldots, K\}$ is the corresponding label; $S_k$ denotes the set of examples labeled with class k.
The prototype network computes an M-dimensional representation $c_k \in \mathbb{R}^M$, the prototype of each class, through an embedding function $f_\phi : \mathbb{R}^D \to \mathbb{R}^M$ with learnable parameters $\phi$. Each prototype is the mean vector of the embedded support points belonging to its class:

$$c_k = \frac{1}{|S_k|} \sum_{(x_i, y_i) \in S_k} f_\phi(x_i)$$

where $S_k$ is the set of examples labeled with class k, $c_k$ is the prototype, $(x_i, y_i)$ are the samples in the example set, and $f_\phi$ is the embedding function with learnable parameters.
Given a distance function $d : \mathbb{R}^M \times \mathbb{R}^M \to [0, +\infty)$, the prototype network produces a distribution over classes for a query point x based on a softmax over the distances to the prototypes in the embedding space:

$$p_\phi(y = k \mid x) = \frac{\exp\left(-d\left(f_\phi(x), c_k\right)\right)}{\sum_{k'} \exp\left(-d\left(f_\phi(x), c_{k'}\right)\right)}$$

where $c_k$ is the prototype of the k-th class, x is a query sample, and $p_\phi$ gives the probability of the sample belonging to each class.
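The two prototypical-network equations of Snell et al. (class-mean prototypes and the distance softmax) reduce to a few lines of numpy; the squared Euclidean distance follows the cited paper, while the embedding shapes are illustrative.

```python
import numpy as np

def prototypes(embeddings, labels, n_classes):
    """c_k: mean of the embedded support points f_phi(x_i) of each class k."""
    return np.stack([embeddings[labels == k].mean(axis=0)
                     for k in range(n_classes)])

def p_phi(query_emb, protos):
    """p_phi(y=k|x): softmax over -d(f_phi(x), c_k), d squared Euclidean."""
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    z = -d
    z -= z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)  # (n_query, n_classes)
```

Each query row of `p_phi`'s output is a probability distribution over the five sleep-stage classes, peaking at the nearest prototype.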
Owing to inter-individual differences, different individuals exhibit different sleep electroencephalogram data. During training, each individual is treated as a category; the data of the five sleep stages of a single individual are trained, and after feature vectors are extracted by the deep neural network, followed by normalization, softmax, and parameter tuning, a few-shot staging network capable of classifying the five sleep stages is obtained. Electroencephalogram signals of the five sleep stages can thus be staged even when a new person is encountered and few samples are available — a few-shot learning algorithm that accurately performs sleep staging for different individuals with only a small number of samples.
Using Euclidean distance as the distance measure, the network is trained so that each sample lies closest to the prototype of its own category and farther from the prototypes of other categories. During testing, softmax is applied to the distances from the test data to the prototype of each category to decide the test data's class label. This yields a simple few-shot learning method that represents each class by examples in a representation space learned through a neural network; the network is trained with episodic training and performs well in few-shot sleep staging.
The embodiment of the present application further provides an electroencephalogram sleep staging device, as shown in fig. 10, the electroencephalogram sleep staging device 400 includes:
an electroencephalogram signal acquisition module 401, configured to acquire an electroencephalogram signal to be classified;
a feature vector extraction module 402, configured to input the electroencephalogram signal into a trained feature extraction network, to obtain a feature vector corresponding to the electroencephalogram signal;
a sleep stage module 403, configured to input the feature vector into a trained classifier, so as to obtain a sleep stage classification result corresponding to the electroencephalogram signal; wherein the classifier is a prototype network improved based on a meta-learning algorithm.
Preferably, the feature extraction network comprises 4 convolutional layers and 2 max pooling layers; each convolution layer sequentially executes convolution operation, batch normalization and linear rectification function activation;
the convolution kernel is used for extracting time domain and space domain features, the maximum pooling layer is used for reducing model parameters, and the dropout layer is used for preventing overfitting caused by too few samples.
Preferably, the training step of the classifier includes:
randomly extracting a subject from the training set, and randomly extracting 5 classes of data during sleep from the subject, wherein the 5 classes are respectively used as support sets;
randomly extracting a subject from the training set, randomly extracting 5 types of data from the subject, and constructing 5-way and K-shot experiments by taking 5 data of each type as a query set;
and constructing tasks for multiple times to perform experiments, and taking the average loss to perform gradient descent to obtain the trained classifier.
Preferably, the training of the classifier further comprises the following testing steps:
randomly extracting a subject from the test set, and randomly extracting 5 classes of data from the subject, wherein each class is used as a support set;
randomly extracting from the test set a subject that does not repeat the support-set subject, randomly extracting data of the 5 classes from that subject, and taking 5 samples of each class as the query set;
and constructing tasks for multiple times for testing, and taking the average accuracy as the final test result precision.
Preferably, a 5-way, K-shot experiment is constructed comprising:
acquiring a feature vector extracted by the trained feature extraction network;
averaging the feature vectors, and carrying out normalization processing;
and predicting a sleep staging result corresponding to the normalized feature vector by using softmax.
Preferably, the classifier includes a cosine similarity function and a Softmax distance function.
Preferably, the probability distribution value P_j is predicted using the following formulas:

$$P_j = \frac{\exp\left(\mathrm{sim}(u_j, q) + b\right)}{\sum_{k} \exp\left(\mathrm{sim}(u_k, q) + b\right)}$$

$$\mathrm{sim}(u, q) = \frac{u^{T} q}{\|u\|\,\|q\|}$$

where P_j is the output of the softmax classifier for class j, sim is the cosine similarity function, u represents a true value, q represents a predicted value, b is a bias error, and T denotes transposition.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides an electronic device, including:
at least one memory and at least one processor;
the memory for storing one or more programs;
the one or more programs, when executed by the at least one processor, cause the at least one processor to implement the steps of the electroencephalogram sleep staging method as previously described.
For the apparatus embodiment, since it basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment for relevant points. The above-described device embodiments are merely illustrative, wherein the components described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the disclosure. One of ordinary skill in the art can understand and implement it without inventive effort.
Embodiments of the present application also provide a computer-readable storage medium,
the computer readable storage medium stores a computer program which, when executed by a processor, implements the steps of a method of brain electrical sleep staging as described above.
Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can store information accessible by a computing device.
The electroencephalogram sleep stage method, the electroencephalogram sleep stage device, the electronic equipment and the storage medium provided by the invention have the advantages that the prototype network is used, other tested electroencephalograms can be identified in a cross-test mode after a deep learning algorithm is carried out, the network can identify new classes which are never seen in a training process, and the cross-test electroencephalogram sleep stage is realized.
To address the problem of scarce patient samples, a prototype network (prototypical network) can make full use of information from non-target-domain data and transfer that information to the target domain, requiring only a small amount of sample data per category. The prototype network maps the sample data of each class into an embedding space and takes their mean as the prototype of that class. Owing to inter-individual differences, sleep electroencephalogram data differ between individuals.
The problem of how the prototype network predicts a new category is thus converted into the problem of how the prototype network predicts a new subject, realizing cross-subject sleep recognition. This saves energy consumption and improves the accuracy of sleep staging.
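The prototype-based classification described above can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the feature dimension, shot count, and all function names are hypothetical, and the features are assumed to have already been produced by the feature extraction network.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over similarity scores
    e = np.exp(x - x.max())
    return e / e.sum()

def class_prototypes(feats, labels, n_classes):
    # prototype of each class = mean of that class's support-set features
    return np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])

def cosine_sim(query, protos):
    # cosine similarity of one query vector to every class prototype
    return (protos @ query) / (np.linalg.norm(protos, axis=1)
                               * np.linalg.norm(query) + 1e-8)

def predict_stage(query, protos):
    # probability distribution over the 5 sleep stages
    return softmax(cosine_sim(query, protos))

# toy episode: 5 stages (W, N1, N2, N3, REM), K=3 support samples, 8-dim features
rng = np.random.default_rng(0)
support = rng.normal(size=(15, 8))      # 5 classes x 3 shots
labels = np.repeat(np.arange(5), 3)
protos = class_prototypes(support, labels, 5)
probs = predict_stage(support[0], protos)
```

Because a new subject only contributes a handful of support samples per stage, the same code works unchanged when the "classes" come from a subject never seen in training, which is the cross-subject idea described above.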
The above-mentioned embodiments express only several embodiments of the present invention, and their description is relatively specific and detailed, but should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, all of which fall within the scope of the present invention.

Claims (10)

1. An electroencephalogram sleep staging method, characterized by comprising the following steps:
acquiring electroencephalogram signals to be classified;
inputting the electroencephalogram signals into a trained feature extraction network to obtain feature vectors corresponding to the electroencephalogram signals;
inputting the feature vectors into a trained classifier to obtain sleep stage classification results corresponding to the electroencephalogram signals; wherein the classifier is a prototype network improved based on a meta-learning algorithm.
2. The electroencephalogram sleep staging method according to claim 1, characterized in that:
the feature extraction network comprises 4 convolutional layers and 2 maximum pooling layers; each convolutional layer sequentially performs a convolution operation, batch normalization, and linear rectification (ReLU) activation;
the convolution kernels are used for extracting time-domain and spatial-domain features, the maximum pooling layers are used for reducing model parameters, and a dropout layer is used for preventing overfitting caused by too few samples.
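As an illustration of the kind of layer stack recited above, the following single-channel NumPy sketch shows one convolution + batch-normalization + ReLU block followed by max pooling. The signal length, sampling rate, and kernel width are assumptions for illustration; the claim does not disclose these hyperparameters.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv1d(x, kernel):
    # valid 1-D convolution of an EEG epoch with one learned kernel
    return sliding_window_view(x, kernel.size) @ kernel

def batch_norm(x, eps=1e-5):
    # normalize activations to zero mean, unit variance
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def relu(x):
    # linear rectification activation
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    # non-overlapping max pooling to reduce the feature length
    n = (x.size // size) * size
    return x[:n].reshape(-1, size).max(axis=1)

rng = np.random.default_rng(1)
eeg = rng.normal(size=3000)   # one 30-second epoch at 100 Hz (illustrative)
kernel = rng.normal(size=50)  # hypothetical kernel width
feat = max_pool(relu(batch_norm(conv1d(eeg, kernel))))
```

In the claimed network this block is repeated (4 convolutional layers, 2 pooling layers, plus dropout at training time) and run with many kernels per layer rather than the single kernel shown here.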
3. The electroencephalogram sleep staging method according to claim 1, wherein the training step of the classifier includes:
randomly extracting a subject from the training set, and randomly extracting data of the 5 sleep-stage classes from that subject, the 5 classes respectively serving as the support set;
randomly extracting a subject from the training set, randomly extracting data of the 5 classes from that subject, taking 5 samples of each class as the query set, and constructing a 5-way, K-shot experiment;
and constructing tasks multiple times for experiments, and performing gradient descent on the average loss to obtain the trained classifier.
4. The electroencephalogram sleep staging method according to claim 3, wherein the training of the classifier further comprises the following testing steps:
randomly extracting a subject from the test set, and randomly extracting data of the 5 classes from that subject, each class serving as a support set;
randomly extracting from the test set a subject that does not repeat the subject from which the support set was extracted, randomly extracting data of the 5 classes from that subject, and taking 5 samples of each class as the query set;
and constructing tasks multiple times for testing, and taking the average accuracy as the final test precision.
5. The method of claim 3, wherein constructing a 5-way, K-shot experiment comprises:
acquiring a feature vector extracted by the trained feature extraction network;
averaging the feature vectors, and performing normalization processing;
and predicting a sleep staging result corresponding to the normalized feature vector by using softmax.
6. The electroencephalogram sleep staging method according to claim 5, characterized in that:
the classifier includes a cosine similarity function and a Softmax distance function.
7. The electroencephalogram sleep staging method according to claim 6, characterized in that:
the probability distribution value P_j is predicted using the following formulas:
sim(u, q) = u^T q / (||u|| · ||q||)
P_j = exp(sim(u, q_j) + b) / Σ_k exp(sim(u, q_k) + b)
where P_j is the output of the softmax classifier, sim is the cosine similarity function, u denotes the true value, q denotes the predicted value, b denotes the error term, and T denotes transposition.
8. An electroencephalogram sleep staging device, comprising:
the electroencephalogram signal acquisition module is used for acquiring electroencephalogram signals to be classified;
the feature vector extraction module is used for inputting the electroencephalogram signals into a trained feature extraction network to obtain feature vectors corresponding to the electroencephalogram signals;
the sleep stage module is used for inputting the feature vectors into a trained classifier to obtain a sleep stage classification result corresponding to the electroencephalogram signals; wherein the classifier is a prototype network improved based on a meta-learning algorithm.
9. An electronic device, comprising:
at least one memory and at least one processor;
the memory for storing one or more programs;
the one or more programs, when executed by the at least one processor, cause the at least one processor to implement the steps of the electroencephalogram sleep staging method according to any one of claims 1 to 7.
10. A computer-readable storage medium characterized by:
the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the electroencephalogram sleep staging method according to any one of claims 1 to 7.
CN202210896352.4A 2022-07-27 2022-07-27 Electroencephalogram sleep staging method and device, electronic equipment and storage medium Pending CN115358260A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210896352.4A CN115358260A (en) 2022-07-27 2022-07-27 Electroencephalogram sleep staging method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115358260A true CN115358260A (en) 2022-11-18

Family

ID=84031851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210896352.4A Pending CN115358260A (en) 2022-07-27 2022-07-27 Electroencephalogram sleep staging method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115358260A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115700104A (en) * 2022-12-30 2023-02-07 中国科学技术大学 Self-interpretable electroencephalogram signal classification method based on multi-scale prototype learning
CN115700104B (en) * 2022-12-30 2023-04-25 中国科学技术大学 Self-interpretable electroencephalogram signal classification method based on multi-scale prototype learning

Similar Documents

Publication Publication Date Title
He et al. Deep learning based approach for bearing fault diagnosis
Colonna et al. Automatic classification of anuran sounds using convolutional neural networks
CN111134664B (en) Epileptic discharge identification method and system based on capsule network and storage medium
CN113011239B (en) Motor imagery classification method based on optimal narrow-band feature fusion
US11494689B2 (en) Method and device for improved classification
Sejuti et al. An efficient method to classify brain tumor using CNN and SVM
CN110705722A (en) Diagnostic model for industrial equipment fault diagnosis and construction method and application thereof
CN114118165A (en) Multi-modal emotion data prediction method and device based on electroencephalogram and related medium
CN110659682A (en) Data classification method based on MCWD-KSMOTE-AdaBoost-DenseNet algorithm
CN115414051A (en) Emotion classification and recognition method of electroencephalogram signal self-adaptive window
Yang et al. Cross-domain missingness-aware time-series adaptation with similarity distillation in medical applications
CN115358260A (en) Electroencephalogram sleep staging method and device, electronic equipment and storage medium
KR101789078B1 (en) Hidden discriminative features extraction for supervised high-order time series modeling
CN114926299A (en) Prediction method for predicting vehicle accident risk based on big data analysis
Raychaudhuri et al. Exploiting temporal coherence for self-supervised one-shot video re-identification
CN108805181B (en) Image classification device and method based on multi-classification model
CN114358279A (en) Image recognition network model pruning method, device, equipment and storage medium
CN112861881A (en) Honeycomb lung recognition method based on improved MobileNet model
CN116612335A (en) Few-sample fine-granularity image classification method based on contrast learning
Acır et al. Automatic recognition of sleep spindles in EEG via radial basis support vector machine based on a modified feature selection algorithm
CN112733727B (en) Electroencephalogram consciousness dynamic classification method based on linear analysis and feature decision fusion
Manimegalai et al. Deep Learning Based Approach for Identification of Parkinson’s Syndrome
Bourjandi et al. Combined deep centralized coordinate learning and hybrid loss for human activity recognition
CN110174947A (en) The Mental imagery task recognition method to be cooperated based on fractals and probability
CN109190658A (en) Video degree of awakening classification method, device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination