CN116269212A - Multi-mode sleep stage prediction method based on deep learning - Google Patents

Multi-mode sleep stage prediction method based on deep learning

Info

Publication number
CN116269212A
CN116269212A (application CN202211666812.0A)
Authority
CN
China
Prior art keywords
model
convolutional neural
signal
sleep
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211666812.0A
Other languages
Chinese (zh)
Inventor
汪梦影
许金山
林怡炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202211666812.0A priority Critical patent/CN116269212A/en
Publication of CN116269212A publication Critical patent/CN116269212A/en
Pending legal-status Critical Current

Classifications

    • A: Human necessities; A61: Medical or veterinary science, hygiene; A61B: Diagnosis, surgery, identification; A61B 5/00: Measuring for diagnostic purposes, identification of persons
    • A61B 5/4815: Sleep quality (under A61B 5/4806 Sleep evaluation, A61B 5/48 Other medical applications)
    • A61B 5/372: Analysis of electroencephalograms (under A61B 5/369 Electroencephalography [EEG], A61B 5/316 Modalities, A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body)
    • A61B 5/397: Analysis of electromyograms (under A61B 5/389 Electromyography [EMG])
    • A61B 5/398: Electrooculography [EOG], e.g. detecting nystagmus; electroretinography [ERG]
    • A61B 5/4809: Sleep detection, i.e. determining whether a subject is asleep or not
    • A61B 5/4812: Detecting sleep stages or cycles
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7267: Classification of physiological signals or data involving training the classification device (under A61B 5/7264 Classification using neural networks, statistical classifiers, expert systems or fuzzy systems; A61B 5/7235 Details of waveform analysis)
    • G06N 3/08: Learning methods (under G06N 3/02 Neural networks; G06N 3/00 Computing arrangements based on biological models; G06N: Computing arrangements based on specific computational models; G: Physics)
    • G16H 50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems (under G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; G16H: Healthcare informatics)

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Psychiatry (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physiology (AREA)
  • Evolutionary Computation (AREA)
  • Signal Processing (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Ophthalmology & Optometry (AREA)
  • Fuzzy Systems (AREA)
  • Psychology (AREA)
  • Databases & Information Systems (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Anesthesiology (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

A multi-modal sleep stage prediction method based on deep learning, comprising: step 1: acquiring the original PSG signal, dividing it into 30 s segments, and preprocessing; step 2: performing supervised pre-training of the model's two different-scale CNNs on a class-balanced dataset to prevent them from overfitting to the majority sleep stages; step 3: feeding the signal into the pre-trained large and small convolutional neural networks respectively, where representation-learning filters are trained through four convolutions and two pooling operations; step 4: fusing the features obtained from the two convolutional neural networks in step 3, feeding them into a residual learning network, and fusing them again with the output of a two-layer bidirectional LSTM module; step 5: passing the output of step 4 through a softmax layer to obtain the sleep stage predicted by the model, and training the model with the combination of the softmax function and the cross-entropy loss as the loss function; the model is evaluated on the SleepEDF dataset.

Description

Multi-mode sleep stage prediction method based on deep learning
Technical Field
The invention belongs to the technical field of bioelectric signal analysis, and particularly relates to a multi-mode sleep stage prediction method based on deep learning.
Background
Sleep is critical to a person's mental and physical health, and monitoring of sleep quality has a significant impact on medical research and practice. The sleep process is divided into different stages according to changes in the body's physiological signals during sleep, a process called sleep staging or sleep scoring. Sleep disorders are associated with many different diseases, and sleep staging, performed on 30-second epochs of an overnight polysomnogram (PSG), is used to screen, evaluate and diagnose them. Sleep stages and cycles reflect underlying neurophysiological processes, from which diagnostic markers of various sleep disorders can also be obtained.
Typically, sleep professionals determine sleep quality from the electrical activity recorded by sensors attached to different parts of the body. The set of signals from these sensors is called a polysomnogram (PSG) and consists of the electroencephalogram (EEG), electrooculogram (EOG), electromyogram (EMG) and electrocardiogram (ECG). The PSG is divided into 30-second epochs, and sleep stages are then scored manually by experts according to the accepted manuals (the R&K and AASM standards). This purely manual method, however, is time-consuming and labor-intensive.
Accurate and efficient sleep monitoring not only has great medical value but also allows individuals to self-evaluate and self-manage their sleep. Existing sleep scoring is determined manually by physicians according to the scoring manuals and is therefore inefficient.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a multi-mode sleep stage prediction method based on deep learning.
A machine can perform this task thousands of times faster than a human expert, saving clinicians thousands of hours per year and, by automating sleep scoring, enabling broader sleep assessment and diagnosis. The invention provides an automatic sleep stage scoring method based on multi-modal electrophysiological signals, which exploits deep learning's ability to learn sleep-stage scoring automatically from these signals and can better assist medical diagnosis.
To solve the above technical problems, the multi-modal sleep stage prediction method based on deep learning disclosed by the invention comprises the following specific steps:
step 1: the raw PSG signal was obtained from the dataset SleepEDF-20, including an EEG PZ-Oz signal, a horizontal EOG signal, and a under chin EMG signal. The signal is divided into 30s segments, preprocessing is carried out, and the sampling frequency of various sleep physiological electric signal data information is unified.
Step 2: supervised pre-training of two different scale Convolutional Neural Networks (CNNs) of the model using class-balanced data sets prevents overfitting to sleep stages. Two CNNs were extracted, softmax layers were superimposed and optimized with Adam optimizer. At the end of the pretraining, the softmax layer was discarded. A class balance training set is obtained by replicating a few sleep stages in the original training set so that all sleep stages have the same number of samples.
Step 3: and (3) inputting a section of 30s signal with the sampling rate of Fs into a large convolutional neural network, a small convolutional neural network and a small convolutional neural network respectively, and training a filter representing learning through four times of convolution and twice pooling for extracting time-invariant features.
Step 4: and (3) fusing the obtained features of the step (2) through the two convolutional neural networks, inputting the fused features into a residual error learning network, and fusing the fused features with the features of the two-time bidirectional LSTM module again to learn time-related features, such as stage conversion rules, namely predicting the possible next stage through the current stage.
Step 5: and (3) obtaining a sleep stage predicted by the model through the softmax layer by the output obtained in the step (3), and combining the softmax function and the cross entropy loss to train the model as a loss function of the model. The model was evaluated using a SleepEDF dataset.
The multi-modal sleep stage prediction method based on deep learning provided by the invention uses the ground-truth labels together with a multi-modal signal input to the model: the time-invariant features of the signals are first extracted by two pre-trained convolutional neural networks, fused, and fed into a residual network, which outputs the sleep-stage prediction for the signals. Training continues until the best sleep-stage prediction performance is obtained.
The advantages of the invention are as follows: using deep learning, two CNNs of different filter sizes and a bidirectional LSTM module are designed to extract the features in the multi-modal sleep signals and to learn and predict sleep stages automatically, which yields more accurate predictions than manual scoring while saving time and effort. In addition, by considering multi-scale feature interactions, the multi-modal signals achieve higher accuracy than a single-channel signal.
Drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 is a diagram of a multi-modal signal.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described clearly and completely below. The described embodiments are some, but not all, embodiments of the present invention; all other embodiments obtained by those skilled in the art without inventive effort on the basis of these embodiments fall within the scope of protection of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
The technical scheme of the invention is described in detail below.
A multi-mode sleep stage prediction method based on deep learning comprises the following steps:
step 1: the raw PSG signal is obtained from the dataset, comprising an EEG PZ-Oz signal, a horizontal EOG signal, and a under chin EMG signal. The signal is divided into 30s segments, preprocessing is carried out, and the sampling frequency of various sleep physiological electric signal data information is unified. The number of data for the different sleep stages in the dataset is shown in table 1.
TABLE 1 data categories and number of data in data set SleepEDF
[Table reproduced as an image in the original publication; not available in this text]
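The segmentation and sampling-rate unification of step 1 can be sketched as follows. This is an illustrative numpy-only sketch, not the patent's implementation; the helper names and the linear-interpolation resampler are our assumptions (a production pipeline would use a proper polyphase or FFT resampler):

```python
import numpy as np

def segment_epochs(signal, fs, epoch_sec=30):
    """Split a 1-D signal into non-overlapping 30 s epochs.

    fs is the sampling rate in Hz; trailing samples that do not
    fill a whole epoch are dropped.
    """
    samples_per_epoch = int(fs * epoch_sec)
    n_epochs = len(signal) // samples_per_epoch
    return signal[: n_epochs * samples_per_epoch].reshape(n_epochs, samples_per_epoch)

def resample_linear(signal, fs_in, fs_out):
    """Unify sampling rates via linear interpolation (a simple
    stand-in for a proper resampler)."""
    n_out = int(round(len(signal) * fs_out / fs_in))
    t_in = np.arange(len(signal)) / fs_in
    t_out = np.arange(n_out) / fs_out
    return np.interp(t_out, t_in, signal)
```

For example, 90 s of a 100 Hz EEG channel yields three epochs of 3000 samples each.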
Step 2: supervised pre-training of two different scale Convolutional Neural Networks (CNNs) of the model using class-balanced data sets prevents overfitting to sleep stages. Two CNNs were extracted, softmax layers were superimposed and optimized with Adam optimizer. At the end of the pretraining, the softmax layer was discarded. A class balance training set is obtained by replicating a few sleep stages in the original training set so that all sleep stages have the same number of samples.
Step 3: and (3) inputting a section of 30s signal with the sampling rate of Fs into a large convolutional neural network, a small convolutional neural network and a small convolutional neural network respectively, and training a filter representing learning through four times of convolution and twice pooling for extracting time-invariant features.
Step 4: and (3) fusing the obtained features of the step (3) through the two convolutional neural networks, inputting the fused features into a residual error learning network, and fusing the fused features with the features of the two-time bidirectional LSTM module again to learn time-related features, such as stage conversion rules, namely predicting the possible next stage through the current stage.
Step 5: and (3) obtaining a sleep stage predicted by the model through the softmax layer by the output obtained in the step (3), and combining the softmax function and the cross entropy loss to train the model as a loss function of the model.
Step 3, in which a 30 s signal segment with sampling rate Fs is fed into each of the pre-trained large and small convolutional neural networks and representation-learning filters are trained through four convolutions and two pooling operations to extract time-invariant features, specifically comprises the following steps:
step 3.1: each CNN consists of four convolutional layers and two max pooling layers. Each convolution layer performs three operations in turn: one-dimensional convolution, batch normalization, and ReLU activation. Each pooling layer samples the input using a maximum operation.
Assume there are N 30-second epochs of the electrical signals {x_1, ..., x_N}. The two CNNs are used to extract the i-th feature a_i from the i-th epoch x_i, as follows:

h_i^s = CNN_{θ_s}(x_i)

h_i^l = CNN_{θ_l}(x_i)

a_i = h_i^s || h_i^l

where CNN_θ(x_i) denotes the function that transforms a 30-second epoch x_i of the electrical signals into a feature vector h_i using a CNN; θ_s and θ_l are the parameters of the CNNs whose first layers have a small and a large convolution kernel, respectively; and || is the tandem (concatenation) operation that joins the outputs of the two CNNs. These concatenated features {a_1, ..., a_N} are the input to the next module.
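The per-epoch concatenation of the two branch outputs can be sketched as follows, with the two CNN branches abstracted as callables. Illustrative only: cnn_s and cnn_l stand in for the trained small-kernel and large-kernel networks:

```python
import numpy as np

def extract_features(epochs, cnn_s, cnn_l):
    """Apply both CNN branches to each 30 s epoch x_i and concatenate
    their feature vectors into a_i = h_i_s || h_i_l."""
    return np.stack([np.concatenate([cnn_s(x), cnn_l(x)]) for x in epochs])
```

If the small branch emits a 2-dim vector and the large branch a 3-dim vector, each fused feature a_i is 5-dimensional.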
Step 4, in which the features obtained from the two convolutional neural networks are fused, fed into the residual learning network, and fused again with the output of the two-layer bidirectional LSTM module to learn temporal features such as stage-transition rules (i.e., predicting the likely next stage from the current stage), specifically comprises the following steps:
step 4.1: two-layer bi-directional LSTM is used to learn time information, such as phase transition rules. The bi-directional LSTM allows the two LSTMs to independently process forward and backward input sequences. The outputs of the forward LSTM and the backward LSTM are decoupled from each other and can utilize past and future information.
Step 4.2: the features extracted in step 3 are fused with the bi-directional LSTM output using a one-pass connection such that the time information learned in the previous input sequence is added to the features extracted from the CNN. The close-up connection uses a full connection layer.
Step 4.3: assume CNN { a 1 ,…,a N Sequentially, t= … N is a time index of 30 seconds of the electrical signal map cycle, and the sequence residual learning is defined as follows:
Figure SMS_5
Figure SMS_6
Figure SMS_7
wherein LSTM represents the processing feature sequence a t A function of (2) using a value represented by θ f And theta b Parameterized two layers of LSTM for forward and backward directions, respectively; to forward LSTM and reverse LSTM
Figure SMS_8
Set to zero vector. FC denotes a function which will be a t The feature is converted into a vector which can be coupled to the output vector in the bidirectional LSTM
Figure SMS_9
And (5) adding.
Step 5, in which the output obtained in step 4 is passed through the softmax layer to obtain the sleep stage predicted by the model, the combination of the softmax function and the cross-entropy loss is used as the model's loss function, the model is trained, and the trained model is evaluated, specifically comprises the following steps:
step 5.1: the evaluation indexes of the model comprise F1-score, MF1 and ACC, and the specific calculation is as follows:
Figure SMS_10
Figure SMS_11
Figure SMS_12
Figure SMS_13
Figure SMS_14
where TP is a true positive (the sample is positive and predicted positive); FP is a false positive (the sample is predicted positive but is actually negative); FN is a false negative (the sample is predicted negative but is actually positive). TP_c is the number of true positives for class c, F1_c is the per-class F1 score of class c, C is the number of sleep stages, and N is the total number of test epochs.
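The metric definitions above can be sketched directly (a numpy sketch; the function and variable names are ours):

```python
import numpy as np

def per_class_f1(y_true, y_pred, c):
    """F1 for a single sleep stage c from TP/FP/FN counts."""
    tp = np.sum((y_pred == c) & (y_true == c))
    fp = np.sum((y_pred == c) & (y_true != c))
    fn = np.sum((y_pred != c) & (y_true == c))
    pre = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * pre * rec / (pre + rec) if pre + rec else 0.0

def evaluate(y_true, y_pred, n_classes):
    """ACC is the sum of per-class true positives over N; MF1 is the
    unweighted mean of the per-class F1 scores."""
    f1s = [per_class_f1(y_true, y_pred, c) for c in range(n_classes)]
    acc = np.mean(y_true == y_pred)
    return acc, float(np.mean(f1s)), f1s
```

MF1 weights every stage equally, so it is the more informative metric when rare stages (e.g. N1) are much less frequent than the majority stages.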
The above metrics are computed for the model and compared with models from different related works to evaluate performance; the results are shown in Table 2.
TABLE 2 Performance score Table for sleep stage predictions
[Table reproduced as an image in the original publication; not available in this text]
The technical features of the above embodiments may be combined arbitrarily; for brevity, not all possible combinations are described. However, any combination of these technical features that contains no contradiction should be considered within the scope of this description.
The above examples merely represent a few embodiments of the present application; their description is relatively specific and detailed, but they are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the spirit of the present application, and these all fall within its scope of protection. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (4)

1. A multi-mode sleep stage prediction method based on deep learning comprises the following steps:
step 1: obtaining the raw PSG signals from the SleepEDF-20 dataset, wherein the raw PSG signals comprise an EEG Pz-Oz signal, a horizontal EOG signal and a submental (chin) EMG signal; dividing the signals into 30 s segments, preprocessing them, and unifying the sampling frequencies of the various sleep physiological signals;
step 2: pre-training the model's two different-scale convolutional neural networks (CNNs) in a supervised manner on a class-balanced dataset to prevent them from overfitting to the majority sleep stages; extracting the two CNNs, stacking a softmax layer on top, and optimizing with the Adam optimizer; discarding the softmax layer at the end of pre-training; obtaining the class-balanced training set by replicating the minority sleep stages in the original training set so that all sleep stages have the same number of samples;
step 3: feeding a 30 s signal segment with sampling rate Fs into each of the pre-trained large and small convolutional neural networks, and training representation-learning filters through four convolutions and two pooling operations to extract time-invariant features;
step 4: fusing the features obtained from the two convolutional neural networks in step 3, feeding them into a residual learning network, and fusing them again with the output of a two-layer bidirectional LSTM module to learn temporal features such as stage-transition rules, i.e., predicting the likely next stage from the current stage;
step 5: passing the output obtained in step 4 through a softmax layer to obtain the sleep stage predicted by the model, and training the model with the combination of the softmax function and the cross-entropy loss as the loss function; the model is evaluated using the SleepEDF dataset.
2. A method of deep learning based multi-modal sleep stage prediction as claimed in claim 1, wherein step 3, in which a 30 s signal segment with sampling rate Fs is fed into each of the pre-trained large and small convolutional neural networks and representation-learning filters are trained through four convolutions and two pooling operations to extract time-invariant features, specifically comprises:
each CNN consists of four convolutional layers and two max pooling layers; each convolution layer performs three operations in turn: one-dimensional convolution, batch normalization and ReLU activation; each pooling layer samples the input using a maximum operation;
assume that there are N30 second periods { x } of the electrical signal pattern 1 ,…,x N -a }; we use these two CNNs to calculate the index from the ith epoch x i Extracting the ith feature a i The following is shown:
Figure FDA0004014961910000021
Figure FDA0004014961910000022
Figure FDA0004014961910000023
wherein CNN (x) i ) Is to use CNN to map 30 seconds of electric signal to epoch x i Conversion into feature vector h i Function of θ s And theta l Is the parameter of the CNN with the size convolution kernel in the first layer, respectively, || is the series operation of combining the outputs of two CNNs together; the characteristics { a } of these connections or associations 1 ,…,a N The input of the next part of the module will be.
3. A method of deep learning based multi-modal sleep stage prediction as claimed in claim 1, wherein step 4, in which the features obtained from the two convolutional neural networks are fused, fed into the residual learning network, and fused again with the output of the two-layer bidirectional LSTM module to learn temporal features such as stage-transition rules, i.e., predicting the likely next stage from the current stage, specifically comprises:
step 4.1: learning time information, such as phase transition rules, using two layers of bi-directional LSTM; bi-directional LSTM allows two LSTMs to independently process forward and backward input sequences; the outputs of the forward LSTM and the backward LSTM are not connected to each other, and can utilize past and future information;
step 4.2: using a short-circuit connection, fusing the features extracted in the step 3 with the output of the bidirectional LSTM, so that the time information learned in the previous input sequence is added into the features extracted from the CNN; the close-circuit connection uses a full connection layer;
step 4.3: assume CNN { a 1 ,…,a N Sequentially, t= … N is a time index of 30 seconds of the electrical signal map cycle, and the sequence residual learning is defined as follows:
Figure FDA0004014961910000031
Figure FDA0004014961910000032
Figure FDA0004014961910000033
wherein LSTM represents the processing feature sequence a t A function of (2) using a value represented by θ f And theta b Parameterized two layers of LSTM for forward and backward directions, respectively; to forward LSTM and reverse LSTM
Figure FDA0004014961910000034
Set to zero vector; FC denotes a function which will be a t The feature is converted into a vector which can be connected to the output vector +.>
Figure FDA0004014961910000035
And (5) adding.
4. A method of deep learning based multi-modal sleep stage prediction as claimed in claim 1, wherein step 5, in which the output obtained in step 4 is passed through a softmax layer to obtain the sleep stage predicted by the model, the combination of the softmax function and the cross-entropy loss is used as the model's loss function, the model is trained, and the trained model is evaluated, specifically comprises:
the evaluation indexes of the model comprise F1-score, MF1 and ACC, and the specific calculation is as follows:
Figure FDA0004014961910000036
Figure FDA0004014961910000041
Figure FDA0004014961910000042
Figure FDA0004014961910000043
Figure FDA0004014961910000044
where TP is a true positive (the sample is positive and predicted positive); FP is a false positive (the sample is predicted positive but is actually negative); FN is a false negative (the sample is predicted negative but is actually positive); TP_c is the number of true positives for class c, F1_c is the per-class F1 score of class c, C is the number of sleep stages, and N is the total number of test epochs.
CN202211666812.0A 2022-12-23 2022-12-23 Multi-mode sleep stage prediction method based on deep learning Pending CN116269212A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211666812.0A CN116269212A (en) 2022-12-23 2022-12-23 Multi-mode sleep stage prediction method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211666812.0A CN116269212A (en) 2022-12-23 2022-12-23 Multi-mode sleep stage prediction method based on deep learning

Publications (1)

Publication Number Publication Date
CN116269212A true CN116269212A (en) 2023-06-23

Family

ID=86793054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211666812.0A Pending CN116269212A (en) 2022-12-23 2022-12-23 Multi-mode sleep stage prediction method based on deep learning

Country Status (1)

Country Link
CN (1) CN116269212A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116807478A (en) * 2023-06-27 2023-09-29 常州大学 Method, device and equipment for detecting sleepiness starting state of driver
CN117045930A (en) * 2023-10-12 2023-11-14 北京动亮健康科技有限公司 Training method, system, improving method, equipment and medium for sleep improving model
CN117045930B (en) * 2023-10-12 2024-01-02 北京动亮健康科技有限公司 Training method, system, improving method, equipment and medium for sleep improving model

Similar Documents

Publication Publication Date Title
CN110619322A (en) Multi-lead electrocardio abnormal signal identification method and system based on multi-flow convolution cyclic neural network
Übeyli Adaptive neuro-fuzzy inference systems for automatic detection of breast cancer
Jin et al. A novel interpretable method based on dual-level attentional deep neural network for actual multilabel arrhythmia detection
CN113095302B (en) Depth model for arrhythmia classification, method and device using same
CN112932501B (en) Method for automatically identifying insomnia based on one-dimensional convolutional neural network
CN110491506A (en) Auricular fibrillation prediction model and its forecasting system
CN117598700B (en) Intelligent blood oxygen saturation detection system and method
Vallabhaneni et al. Deep learning algorithms in eeg signal decoding application: a review
Li et al. Patient-specific seizure prediction from electroencephalogram signal via multichannel feedback capsule network
CN115530847A (en) Electroencephalogram signal automatic sleep staging method based on multi-scale attention
Liang et al. Obstructive sleep apnea detection using combination of CNN and LSTM techniques
CN116269212A (en) Multi-mode sleep stage prediction method based on deep learning
CN112990270B (en) Automatic fusion method of traditional feature and depth feature
Guo et al. IEEG-TCN: a concise and robust temporal convolutional network for intracranial electroencephalogram signal identification
CN113974655A (en) Epileptic seizure prediction method based on electroencephalogram signals
CN116864140A (en) Intracardiac branch of academic or vocational study postoperative care monitoring data processing method and system thereof
CN117407748A (en) Electroencephalogram emotion recognition method based on graph convolution and attention fusion
Liu et al. Automated Machine Learning for Epileptic Seizure Detection Based on EEG Signals.
CN114129138B (en) Automatic sleep staging method based on time sequence multi-scale mixed attention model
Begawan et al. Sleep stage identification based on eeg signals using parallel convolutional neural network and recurrent neural network
Tobias et al. Android Application for Chest X-ray Health Classification From a CNN Deep Learning TensorFlow Model
CN115349821A (en) Sleep staging method and system based on multi-modal physiological signal fusion
Ren et al. Extracting and supplementing method for EEG signal in manufacturing workshop based on deep learning of time–frequency correlation
Wang et al. Sleep staging based on multi scale dual attention network
Zhang et al. Research on lung sound classification model based on dual-channel CNN-LSTM algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination