CN114707561B - PSG data automatic analysis method, device, computer equipment and storage medium - Google Patents

PSG data automatic analysis method, device, computer equipment and storage medium

Info

Publication number
CN114707561B
CN114707561B
Authority
CN
China
Prior art keywords
signal
data
feature
sleep stage
layer
Prior art date
Legal status
Active
Application number
CN202210571946.8A
Other languages
Chinese (zh)
Other versions
CN114707561A (en)
Inventor
王兴军
林国栋
李章博
Current Assignee
Dongguan Jianda Information Technology Co ltd
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Dongguan Jianda Information Technology Co ltd
Shenzhen International Graduate School of Tsinghua University
Priority date
Filing date
Publication date
Application filed by Dongguan Jianda Information Technology Co ltd, Shenzhen International Graduate School of Tsinghua University filed Critical Dongguan Jianda Information Technology Co ltd
Priority to CN202210571946.8A
Publication of CN114707561A
Application granted
Publication of CN114707561B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02 Preprocessing
    • G06F2218/04 Denoising
    • G06F2218/08 Feature extraction
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4806 Sleep evaluation
    • A61B5/4809 Sleep detection, i.e. determining whether a subject is asleep or not
    • A61B5/4812 Detecting sleep stages or cycles
    • A61B5/4815 Sleep quality
    • A61B5/4818 Sleep apnoea
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing for noise prevention, reduction or removal
    • A61B5/7225 Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Theoretical Computer Science (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Anesthesiology (AREA)
  • Power Engineering (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention discloses a PSG data automatic analysis method, a device, computer equipment and a storage medium, relating to the technical field of sleep analysis. The method comprises the following steps: S10, acquiring raw data, wherein the raw data comprise physiological signal data and sleep stage label data; S20, preprocessing the data; S30, processing the data with a sleep stage prediction module, wherein the preprocessed data are input into the sleep stage prediction module and the processing comprises extracting features from the input data with a feature extraction module, fusing the extracted features with a feature fusion module to obtain a feature vector, fusing the feature vector with a coding vector to obtain a new feature vector, and inputting the new feature vector into the Transformer module to obtain the sleep stage result of the current frame. The beneficial effects of the invention are that it improves the efficiency with which physicians analyze a patient's overnight PSG signals and thereby improves diagnostic efficiency.

Description

PSG data automatic analysis method, device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of sleep analysis, in particular to a method and a device for automatically analyzing PSG data, computer equipment and a storage medium.
Background
Obstructive sleep apnea-hypopnea syndrome (OSAHS) is a major underlying disease that can lead to hypertension, diabetes, coronary heart disease, senile dementia and other conditions. Relevant studies indicate that about 936 million people worldwide suffered from OSAHS in 2019, and the number of patients in China is among the highest in the world. OSAHS imposes a heavy socioeconomic burden on China. Diagnosing OSAHS requires monitoring of the patient's nocturnal sleep. At present, polysomnography (PSG) is mainly used to monitor and analyze electroencephalogram, electrooculogram, chin electromyogram, electrocardiogram, blood oxygen, pulse wave and other signals during a patient's nighttime sleep.
Sleep staging is an important component of OSAHS diagnosis. Professional sleep analysts perform sleep staging on PSG data, which consumes a great deal of time and labor. An experienced physician can typically complete only about three OSAHS-related analyses per day, and differences between physicians can cause analysis results to vary by 5-20%. Sleep monitoring centers are concentrated mainly in developed areas, so the regional distribution of the relevant medical resources is unbalanced. At the same time, the training threshold for sleep analysis physicians is high and such specialists are in short supply, so the demand for OSAHS diagnosis cannot be met. An artificial-intelligence-based automatic sleep staging algorithm can effectively alleviate these problems.
An automatic sleep staging algorithm uses PSG-related signals to perform sleep staging on a full night of data frame by frame and needs only tens of seconds to output a complete and accurate sleep staging result for a patient, which greatly improves the efficiency of sleep staging analysis. In traditional automatic sleep staging algorithms, features are extracted mainly by Fourier transform, wavelet transform, power spectrum, approximate entropy and similar methods, and the extracted features are fed into a traditional machine learning classifier for sleep stage analysis. Such methods require specific features to be chosen manually, which is a significant limitation, and they struggle to achieve good results on large amounts of data.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a PSG data automatic analysis method, a PSG data automatic analysis device, computer equipment and a storage medium.
The technical solution adopted by the invention to solve this technical problem is a PSG data automatic analysis method comprising the following steps:
s10, acquiring raw data, wherein the raw data comprises physiological signal data and sleep stage label data;
s20, preprocessing data, wherein the preprocessing comprises filtering, denoising, resampling and multi-frame splicing of physiological signal data;
S30, processing the data with the sleep stage prediction module: the preprocessed data are input into the sleep stage prediction module, and the processing comprises extracting features from the input data with a feature extraction module, fusing the extracted features with a feature fusion module to obtain a feature vector, fusing the feature vector with a coding vector to obtain a new feature vector, and inputting the new feature vector into the Transformer module to obtain the sleep stage result of the current frame.
Further, in step S10, the physiological signal data includes an electroencephalogram signal, an electrooculogram signal, a mandibular electromyogram signal, an electrocardiographic signal, a chest strap signal, an abdominal strap signal, a pulse wave signal, a leg movement signal, a snore signal, a pulse rate signal, a blood oxygen saturation signal, and an electrocardiographic RR interval signal.
Further, in step S20, the filtering is used to remove invalid low-frequency and high-frequency information, the denoising is used to remove noise in the signals, the resampling is used to convert the data to the same fixed frequency, and the multi-frame splicing is used to splice adjacent frames of data so that they correspond to one frame of sleep staging result.
Further, in step S30, the input data includes one or more signals selected from an electroencephalogram signal, an electrooculogram signal, a mandibular electromyogram signal, an electrocardiographic signal, a chest strap signal, an abdominal strap signal, a pulse wave signal, a leg movement signal, a snoring signal, a pulse rate signal, a blood oxygen saturation level signal, and an electrocardiographic RR interval signal, and the step of extracting the features of the input data includes:
s301, extracting features from the input data with the feature extraction module, which uses a combination of a BN layer, a convolution layer and a ReLU layer followed by n serially connected group convolution modules;
s302, passing the input data through a BN layer and a 1×1 convolution layer to adjust the number of channels and fuse information among channels, and then inputting the result into a ReLU layer;
s303, obtaining a preliminary feature map and inputting it into a BN layer, a 3×1 group convolution layer and a ReLU layer for further feature extraction;
s304, inputting the resulting feature map into a BN layer, a 1×1 convolution layer and a ReLU layer to adjust the number of channels and fuse information among channels, obtaining a new feature map;
s305, splicing the new feature map with the input data and feeding the result into a pooling layer to adjust the feature map size and fuse feature information, obtaining the output of the group convolution module.
Further, in step S30, the step of performing feature fusion on the extracted features by using a feature fusion module to obtain a feature vector includes:
s306, inputting the features extracted by the feature extraction module into a pooling layer to obtain fusion features;
s307, inputting the fusion features into a Flatten layer and unfolding the fusion features into one-dimensional feature vectors;
s308, the unfolded features pass through a full connection layer to obtain feature vectors;
the step of fusing the feature vector and the coding vector to obtain a new feature vector comprises the following steps:
s309, encoding the sleep stage results of the M adjacent frames before and after the current frame to obtain a coding vector; during model training, the sleep stage labels of the M adjacent frames before and after are used, where M ranges from 0 to 100;
s310, fusing the feature vector with the coding vector to obtain the new feature vector.
Further, in step S30, the step of inputting the new feature vector into the Transformer module to obtain the sleep stage result of the current frame includes:
s311, inputting the new feature vector into the Transformer module to obtain the final sleep stage result of the current frame;
s312, because the new feature vector contains the feature information of the current frame and the sleep staging information of the M adjacent frames before and after, the Transformer module can derive the sleep staging result of the current frame from the features of the current frame and the temporal relationship with the M adjacent frames before and after.
Further, the method also comprises the step of performing label smoothing on the sleep stage label data:
the one-hot labels are converted into soft labels during training:
[label smoothing formula, rendered as an image in the original publication]
each component satisfies:
[per-component constraint, rendered as an image in the original publication]
where c is the sleep stage index represented by the label, α_i is the weight coefficient corresponding to each sleep stage, k is the total number of sleep stages, ε is the label smoothing parameter, and p_i is the value of each component of the sleep stage label.
The invention also discloses a PSG data automatic analysis device, the improvement being that the device comprises a raw data processing module, a data preprocessing module and a sleep stage prediction module;
the raw data processing module is used to acquire raw data, wherein the raw data comprise physiological signal data and sleep stage label data;
the data preprocessing module is connected with the raw data processing module and is used to filter, denoise, resample and multi-frame splice the raw data;
the sleep stage prediction module is connected with the data preprocessing module and comprises a feature extraction module, a feature fusion module and a Transformer module which are connected in sequence, wherein the feature extraction module is used to extract features from the input data, the feature fusion module is used to fuse the extracted features into a feature vector and to fuse the feature vector with a coding vector to obtain a new feature vector, and the Transformer module is used to receive the new feature vector and obtain the sleep stage result of the current frame.
Further, the physiological signal data comprises an electroencephalogram signal, an electrooculogram signal, a mandibular electromyogram signal, an electrocardiosignal, a chest strap signal, an abdominal strap signal, a pulse wave signal, a leg movement signal, a snore signal, a pulse rate signal, a blood oxygen saturation signal and an electrocardio RR interval signal.
Furthermore, the feature extraction module comprises a combination of a BN layer, a convolution layer and a ReLU layer together with n series-connected group convolution modules, through which feature extraction is performed on the input data.
Further, the feature fusion module comprises a pooling layer, a Flatten layer and a full connection layer, wherein the output of the pooling layer is unfolded into a one-dimensional fusion vector through the Flatten layer, and the unfolded one-dimensional fusion vector passes through the full connection layer to obtain a feature vector.
The invention also discloses a computer device, which comprises a memory and a processor, wherein the memory is stored with a computer program, and the improvement is that the processor realizes the steps of the PSG data automatic analysis method when executing the computer program.
The invention also discloses a computer storage medium on which a computer program is stored, the improvement of which is that the computer program realizes the steps of the PSG data automatic analysis method when being executed by a processor.
The beneficial effects of the invention are as follows: the method and the device for automatically analyzing PSG data can greatly improve the efficiency with which physicians analyze a patient's overnight PSG signals; in clinical application they can greatly reduce physicians' daily workload, significantly improve the efficiency of patient diagnosis and reduce the cost of diagnosis for patients.
Drawings
Fig. 1 is a schematic flow chart of an automatic PSG data analysis method according to the present invention.
Fig. 2 is a general framework diagram of a sleep stage prediction module in the PSG data automatic analysis method of the present invention.
Fig. 3 is a schematic diagram of a data processing flow of a group convolution module in the PSG data automatic analysis method of the present invention.
Fig. 4 is a schematic diagram of a PSG data automatic analysis apparatus according to the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
The conception, specific structure and technical effects of the present invention are described clearly and completely below in conjunction with the embodiments and the accompanying drawings, so that its objects, features and effects can be fully understood. The described embodiments are only some of the embodiments of the present invention, not all of them; other embodiments obtained by those skilled in the art without inventive effort on the basis of these embodiments fall within the protection scope of the present invention. In addition, the connection relationships referred to in this patent do not necessarily mean that components are directly connected; a better connection structure may be formed by adding or removing auxiliary connecting components according to the specific implementation. All technical features of the invention can be combined with one another provided they do not conflict.
Referring to fig. 1, the present invention discloses a method for automatically analyzing PSG data, and specifically, in this embodiment, the method includes the following steps:
s10, acquiring raw data, wherein the raw data comprises physiological signal data and sleep stage label data;
in this embodiment, the physiological signal data include an electroencephalogram signal, an electrooculogram signal, a mandibular electromyogram signal, an electrocardiogram signal, a chest strap signal, an abdominal strap signal, a pulse wave signal, a leg movement signal, a snore signal, a pulse rate signal, a blood oxygen saturation signal and an electrocardiographic RR interval signal. A sleep staging model can be trained using any one of these physiological signals or a combination of several of them; in this embodiment, the model is trained with a combination of the electroencephalogram, electrooculogram and mandibular electromyogram signals.
S20, preprocessing the data, wherein the preprocessing comprises filtering, denoising, resampling and multi-frame splicing of the physiological signal data. In this scheme, the filtering removes invalid low-frequency and high-frequency information, the denoising removes noise in the signals, the resampling converts the data to the same fixed frequency, and the multi-frame splicing joins adjacent frames of data so that they correspond to one frame of sleep staging result.
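As a purely illustrative sketch (not part of the original disclosure), the preprocessing of step S20 could look roughly like the following Python code; the 0.3-35 Hz band, 100 Hz target rate, 30 s frame length and one-frame splicing context are assumed values, since the specific filter design, denoising method and splicing width are not fixed by the text above.

    # Hypothetical preprocessing sketch for step S20: band-pass filtering,
    # resampling to one fixed rate, framing, and multi-frame splicing.
    # All numeric parameters here are illustrative assumptions.
    import numpy as np
    from scipy.signal import butter, filtfilt, resample

    def preprocess(signal, fs, target_fs=100, band=(0.3, 35.0), frame_sec=30, context=1):
        # Filtering: remove invalid low- and high-frequency information.
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, signal)
        # Resampling: convert the channel to the same fixed frequency.
        resampled = resample(filtered, int(len(filtered) * target_fs / fs))
        # Framing: cut into frames of frame_sec seconds, one staging result per frame.
        frame_len = frame_sec * target_fs
        n_frames = len(resampled) // frame_len
        frames = resampled[: n_frames * frame_len].reshape(n_frames, frame_len)
        # Multi-frame splicing: join each frame with `context` neighbours on each side
        # so that the spliced block corresponds to one frame of sleep staging result.
        spliced = [np.concatenate(frames[max(0, i - context): i + context + 1])
                   for i in range(n_frames)]
        return frames, spliced

A dedicated denoising step (for example a notch filter or wavelet denoising) could be inserted between the filtering and the resampling; the sketch omits it for brevity.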
S30, processing the data with the sleep stage prediction module: the preprocessed data are input into the sleep stage prediction module, and the processing comprises extracting features from the input data with the feature extraction module, fusing the extracted features with the feature fusion module to obtain a feature vector, fusing the feature vector with a coding vector to obtain a new feature vector, and inputting the new feature vector into the Transformer module to obtain the sleep stage result of the current frame. The coding vector is obtained by encoding the sleep stage results of the adjacent frames.
In step S30, the input data includes one or more signals selected from an electroencephalogram signal, an electrooculogram signal, a mandibular electromyogram signal, an electrocardiographic signal, a chest strap signal, an abdominal strap signal, a pulse wave signal, a leg movement signal, a snore signal, a pulse rate signal, a blood oxygen saturation signal, and an electrocardiographic RR interval signal.
More specifically, as shown in fig. 2 and 3, the step of extracting the features of the input data in step S30 includes:
s301, extracting features from the input data with the feature extraction module, which uses a combination of a BN layer, a convolution layer and a ReLU layer followed by n serially connected group convolution modules;
s302, passing the input data through a BN layer and a 1×1 convolution layer to adjust the number of channels and fuse information among channels, and then inputting the result into a ReLU layer;
s303, obtaining a preliminary feature map and inputting it into a BN layer, a 3×1 group convolution layer and a ReLU layer for further feature extraction;
s304, inputting the resulting feature map into a BN layer, a 1×1 convolution layer and a ReLU layer to adjust the number of channels and fuse information among channels, obtaining a new feature map;
s305, splicing the new feature map with the input data and feeding the result into a pooling layer to adjust the feature map size and fuse feature information, obtaining the output of the group convolution module; here the input data are feature map data already processed by the neural network.
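By way of illustration only, steps S301 to S305 could be realized in PyTorch roughly as sketched below; the channel counts, the number of groups, the pooling choice and the value of n are assumptions made for the example and are not specified by the description above.

    # Illustrative PyTorch sketch of the feature extraction module (S301) and one
    # group convolution module (S302-S305). Channel counts, group number and the
    # pooling stride are assumed values.
    import torch
    import torch.nn as nn

    class GroupConvModule(nn.Module):
        def __init__(self, channels, groups=4):
            super().__init__()
            self.block = nn.Sequential(
                # S302: BN + 1x1 convolution adjust channels and fuse channel info, then ReLU.
                nn.BatchNorm1d(channels), nn.Conv1d(channels, channels, 1), nn.ReLU(inplace=True),
                # S303: BN + 3x1 group convolution for further feature extraction, then ReLU.
                nn.BatchNorm1d(channels),
                nn.Conv1d(channels, channels, 3, padding=1, groups=groups), nn.ReLU(inplace=True),
                # S304: BN + 1x1 convolution again adjust channels and fuse channel info.
                nn.BatchNorm1d(channels), nn.Conv1d(channels, channels, 1), nn.ReLU(inplace=True),
            )
            self.pool = nn.MaxPool1d(2)  # S305: resize the spliced feature map.

        def forward(self, x):
            out = torch.cat([self.block(x), x], dim=1)  # S305: splice new feature map with input.
            return self.pool(out)

    class FeatureExtractor(nn.Module):
        def __init__(self, in_channels=3, width=32, n_blocks=3):
            super().__init__()
            layers = [nn.BatchNorm1d(in_channels),              # S301: BN + conv + ReLU stem,
                      nn.Conv1d(in_channels, width, 7, padding=3),
                      nn.ReLU(inplace=True)]
            c = width
            for _ in range(n_blocks):                           # followed by n group conv modules.
                layers.append(GroupConvModule(c))
                c *= 2                                          # splicing doubles the channel count.
            self.net = nn.Sequential(*layers)
            self.out_channels = c

        def forward(self, x):                                   # x: (batch, signals, samples)
            return self.net(x)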
Further, as shown in fig. 2, in step S30, the step of performing feature fusion on the extracted features by using a feature fusion module to obtain a feature vector includes:
s306, inputting the features extracted by the feature extraction module into a pooling layer to obtain fusion features;
s307, inputting the fusion features into a Flatten layer and unfolding the fusion features into one-dimensional feature vectors;
s308, the unfolded features pass through a full connection layer to obtain feature vectors;
The step of fusing the feature vector with the coding vector to obtain a new feature vector comprises:
s309, encoding the sleep stage results of the M adjacent frames before and after the current frame to obtain a coding vector; during model training, the sleep stage labels of the M adjacent frames before and after are used, where M ranges from 0 to 100;
s310, fusing the feature vector with the coding vector to obtain the new feature vector.
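Again purely as an illustrative sketch (the pooled size, feature dimension and the use of one-hot encoding for the neighbouring stage results are assumptions), steps S306 to S310 might look as follows in PyTorch.

    # Illustrative sketch of the feature fusion module (S306-S308) and the fusion
    # with the coding vector built from the 2*M neighbouring sleep stage results
    # (S309-S310). Dimensions and the encoding scheme are assumed values.
    import torch
    import torch.nn as nn

    class FeatureFusion(nn.Module):
        def __init__(self, in_channels, feat_dim=128, n_stages=5, m=2):
            super().__init__()
            self.n_stages = n_stages
            self.pool = nn.AdaptiveAvgPool1d(4)                # S306: pooling -> fusion features.
            self.flatten = nn.Flatten()                        # S307: unfold into a 1-D vector.
            self.fc = nn.Linear(in_channels * 4, feat_dim)     # S308: fully connected layer.

        def forward(self, feature_maps, neighbour_stages):
            # feature_maps: (batch, channels, length) from the feature extraction module.
            # neighbour_stages: (batch, 2*M) integer stage results (labels during training)
            # of the M adjacent frames before and after the current frame.
            feat = self.fc(self.flatten(self.pool(feature_maps)))          # feature vector
            code = nn.functional.one_hot(neighbour_stages, self.n_stages)  # S309: coding vector
            code = code.flatten(1).float()
            return torch.cat([feat, code], dim=1)                          # S310: new feature vector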
Further, in step S30, the step of inputting the new feature vector into the Transformer module to obtain the sleep stage result of the current frame includes:
s311, inputting the new feature vector into the Transformer module to obtain the final sleep stage result of the current frame;
s312, because the new feature vector contains the feature information of the current frame and the sleep staging information of the M adjacent frames before and after, the Transformer module can derive the sleep staging result of the current frame from the features of the current frame and the temporal relationship with the M adjacent frames before and after.
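The patent does not describe the internal structure of the Transformer module, so the following is only a hedged sketch of one plausible realization: the new feature vector is split back into a current-frame feature token and one token per neighbouring-frame code, a standard Transformer encoder models their temporal relationship, and a linear head outputs the stage of the current frame. The tokenisation, depth and dimensions are assumptions.

    # Hypothetical sketch of the Transformer module (S311-S312); hyperparameters and
    # the token layout are illustrative assumptions, not taken from the patent.
    import torch
    import torch.nn as nn

    class StageTransformer(nn.Module):
        def __init__(self, feat_dim=128, n_stages=5, m=2, d_model=64, n_layers=2):
            super().__init__()
            self.feat_proj = nn.Linear(feat_dim, d_model)      # current-frame feature token
            self.code_proj = nn.Linear(n_stages, d_model)      # one token per neighbour code
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.head = nn.Linear(d_model, n_stages)
            self.m, self.n_stages = m, n_stages

        def forward(self, new_feature_vector):
            feat_dim = self.feat_proj.in_features
            feat = new_feature_vector[:, :feat_dim]
            code = new_feature_vector[:, feat_dim:]
            code_tokens = self.code_proj(code.view(code.shape[0], 2 * self.m, self.n_stages))
            tokens = torch.cat([self.feat_proj(feat).unsqueeze(1), code_tokens], dim=1)
            encoded = self.encoder(tokens)          # attends over current frame + neighbours
            return self.head(encoded[:, 0])         # sleep stage logits for the current frame

Chaining the three sketches gives the full S30 path: features from FeatureExtractor are fused with the neighbour-stage code by FeatureFusion, and StageTransformer turns the resulting new feature vector into the current-frame stage logits.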
In addition, in the above embodiment, the method further includes performing label smoothing on the sleep stage label data:
the one-hot labels are converted into soft labels during training:
[label smoothing formula, rendered as an image in the original publication]
each component satisfies:
[per-component constraint, rendered as an image in the original publication]
where c is the sleep stage index represented by the label, α_i is the weight coefficient corresponding to each sleep stage, k is the total number of sleep stages, ε is the label smoothing parameter, and p_i is the value of each component of the sleep stage label.
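The exact smoothing formulas appear only as images in the original publication, so the following Python sketch shows a conventional weighted label smoothing that is merely consistent with the symbols defined above (c, α_i, k, ε, p_i); it should be read as an assumption, not as the patented formula.

    # Hedged sketch of weighted label smoothing: the labelled stage keeps weight
    # (1 - epsilon) and the remaining epsilon is spread over the k stages in
    # proportion to the per-stage weights alpha_i. Assumed form, for illustration.
    import numpy as np

    def smooth_label(c, alphas, epsilon=0.1):
        alphas = np.asarray(alphas, dtype=float)
        alphas = alphas / alphas.sum()      # per-stage weight coefficients alpha_i
        p = epsilon * alphas                # spread epsilon over the k stages
        p[c] += 1.0 - epsilon               # the labelled stage c keeps most of the mass
        return p                            # soft label components p_i (they sum to 1)

    # Example: 5 stages (W, N1, N2, N3, REM), true stage N2 (index 2), uniform alphas.
    print(smooth_label(2, [1, 1, 1, 1, 1], epsilon=0.1))  # -> [0.02 0.02 0.92 0.02 0.02]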
With reference to fig. 4, the present invention further discloses an automatic PSG data analysis device, which includes a raw data processing module 10, a data preprocessing module 20 and a sleep stage prediction module 30. The raw data processing module 10 is configured to acquire raw data, where the raw data include physiological signal data and sleep stage label data. The data preprocessing module 20 is connected to the raw data processing module 10 and is configured to filter, denoise, resample and multi-frame splice the raw data. The sleep stage prediction module 30 is connected to the data preprocessing module 20 and includes a feature extraction module, a feature fusion module and a Transformer module connected in sequence: the feature extraction module extracts features from the input data, the feature fusion module fuses the extracted features into a feature vector and fuses the feature vector with a coding vector to obtain a new feature vector, and the Transformer module receives the new feature vector and obtains the sleep stage result of the current frame. Because the new feature vector contains the feature information of the current frame and the sleep stage information of the M adjacent frames, the Transformer module can derive the sleep stage result of the current frame from the features of the current frame and the temporal relationship with the adjacent frames.
In this embodiment, the feature extraction module includes a BN layer, a convolution layer, a ReLU layer combination and n series-connected group convolution modules, and performs feature extraction on the input data through the BN layer, the convolution layer, the ReLU layer combination and the n series-connected group convolution modules. The feature fusion module comprises a pooling layer, a Flatten layer and a full-connection layer, wherein the output of the pooling layer is unfolded into one-dimensional fusion vectors through the Flatten layer, and the unfolded one-dimensional fusion vectors pass through the full-connection layer to obtain feature vectors.
In the above embodiment, the physiological signal data includes an electroencephalogram signal, an electrooculogram signal, a mandibular electromyogram signal, an electrocardiograph signal, a chest strap signal, an abdominal strap signal, a pulse wave signal, a leg movement signal, a snore signal, a pulse rate signal, a blood oxygen saturation signal, and an electrocardiographic RR interval signal.
In addition, the invention also discloses computer equipment which comprises a memory and a processor, wherein the memory is stored with a computer program, and the processor realizes the PSG data automatic analysis method when executing the computer program.
Furthermore, the present invention also discloses a computer storage medium, on which a computer program is stored, wherein the computer program is executed by a processor to implement the PSG data automatic analysis method as described above.
Based on the above, the method and the device for automatically analyzing PSG data provided by the invention can greatly improve the efficiency with which a physician analyzes a patient's overnight PSG signals: an analysis that originally required about three hours of manual work can be completed in about thirty seconds while matching the accuracy and consistency of a human physician.
After clinical application, physicians' daily workload can be greatly reduced, leaving them more energy for diagnostic and research work that requires more mental effort; the efficiency of patient diagnosis is greatly improved and the cost of diagnosis for patients is reduced.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (4)

1. A PSG data automatic analysis method is characterized by comprising the following steps:
s10, acquiring raw data, wherein the raw data comprises physiological signal data and sleep stage label data;
in step S10, the physiological signal data includes an electroencephalogram signal, an electrooculogram signal, a mandibular electromyogram signal, an electrocardiosignal, a chest strap signal, an abdominal strap signal, a pulse wave signal, a leg movement signal, a snore signal, a pulse rate signal, a blood oxygen saturation signal, and an electrocardiographic RR interval signal;
s20, preprocessing data, wherein the preprocessing comprises filtering, denoising, resampling and multi-frame splicing of physiological signal data;
s30, processing the data with the sleep stage prediction module: the preprocessed data are input into the sleep stage prediction module, and the processing comprises extracting features from the input data with a feature extraction module, fusing the extracted features with a feature fusion module to obtain a feature vector, fusing the feature vector with a coding vector to obtain a new feature vector, and inputting the new feature vector into the Transformer module to obtain the sleep stage result of the current frame;
in step S30, the input data includes one or more signals selected from an electroencephalogram signal, an electrooculogram signal, a mandibular electromyogram signal, an electrocardiographic signal, a chest strap signal, an abdominal strap signal, a pulse wave signal, a leg movement signal, a snoring signal, a pulse rate signal, a blood oxygen saturation signal, and an electrocardiographic RR interval signal, and the step of extracting the characteristics of the input data includes:
s301, extracting features from the input data with the feature extraction module, which uses a combination of a BN layer, a convolution layer and a ReLU layer followed by n serially connected group convolution modules;
s302, passing the input data through a BN layer and a 1×1 convolution layer to adjust the number of channels and fuse information among channels, and then inputting the result into a ReLU layer;
s303, obtaining a preliminary feature map and inputting it into a BN layer, a 3×1 group convolution layer and a ReLU layer for further feature extraction;
s304, inputting the resulting feature map into a BN layer, a 1×1 convolution layer and a ReLU layer to adjust the number of channels and fuse information among channels, obtaining a new feature map;
s305, splicing the new feature map with the input data and feeding the result into a pooling layer to adjust the feature map size and fuse feature information, obtaining the output of the group convolution module;
in step S30, the step of performing feature fusion on the extracted features by using a feature fusion module to obtain feature vectors includes:
s306, inputting the features extracted by the feature extraction module into a pooling layer to obtain fusion features;
s307, inputting the fusion features into a Flatten layer and unfolding the fusion features into one-dimensional feature vectors;
s308, the unfolded features pass through a full connection layer to obtain feature vectors;
the step of fusing the feature vector and the coding vector to obtain a new feature vector comprises the following steps:
s309, encoding the sleep stage results of the M adjacent frames before and after the current frame to obtain a coding vector; during model training, the sleep stage labels of the M adjacent frames before and after are used, where M ranges from 0 to 100;
s310, fusing the feature vector with the coding vector to obtain the new feature vector.
2. The method for automatically analyzing PSG data according to claim 1, wherein in step S20, the filtering is used to remove invalid low-frequency and high-frequency information, the denoising is used to remove noise in the signals, the resampling is used to convert the data to the same fixed frequency, and the multi-frame splicing is used to splice adjacent frames of data so that they correspond to one frame of sleep staging result.
3. The method for automatically analyzing PSG data according to claim 1, wherein the step of inputting the new feature vector into the Transformer module to obtain the sleep stage result of the current frame in step S30 comprises:
s311, inputting the new feature vector into the Transformer module to obtain the final sleep stage result of the current frame;
s312, because the new feature vector contains the feature information of the current frame and the sleep staging information of the M adjacent frames before and after, the Transformer module can derive the sleep staging result of the current frame from the features of the current frame and the temporal relationship with the M adjacent frames before and after.
4. The method for automatic analysis of PSG data according to claim 1, further comprising the step of performing label smoothing on the sleep stage label data:
the one-hot labels are converted into soft labels during training:
[label smoothing formula, rendered as an image in the original publication]
each component satisfies:
[per-component constraint, rendered as an image in the original publication]
where c is the sleep stage index represented by the label, α_i is the weight coefficient corresponding to each sleep stage, k is the total number of sleep stages, ε is the label smoothing parameter, and p_i is the value of each component of the sleep stage label.
CN202210571946.8A 2022-05-25 2022-05-25 PSG data automatic analysis method, device, computer equipment and storage medium Active CN114707561B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210571946.8A CN114707561B (en) 2022-05-25 2022-05-25 PSG data automatic analysis method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210571946.8A CN114707561B (en) 2022-05-25 2022-05-25 PSG data automatic analysis method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114707561A CN114707561A (en) 2022-07-05
CN114707561B true CN114707561B (en) 2022-09-30

Family

ID=82176452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210571946.8A Active CN114707561B (en) 2022-05-25 2022-05-25 PSG data automatic analysis method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114707561B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115137313B (en) * 2022-08-31 2022-11-18 首都医科大学附属北京同仁医院 Evaluation method and device for simultaneously aiming at sleep quality and myopia risk

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021180028A1 (en) * 2020-03-10 2021-09-16 中国科学院脑科学与智能技术卓越创新中心 Method, apparatus and device for evaluating sleep quality on basis of high-frequency electroencephalography, and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021102398A1 (en) * 2019-11-22 2021-05-27 Northwestern University Methods of sleep stage scoring with unsupervised learning and applications of same
CN111631688B (en) * 2020-06-24 2021-10-29 电子科技大学 Algorithm for automatic sleep staging
CN114399629A (en) * 2021-12-22 2022-04-26 北京沃东天骏信息技术有限公司 Training method of target detection model, and target detection method and device
CN114267444A (en) * 2021-12-23 2022-04-01 成都信息工程大学 Method for detecting obstructive apnea and night frontal epilepsy by using sleep structure

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021180028A1 (en) * 2020-03-10 2021-09-16 中国科学院脑科学与智能技术卓越创新中心 Method, apparatus and device for evaluating sleep quality on basis of high-frequency electroencephalography, and storage medium

Also Published As

Publication number Publication date
CN114707561A (en) 2022-07-05

Similar Documents

Publication Publication Date Title
CN109745033A (en) Dynamic electrocardiogram method for evaluating quality based on time-frequency two-dimensional image and machine learning
CN111202517B (en) Sleep automatic staging method, system, medium and electronic equipment
CN114707561B (en) PSG data automatic analysis method, device, computer equipment and storage medium
CN108836314A (en) A kind of ambulatory ECG analysis method and system based on network and artificial intelligence
Huang et al. Sleep stage classification for child patients using DeConvolutional Neural Network
CN106473705A (en) Electroencephalogram signal processing method and system for sleep state monitoring
CN109064437A (en) Image fusion method based on guided filtering and online dictionary learning
CN112036467A (en) Abnormal heart sound identification method and device based on multi-scale attention neural network
CN114548158B (en) Data processing method for blood sugar prediction
CN113951900A (en) Motor imagery intention recognition method based on multi-mode signals
Tripathy et al. Detection of myocardial infarction from vectorcardiogram using relevance vector machine
CN112862749A (en) Automatic identification method for bone age image after digital processing
CN115500843A (en) Sleep stage staging method based on zero sample learning and contrast learning
CN117562554A (en) Portable sleep monitoring and quality assessment method and system
CN112735480A (en) Vocal cord pathological change detection device based on neural network
Kapfo et al. LSTM based Synthesis of 12-lead ECG Signal from a Reduced Lead Set
CN117373595A (en) AI-based personalized treatment scheme generation system for internal medicine patients
CN110327034B (en) Tachycardia electrocardiogram screening method based on depth feature fusion network
Yang et al. Superimposed semantic communication for iot-based real-time ecg monitoring
CN113729723A (en) Electrocardiosignal quality analysis method and device based on ResNet-50 and transfer learning
CN115399735A (en) Multi-head attention mechanism sleep staging method based on time-frequency double-current enhancement
CN113796830A (en) Automatic sleep signal stage reliability evaluation method
Mao et al. Motion Artifact Reduction In Photoplethysmography For Reliable Signal Selection
Efe et al. Comparison of Time-Frequency Analyzes for a Sleep Staging Application with CNN
CN115063393B (en) Liver and liver tumor automatic segmentation method based on edge compensation attention

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 518000 2nd floor, building a, Tsinghua campus, Shenzhen University Town, Xili street, Nanshan District, Shenzhen City, Guangdong Province

Patentee after: Tsinghua Shenzhen International Graduate School

Patentee after: Dongguan JIANDA INFORMATION Technology Co.,Ltd.

Address before: 518000 2nd floor, building a, Tsinghua campus, Shenzhen University Town, Xili street, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: Tsinghua Shenzhen International Graduate School

Patentee before: DONGGUAN JIANDA INFORMATION TECHNOLOGY Co.,Ltd.

PP01 Preservation of patent right

Effective date of registration: 20240328

Granted publication date: 20220930