CN114998229B - Non-contact sleep monitoring method based on deep learning and multi-parameter fusion - Google Patents
Non-contact sleep monitoring method based on deep learning and multi-parameter fusion
- Publication number
- CN114998229B (application CN202210561402.3A)
- Authority
- CN
- China
- Prior art keywords
- sleep
- tester
- sleeping
- video image
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/0205—Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4806—Sleep evaluation
- A61B5/4809—Sleep detection, i.e. determining whether a subject is asleep or not
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4806—Sleep evaluation
- A61B5/4812—Detecting sleep stages or cycles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4806—Sleep evaluation
- A61B5/4815—Sleep quality
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7253—Details of waveform analysis characterised by using transforms
- A61B5/7257—Details of waveform analysis characterised by using transforms using Fourier transforms
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/024—Detecting, measuring or recording pulse rate or heart rate
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/163—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Abstract
The invention relates to a non-contact sleep monitoring system based on deep learning and multi-parameter fusion, and belongs to the fields of image processing and deep learning. The system first segments the acquired sleep video images, then builds a deep convolutional neural network that extracts and amplifies physiological signals. By setting different amplification factors in the network, the heart-rate signal of the forehead region and the eye-movement frequency of the eye region are amplified, yielding a forehead-region video with the amplified heart-rate signal and an eye-region video with the amplified eye movement. A fast Fourier transform then extracts the corresponding spectra, and the frequencies at the spectral peaks are taken as the monitored heart rate and eye-movement frequency. For the body videos captured from three positions, a deep-learning sleeping-posture monitoring network is built: the posture features extracted by the network are fed into a fully connected layer for six-way classification, the classes corresponding to six postures (supine, prone, left-side straight, left-side curled, right-side straight, and right-side curled), and the number of turn-overs is counted from switches between these postures. Finally, sleep quality is evaluated comprehensively from the monitored physiological signals. The invention offers high comfort, multi-parameter fusion, and a high degree of automation, and monitors physiological parameters such as heart rate, respiratory rate, eye-movement frequency, sleeping posture, and turn-over count in a non-contact manner.
Description
Technical Field
The invention belongs to the fields of image processing and deep learning, and specifically relates to a non-contact sleep monitoring system that achieves multi-parameter fusion by combining video image processing with a deep convolutional network.
Background Art
During sleep, functions of the human brain, muscles, eyes, heart, and respiration all change, and monitoring these changes helps in judging sleep quality. Sleep disorders generally refer to abnormalities in the amount or quality of sleep, or to clinical symptoms occurring during sleep, such as insomnia or hypersomnia, sleep-disordered breathing, and rapid-eye-movement sleep behavior disorder. Medicine has shown that long-term sleep disorders can induce a variety of diseases, so timely diagnosis and treatment of sleep disorders is of great significance to human health.
Polysomnography is regarded as the gold standard for diagnosing and treating sleep disorders. It monitors multi-channel physiological signals such as the electroencephalogram, electrocardiogram, electrooculogram, oronasal airflow, and blood oxygen saturation, and a diagnosis is made from the collected signals. During polysomnography, however, many sensors must be attached to the subject, causing considerable discomfort. Moreover, even though many parameters are monitored, physicians also weigh the subject's medical history and subjective feelings during the monitoring period, so the interpretation remains fairly subjective. With the development of deep learning, many miniaturized sleep monitoring devices such as smart pillows, mattresses, and wristbands have emerged. Smart pillows and mattresses monitor pressure changes during sleep through pressure sensors and count the subject's turn-overs; a smart wristband, when worn, can monitor the heart rate during sleep. Although such devices reduce the discomfort of wearing sensors, each monitors only a single physiological parameter, so the resulting evaluation of sleep quality is inaccurate and incomplete.
To address these problems, a non-contact sleep monitoring system based on deep learning and multi-parameter fusion is designed. For heart rate and eye-movement frequency, a deep convolutional neural network extracts and amplifies the subtle physiological signals in the sleep video while suppressing artifacts, and the amplified signals are then analyzed in the frequency domain. For sleeping posture and turn-over count, a convolutional neural network automatically extracts posture features from the video frames captured by the three cameras and classifies them into six postures; the number of turn-overs during sleep is counted from switches between different postures.
Disclosure of Invention
To overcome the discomfort that contact monitoring with polysomnography brings to the subject, the subjectivity of manual interpretation, and the single-parameter limitation of other sleep monitoring devices, the invention provides a non-contact sleep monitoring system based on deep learning and multi-parameter fusion, which monitors multiple physiological parameters, including heart rate, eye-movement frequency, sleeping posture, and turn-over count, without contact.
The technical scheme of the invention is a non-contact sleep monitoring method based on deep learning and multi-parameter fusion, comprising the following steps:
step 1: build a sleep monitoring platform, placing three cameras above, to the left of, and to the right of the tester's body to capture video of the tester during sleep;
step 2: segment the video captured by the camera above the tester's body in step 1 to obtain forehead-region and eye-region video images of the tester;
step 3: build a deep convolutional neural network for extracting and amplifying physiological signals, and use it to extract and amplify the subtle physiological signals in the video;
step 4: feed the forehead-region and eye-region videos from step 2 into the network built in step 3 to extract and amplify the heart-rate signal of the forehead region and the eye-movement signal of the eye region, outputting a forehead-region video with the amplified heart-rate signal and an eye-region video with the amplified eye movement;
step 5: separate each frame of the amplified forehead-region video from step 4 into R, G, and B channels, average the pixels within each channel, and stack the averages over time to obtain a pulse-wave signal;
step 6: apply a fast Fourier transform to the pulse-wave signal from step 5 to obtain the time-series spectrum of the human pulse wave;
step 7: analyze the spectrum from step 6 and take the frequency at the spectral peak as the heart-rate monitoring result;
step 8: stack the amplified eye-region video frames from step 4 in time order and apply a fast Fourier transform to obtain the eye-region video spectrum;
step 9: take the frequency at the peak of the spectrum from step 8 as the eye-movement-frequency monitoring result;
step 10: build a deep-learning sleeping-posture monitoring network; if the network has not been trained, go to step 11, and if it has been trained, go to step 13;
step 11: collect more than 1000 images of testers during sleep in advance with the three cameras placed above, to the left of, and to the right of the body, and manually label each image with one of the six sleeping-posture classes: supine, prone, left-side straight, left-side curled, right-side straight, and right-side curled;
step 12: feed the labeled image data from step 11 into the neural network, splitting it into training and validation sets at a ratio of 8:2, and train until the validation accuracy exceeds 95%;
step 13: input the three-view body videos of the tester from step 1 into the trained network, which outputs one of six classes corresponding to the six sleeping postures of supine, prone, left-side straight, left-side curled, right-side straight, and right-side curled;
step 14: count a switch between any two of the six postures in step 13 as one turn-over, except that switches between left-side straight and left-side curled, or between right-side straight and right-side curled, are not counted;
step 15: comprehensively evaluate the tester's sleep quality by combining the heart rate from step 7, the eye-movement frequency from step 9, the sleeping posture from step 13, and the turn-over count from step 14.
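The turn-over counting rule of step 14 can be sketched as a small function; the posture labels and helper names below are illustrative, not taken from the patent:

```python
# Count turn-overs from a sequence of per-frame posture classifications.
# A switch between any two of the six postures counts as one turn-over,
# except straight <-> curled switches on the same side (step 14).

SAME_SIDE = {
    frozenset({"left_straight", "left_curled"}),
    frozenset({"right_straight", "right_curled"}),
}

def count_turnovers(postures):
    """postures: list of labels such as 'supine', 'prone', 'left_straight', ..."""
    turnovers = 0
    for prev, cur in zip(postures, postures[1:]):
        if prev != cur and frozenset({prev, cur}) not in SAME_SIDE:
            turnovers += 1
    return turnovers

print(count_turnovers(
    ["supine", "supine", "left_straight", "left_curled", "prone", "prone"]
))  # -> 2: supine->left_straight and left_curled->prone count, the same-side switch does not
```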
Step 2 specifically comprises the following steps:
step 2.1: call the dlib library in Python to segment the region containing the face and extract the tester's face video;
step 2.2: run facial-landmark detection on the face video from step 2.1, again using the dlib library, to obtain the positions of the 68 facial landmarks of the tester;
step 2.3: segment the video from step 2.1 again using the landmark positions from step 2.2: a rectangular forehead-region video is cut out between the landmarks at the centers of the left and right eyebrows and the upper face boundary identified by the dlib library;
step 2.4: from the landmarks of the left and right eyes, find the points representing the left eye corner, the right eye corner, the top of the eye socket, and the bottom of the eye socket, and cut out a rectangular eye-region video from these four points.
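Steps 2.3 and 2.4 can be sketched as follows, assuming dlib's standard 68-point indexing (eyebrows at points 17 to 26, eyes at points 36 to 47); in practice the landmark array would come from dlib's shape predictor, but here it is passed in directly:

```python
import numpy as np

def forehead_roi(landmarks, face_top):
    """Rectangle between the two eyebrow centers and the upper face boundary.
    landmarks: (68, 2) array of (x, y); face_top: y of the detected face box top."""
    left_brow = landmarks[17:22].mean(axis=0)   # points 17-21
    right_brow = landmarks[22:27].mean(axis=0)  # points 22-26
    x0, x1 = sorted([left_brow[0], right_brow[0]])
    y1 = min(left_brow[1], right_brow[1])       # just above the eyebrows
    return int(x0), int(face_top), int(x1), int(y1)

def eye_roi(landmarks):
    """Rectangle spanning both eye corners and the top/bottom of the eye sockets."""
    eyes = landmarks[36:48]                     # points 36-47 cover both eyes
    x0, y0 = eyes.min(axis=0)
    x1, y1 = eyes.max(axis=0)
    return int(x0), int(y0), int(x1), int(y1)
```

Each returned tuple is (x0, y0, x1, y1), which can be used directly to slice the video frames into the forehead-region and eye-region images.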
Step 3 specifically comprises the following steps:
step 3.1: build the encoder of the deep convolutional neural network for signal extraction and amplification: 2 convolutional layers, each followed by a ReLU activation, then 3 residual blocks; the physiological signal in the sleep video is extracted by a convolutional layer with stride 2, and the output is finally connected to two further residual blocks;
step 3.2: build the modulation (amplification) structure of the network: the difference between the physiological-signal features of two sleep-video frames is convolved, with a ReLU activation, and multiplied by the amplification factor alpha; the amplified features are then passed through another convolutional layer and residual block for a nonlinear transformation, yielding the amplified signal-difference features;
step 3.3: build the decoder of the network: the amplified signal-difference features are superimposed on the original sleep frame, and the amplified video is decoded and output through upsampling and two convolutional layers, so that the physiological signal in the sleep video is magnified.
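Stripped of the learned encoder and decoder that the patent trains, the amplification principle of steps 3.1 to 3.3 reduces to magnifying a frame-difference by alpha and adding it back. A numpy sketch under that simplification (the difference is taken directly in pixel space rather than in the learned feature space):

```python
import numpy as np

def magnify(reference, frame, alpha):
    """Eulerian-style linear magnification: amplify the difference between
    a frame and a reference frame by alpha and superimpose it on the frame.
    In the patent this difference is computed and reshaped by conv/residual
    layers (steps 3.2-3.3); here it is taken in raw pixel space."""
    diff = frame.astype(np.float64) - reference.astype(np.float64)
    out = frame + alpha * diff      # step 3.2: scale by alpha; step 3.3: superimpose
    return np.clip(out, 0, 255)    # keep the result in valid 8-bit intensity range

ref = np.full((2, 2), 100.0)
cur = np.full((2, 2), 101.0)        # a subtle +1 intensity change between frames
print(magnify(ref, cur, alpha=15)[0, 0])  # -> 116.0, the change magnified 15x
```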
Step 4 specifically comprises the following steps:
step 4.1: input the forehead-region video from step 2 into the network built in step 3 with amplification factor alpha = 15, extract and amplify the forehead heart-rate signal, and output the forehead-region video with the amplified heart-rate signal;
step 4.2: input the eye-region video from step 2 into the network built in step 3 with amplification factor alpha = 30, extract and amplify the eye-movement frequency, and output the eye-region video with the amplified eye movement.
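Once the amplified forehead video is available, the heart-rate pipeline of steps 5 to 7 (per-channel pixel averaging, FFT, spectral peak) can be sketched in numpy; the frame array, sampling rate, and frequency band below are illustrative:

```python
import numpy as np

def heart_rate_bpm(frames, fps):
    """frames: (T, H, W, 3) video of the amplified forehead region.
    Average each frame's pixels per channel (step 5), FFT the resulting
    time series (step 6), and read off the peak frequency (step 7)."""
    trace = frames.reshape(frames.shape[0], -1, 3).mean(axis=1).mean(axis=1)
    trace = trace - trace.mean()                  # remove the DC component
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    # restrict to a plausible heart-rate band, 0.7-3.3 Hz (42-200 bpm)
    band = (freqs >= 0.7) & (freqs <= 3.3)
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak

# synthetic 20 s clip at 30 fps whose brightness pulses at 1.2 Hz (72 bpm)
fps, t = 30, np.arange(600) / 30
frames = (120 + 10 * np.sin(2 * np.pi * 1.2 * t))[:, None, None, None] * np.ones((1, 4, 4, 3))
print(round(heart_rate_bpm(frames, fps)))  # -> 72
```

Step 8 and 9 apply the same transform-and-peak-pick idea to the stacked eye-region frames to recover the eye-movement frequency.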
the step 10 specifically includes:
step 10.1: building a neural network structure, wherein the neural network structure comprises 4 convolution layers, 3 maximum pooling layers, 1 full-connection layer and 1 classifier;
step 10.2: the method comprises the steps of preventing the calculated amount from being excessively large, extracting a key frame of a video at the current moment every 1s, respectively extracting 1 frame of images from cameras positioned on the body, left and right of a tester, forming three-channel image data, and inputting the three-channel image data into a neural network in the step 10.1;
step 10.3: the three-channel image data in the step 10.2 are respectively processed by a convolution layer with a convolution kernel of 10 multiplied by 10, a maximum pooling layer with a convolution kernel of 2 multiplied by 2, and a convolution layer with a convolution kernel of 10 multiplied by 10, so as to extract image characteristics;
step 10.4: inputting the extracted features into a full connection layer for six classification, wherein the classification results correspond to six sleeping positions: supine, prone, left recumbent, right recumbent and right recumbent.
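Assuming stride-1 "valid" convolutions and stride-2 pooling (the patent states neither strides nor padding, so these are assumptions), the feature-map sizes through the 4-conv / 3-pool stack of step 10 can be checked with a small helper:

```python
def conv_out(size, kernel, stride=1):
    """Output spatial size of a 'valid' (no-padding) convolution or pooling layer."""
    return (size - kernel) // stride + 1

def posture_net_shapes(size):
    """Spatial sizes through: conv10, pool2, conv10, pool2, conv10, pool2, conv10."""
    shapes = [size]
    for layer in ["c", "p", "c", "p", "c", "p", "c"]:
        size = conv_out(size, 10) if layer == "c" else conv_out(size, 2, stride=2)
        shapes.append(size)
    return shapes

print(posture_net_shapes(224))  # -> [224, 215, 107, 98, 49, 40, 20, 11]
```

The final 11 x 11 feature map (for a hypothetical 224 x 224 input) would then be flattened into the fully connected layer for the six-way classification.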
Step 15 specifically comprises the following steps:
step 15.1: a normal sleeping heart rate is 60 to 100 beats per minute and can drop to around 50 beats per minute in deep sleep; monitor the tester's heart rate, judge that the tester has entered the deep-sleep stage when the heart rate falls markedly and has left it when the heart rate rises again, and finally compute the proportion of deep sleep in the tester's night: the larger the proportion, the higher the sleep quality;
step 15.2: during the rapid-eye-movement (REM) stage of sleep the eyeballs rotate rapidly; monitor the tester's eye movement, and if the eye-movement frequency rises markedly while the heart rate from step 15.1 also rises, judge that the tester has entered REM sleep and record its duration; a sudden interruption of REM sleep is often a sign of an attack of a disease such as angina or asthma;
step 15.3: lying supine is considered a good sleeping posture, but it is unsuitable for people with respiratory diseases or frequent snoring, who should sleep on their side; monitor the tester's posture, and if a tester with a respiratory disease or snoring adopts a non-side posture, suggest a posture adjustment;
step 15.4: monitor the tester's turn-over count; an excessive count suggests that the tester may be deficient in calcium ions or under high mental stress, with poor sleep quality.
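The heuristics of steps 15.1 to 15.4 can be combined into a simple rule-based report; all thresholds other than the heart-rate ranges stated above are illustrative assumptions, and the function names are hypothetical:

```python
def sleep_report(deep_sleep_ratio, rem_minutes, posture_counts,
                 turnovers, has_respiratory_disease=False):
    """deep_sleep_ratio: fraction of the night with a markedly lowered heart rate;
    posture_counts: dict mapping posture label -> seconds observed."""
    advice = []
    # step 15.1: a larger deep-sleep proportion means higher sleep quality
    quality = "good" if deep_sleep_ratio >= 0.2 else "poor"  # illustrative cutoff
    # step 15.3: side sleeping is advised for respiratory disease / snoring
    side = sum(v for k, v in posture_counts.items() if k.startswith(("left", "right")))
    if has_respiratory_disease and side < sum(posture_counts.values()) / 2:
        advice.append("adopt a side sleeping posture")
    # step 15.4: too many turn-overs suggests calcium deficiency or stress
    if turnovers > 30:  # illustrative threshold
        advice.append("possible calcium deficiency or high mental stress")
    return {"quality": quality, "rem_minutes": rem_minutes, "advice": advice}

print(sleep_report(0.25, 90, {"supine": 20000, "left_straight": 5000},
                   turnovers=35, has_respiratory_disease=True))
```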
In summary, the non-contact sleep monitoring system based on deep learning and multi-parameter fusion first segments the acquired video into a forehead-region video, an eye-region video, and three-view body videos. A deep convolutional neural network for physiological-signal extraction and amplification is then built; by setting different amplification factors in the network, the heart-rate signal of the forehead region and the eye-movement frequency of the eye region are amplified, and a fast Fourier transform extracts the corresponding spectra, whose peak frequencies give the monitored heart rate and eye-movement frequency. For the three-view body videos, a deep-learning sleeping-posture monitoring network is built; the posture features extracted by the network are fed into a fully connected layer for six-way classification into the six postures of supine, prone, left-side straight, left-side curled, right-side straight, and right-side curled, and the number of turn-overs is counted from switches between these postures. Finally, sleep quality is evaluated comprehensively from the monitored physiological signals.
The invention provides testers with a highly comfortable, multi-parameter, highly automated sleep monitoring system; it monitors physiological parameters such as heart rate, respiratory rate, eye-movement frequency, sleeping posture, and turn-over count without contact, improves monitoring reliability, and plays a key role in the clinical diagnosis of sleep quality and in the clinical treatment and early intervention of patients, or potential patients, with sleep disorders.
Drawings
FIG. 1 is a deep convolutional neural network diagram of physiological signal extraction and amplification
FIG. 2 is a flow chart of heart rate monitoring
Fig. 3 is a flow chart of eye movement monitoring
FIG. 4 is a flow chart for monitoring sleeping position and turning-over times
FIG. 5 is a block diagram of a sleep posture monitoring neural network
Detailed Description
The following describes a non-contact sleep monitoring system based on deep learning and multi-parameter fusion in detail with reference to the accompanying drawings:
step 1: and building a sleep monitoring platform. Three cameras are respectively arranged at the upper, left and right positions of the body of the tester so as to acquire video images of the tester in the sleeping process;
step 2: performing image segmentation on the video image obtained by the camera above the body of the tester in the step 1 to obtain a forehead area video image and an eye area video image of the tester;
step 2.1: calling dlib library in python to segment out the area where the face is located and extract the face video image of the tester;
step 2.2: performing face key point detection on the face video image obtained in the step 2.1, and calling dlib library in python to perform face key point detection to obtain positions of 68 key points of the face of the tester;
step 2.3: the video image obtained in step 2.1 is segmented again using the face key point positions obtained in step 2.2. A rectangular forehead-region video image is cropped using the key points at the centers of the left and right eyebrows together with the upper face boundary identified by the dlib library;
step 2.4: from the labelled key points of the left and right eyes, the key points representing the left and right eye corners and the uppermost and lowermost orbit points are located, and a rectangular eye-region video image is cropped from these four key points.
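Steps 2.2-2.4 can be sketched as two pure geometry helpers operating on dlib's 68-point landmark layout (points 19 and 24 are the eyebrow centers, points 36-47 cover both eyes; in practice the landmark array would come from `dlib.shape_predictor` and the face top from `dlib.get_frontal_face_detector`). This is a minimal sketch, not the patent's exact cropping rule:

```python
import numpy as np

# Landmark indexing assumed to follow dlib's 68-point model:
# 19/24 = left/right eyebrow centers, 36-47 = both eyes.

def forehead_rect(landmarks, face_top):
    """Rectangle (x0, y0, x1, y1) between the eyebrow centers and the
    upper face boundary reported by the face detector (step 2.3)."""
    lm = np.asarray(landmarks)
    x0, x1 = sorted((int(lm[19, 0]), int(lm[24, 0])))
    y1 = int(min(lm[19, 1], lm[24, 1]))       # just above the eyebrows
    return x0, int(face_top), x1, y1

def eye_rect(landmarks):
    """Bounding rectangle of the eye landmarks: left/right corners and
    uppermost/lowermost orbit points of both eyes (step 2.4)."""
    eyes = np.asarray(landmarks)[36:48]
    (x0, y0), (x1, y1) = eyes.min(axis=0), eyes.max(axis=0)
    return int(x0), int(y0), int(x1), int(y1)
```

The returned rectangles would then index into each video frame to produce the forehead-region and eye-region image streams.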
Step 3: constructing a deep convolutional neural network for extracting and amplifying physiological signals, and extracting and amplifying tiny physiological signals in a video by using the deep convolutional neural network;
step 3.1: the encoder of the deep convolutional neural network for physiological signal extraction and amplification is built. It comprises 2 convolutional layers and 3 residual blocks, with a ReLU activation after each convolutional layer; physiological signals in the sleep video are extracted by a convolutional layer with stride 2, and the encoder ends with two residual blocks;
step 3.2: the modulation-amplification structure of the network is built. The difference between the physiological signals of two sleep-video frames is convolved, passed through a ReLU activation, and multiplied by the amplification factor α; the amplified features then undergo a nonlinear transformation through a convolutional layer and a residual block, giving the amplified physiological-signal difference features;
step 3.3: the decoder of the network is built. The amplified physiological-signal difference features are superposed on the initial sleep image, and the amplified video is decoded and output through upsampling and two convolutional layers, so that the tiny physiological signal in the sleep video is magnified.
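The encoder-manipulator-decoder of step 3 can be sketched in PyTorch as follows. This is an illustrative sketch in the spirit of learning-based video motion magnification; the channel count, kernel sizes and exact layer ordering are assumptions, since the patent fixes only the layer types, the stride of 2, and the multiplication by α:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Plain channel-preserving residual block (two 3x3 convolutions)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.body(x))

class MagnifierNet(nn.Module):
    """Sketch of the step-3 network: an encoder (2 conv layers + residual
    blocks, stride 2), a manipulator that multiplies the frame-to-frame
    feature difference by alpha, and a decoder (upsampling + 2 conv layers)."""
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            ResBlock(ch), ResBlock(ch), ResBlock(ch))
        self.manip = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.refine = ResBlock(ch)
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, frame_a, frame_b, alpha):
        fa, fb = self.encoder(frame_a), self.encoder(frame_b)
        diff = self.manip(fb - fa) * alpha   # amplify the feature difference
        out = self.refine(fa + diff)         # superpose onto the first frame
        return self.decoder(out)             # decode the amplified frame
```

Running consecutive frame pairs through `MagnifierNet` with α = 15 (forehead) or α = 30 (eyes) would produce the amplified videos used in steps 5 and 8.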
Step 4: respectively inputting the forehead region video image and the eye region video image obtained in the step 2 into the deep convolutional neural network built in the step 3, respectively extracting and amplifying heart rate signals of the forehead region and eye movement signals of the eye region, and outputting the forehead region video image after amplifying the heart rate signals and the eye region video image after amplifying the eye movement frequency;
step 4.1: inputting the forehead region video image obtained in the step 2 into the deep convolutional neural network built in the step 3, setting an amplification factor alpha=15, extracting and amplifying the heart rate of the forehead region, and outputting the forehead region video image after amplifying the heart rate signal;
step 4.2: inputting the video image of the eye area obtained in the step 2 into the deep convolutional neural network built in the step 3, setting an amplification factor alpha=30, extracting and amplifying the eye movement frequency of the eye area, and outputting the video image of the eye area after amplifying the eye movement frequency;
step 5: performing RGB three-channel separation on each frame of the forehead region video image obtained in the step 4 after amplifying the heart rate signals, averaging pixel points in R, G, B channels, and then performing time sequence stacking to obtain pulse wave signals;
step 6: obtaining a time sequence frequency spectrum of the human body pulse wave by fast Fourier transform on the pulse wave signal obtained in the step 5;
step 7: performing spectrum analysis on the time sequence spectrum obtained in the step 6, and selecting the frequency corresponding to the spectrum peak value as a heart rate monitoring result;
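Steps 5-7 amount to a standard remote-photoplethysmography pipeline: average the R, G and B channels of each amplified forehead frame, stack the means in time, and read the heart rate off the FFT peak. A minimal NumPy sketch follows; the 0.7-3 Hz (42-180 bpm) search band is an added assumption, since the patent simply selects the spectral peak:

```python
import numpy as np

def heart_rate_from_frames(frames, fps):
    """Steps 5-7: per-frame R/G/B means of the forehead ROI, stacked in
    time, FFT'd, and the peak frequency in a plausible heart-rate band
    converted to beats per minute."""
    frames = np.asarray(frames, dtype=float)   # shape (T, H, W, 3)
    pulse = frames.mean(axis=(1, 2))           # (T, 3) channel means
    pulse -= pulse.mean(axis=0)                # remove the DC component
    signal = pulse.mean(axis=1)                # fuse the three channels
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)     # assumed 42-180 bpm band
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak                         # beats per minute
```

With a 30 fps camera and a 10 s window the frequency resolution is 0.1 Hz, i.e. 6 bpm; longer windows sharpen the estimate.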
step 8: stacking the eye region video images obtained in the step 4 after the eye movement frequency is amplified according to a time sequence, and performing fast Fourier transform to obtain an eye region video image frequency spectrum;
step 9: extracting the frequency corresponding to the frequency spectrum peak value from the frequency spectrum of the eye area video image obtained in the step 8 to be used as a monitoring result of the eye movement frequency;
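Steps 8-9 apply the same spectral-peak idea to the amplified eye-region video, this time on the mean grayscale intensity of each frame. A sketch, with an assumed 0.1-5 Hz search band that the patent does not specify:

```python
import numpy as np

def eye_movement_freq(frames, fps, fmin=0.1, fmax=5.0):
    """Steps 8-9: stack the amplified eye-region frames in time, FFT the
    mean-intensity trace, and return the spectral-peak frequency in Hz."""
    trace = np.asarray(frames, dtype=float).mean(axis=(1, 2))  # (T,)
    trace -= trace.mean()                                      # remove DC
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(spectrum[band])]
```

The returned frequency feeds the REM-period logic of step 15.2.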
step 10: a sleeping posture monitoring neural network structure based on deep learning is built. If the network has not been trained, step 11 is executed; if it has already been trained, step 13 is executed;
step 10.1: building a neural network structure, wherein the neural network structure comprises 4 convolution layers, 3 maximum pooling layers, 1 full-connection layer and 1 classifier;
step 10.2: to keep the computational load manageable, one key frame is extracted from each video every 1 s; the frames from the cameras above, to the left of and to the right of the tester form three-channel image data, which is input into the neural network of step 10.1;
step 10.3: the three-channel image data of step 10.2 are processed in turn by a convolutional layer with a 10×10 kernel, a max-pooling layer with a 2×2 kernel, and a further convolutional layer with a 10×10 kernel to extract image features;
step 10.4: the extracted features are input into a fully connected layer for six-way classification, the classes corresponding to the six sleeping postures: supine, prone, left straight lying, left curled lying, right straight lying and right curled lying.
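The step-10 classifier can be sketched in PyTorch. The patent fixes the layer counts (4 convolutions, 3 max-pools, 1 fully connected layer, 1 six-way classifier) and the 10×10 and 2×2 kernels of step 10.3; the channel counts, the later kernel sizes and the 3×128×128 input resolution below are assumptions:

```python
import torch
import torch.nn as nn

class SleepPostureNet(nn.Module):
    """Sketch of the step-10 posture network: 4 conv layers, 3 max-pooling
    layers, one fully connected layer and a 6-way classifier."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 10), nn.ReLU(), nn.MaxPool2d(2),   # 10x10 kernel
            nn.Conv2d(16, 32, 10), nn.ReLU(), nn.MaxPool2d(2),  # 10x10 kernel
            nn.Conv2d(32, 64, 5), nn.ReLU(), nn.MaxPool2d(2),   # assumed size
            nn.Conv2d(64, 64, 3), nn.ReLU())                    # assumed size
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(n_classes))  # logits; softmax applied in the loss

    def forward(self, x):
        return self.classifier(self.features(x))
```

`nn.LazyLinear` infers the flattened feature size on the first forward pass, which keeps the sketch independent of the assumed input resolution.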
Step 11: the method comprises the steps that more than 1000 images of a tester in sleeping are collected through three cameras arranged on the human body, on the left and on the right in advance, the obtained images are marked with characteristics, the sleeping posture states of the tester are marked manually, and the six sleeping postures correspond to supine, prone, left-side straight lying, left-side crouching, right-side straight lying and right-side crouching;
step 12: the labelled image data from step 11 are fed to the neural network for training, divided into a training set and a validation set at a ratio of 8:2; training continues until the validation accuracy exceeds 95%, at which point training ends;
step 13: the three-view body video images of the tester from step 1 are input into the trained neural network, which outputs one of six classes corresponding to the six sleeping postures: supine, prone, left straight lying, left curled lying, right straight lying and right curled lying;
step 14: each switch between any two of the six sleeping postures of step 13 is recorded as one turn-over, except that switches between left straight lying and left curled lying, or between right straight lying and right curled lying, are not counted;
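The step-14 counting rule can be sketched directly: every posture change counts as a turn-over unless both postures lie on the same side. The posture names below follow the six classes of step 13:

```python
# Side membership for the six posture classes of step 13.
SIDE = {"left_straight": "left", "left_curled": "left",
        "right_straight": "right", "right_curled": "right"}

def count_turnovers(posture_seq):
    """Step 14: count posture switches, excluding straight<->curled
    changes on the same side, which are not turn-overs."""
    turns = 0
    for prev, cur in zip(posture_seq, posture_seq[1:]):
        if prev == cur:
            continue                               # no change at all
        if SIDE.get(prev) is not None and SIDE.get(prev) == SIDE.get(cur):
            continue                               # e.g. left_straight -> left_curled
        turns += 1
    return turns
```

Feeding the per-second classification stream of step 13 through this function yields the turn-over count used in step 15.4.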
step 15: the sleep quality of the tester is comprehensively evaluated by combining the heart rate obtained in step 7, the eye movement frequency obtained in step 9, the sleeping posture obtained in step 13 and the turn-over count obtained in step 14.
Step 15.1: the heart rate during normal sleep is 60-100 beats per minute and can fall to about 50 beats per minute in deep sleep. The tester's heart rate is monitored during sleep; a marked decrease is taken as entry into a deep sleep period, and a gradual increase as exit from it. Finally, the proportion of deep sleep within the tester's whole sleep period is counted: the larger this proportion, the higher the sleep quality;
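Step 15.1 can be sketched as a simple threshold on the heart-rate series. The 15% drop below baseline used here is an illustrative assumption; the patent only says the rate decreases "obviously" (toward roughly 50 bpm) in deep sleep:

```python
def deep_sleep_fraction(heart_rates, baseline=None, drop=0.15):
    """Step 15.1 sketch: flag epochs whose heart rate falls clearly below
    the subject's baseline as deep sleep and return their fraction."""
    if not heart_rates:
        return 0.0
    if baseline is None:
        # Crude baseline: mean heart rate over the whole night.
        baseline = sum(heart_rates) / len(heart_rates)
    threshold = baseline * (1.0 - drop)
    deep = sum(1 for hr in heart_rates if hr < threshold)
    return deep / len(heart_rates)
```

The returned fraction is the "proportion of the deep sleep period" that step 15.1 uses as a quality indicator.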
step 15.2: during the rapid eye movement (REM) phase of sleep the eyeballs rotate rapidly. The invention monitors the tester's eye movement; a marked increase in eye movement frequency, combined with the heart rate rise described in step 15.1, is taken as entry into a REM period, whose duration is counted. A sudden interruption of REM sleep is often a signal of an attack of a condition such as angina or asthma;
step 15.3: supine lying is generally considered a good sleeping posture, but it is unsuitable for people with respiratory disease or who snore frequently, who should sleep on their side. The invention monitors the tester's sleeping posture; if a tester with respiratory disease or a snoring habit adopts a non-side-lying posture, a posture adjustment suggestion is issued;
step 15.4: the invention monitors the tester's turn-over count; an excessive count suggests that the tester may lack calcium ions or be under high mental stress, and that sleep quality is poor.
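The step-15 fusion can be sketched as a rule set over the monitored parameters. Every numeric threshold below (20% deep-sleep share, 60 min of REM, 30 turn-overs) is an illustrative assumption; the patent states only the qualitative rules of steps 15.1-15.4:

```python
def sleep_report(deep_fraction, rem_minutes, posture_counts, turnovers,
                 has_respiratory_disease=False):
    """Step 15 sketch: fuse the monitored parameters into advisory notes."""
    notes = []
    notes.append("adequate deep sleep" if deep_fraction >= 0.2
                 else "low deep-sleep proportion")            # step 15.1
    if rem_minutes < 60:                                      # assumed threshold
        notes.append("short REM duration")                    # step 15.2
    if has_respiratory_disease and posture_counts.get("supine", 0) > 0:
        notes.append("suggest side-lying posture")            # step 15.3
    if turnovers > 30:                                        # assumed threshold
        notes.append("frequent turn-overs: possible calcium "
                     "deficiency or mental stress")           # step 15.4
    return notes
```

For example, a tester with 25% deep sleep, 90 min of REM, some supine epochs, 40 turn-overs and a respiratory condition would receive the supine warning and the turn-over warning alongside the positive deep-sleep note.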
Claims (6)
1. A non-contact sleep monitoring method based on deep learning and multi-parameter fusion, the method comprising the following steps:
step 1: building a sleep monitoring platform; three cameras are respectively arranged at the upper, left and right positions of the body of the tester so as to acquire video images of the tester in the sleeping process;
step 2: performing image segmentation on the video image obtained by the camera above the body of the tester in the step 1 to obtain a forehead area video image and an eye area video image of the tester;
step 3: constructing a deep convolutional neural network for extracting and amplifying physiological signals, and extracting and amplifying tiny physiological signals in a video by using the deep convolutional neural network;
step 4: respectively inputting the forehead region video image and the eye region video image obtained in the step 2 into the deep convolutional neural network built in the step 3, respectively extracting and amplifying heart rate signals of the forehead region and eye movement signals of the eye region, and outputting the forehead region video image after amplifying the heart rate signals and the eye region video image after amplifying the eye movement frequency;
step 5: performing RGB three-channel separation on each frame of the forehead region video image obtained in the step 4 after amplifying the heart rate signals, averaging pixel points in R, G, B channels, and then performing time sequence stacking to obtain pulse wave signals;
step 6: obtaining a time sequence frequency spectrum of the human body pulse wave by fast Fourier transform on the pulse wave signal obtained in the step 5;
step 7: performing spectrum analysis on the time sequence spectrum obtained in the step 6, and selecting the frequency corresponding to the spectrum peak value as a heart rate monitoring result;
step 8: stacking the eye region video images obtained in the step 4 after the eye movement frequency is amplified according to a time sequence, and performing fast Fourier transform to obtain an eye region video image frequency spectrum;
step 9: extracting the frequency corresponding to the frequency spectrum peak value from the frequency spectrum of the eye area video image obtained in the step 8 to be used as a monitoring result of the eye movement frequency;
step 10: building a sleeping posture monitoring neural network structure based on deep learning; if the network is not trained, executing step 11; if it is trained, executing step 13;
step 11: collecting in advance more than 1000 images of the tester during sleep via the three cameras above, to the left of and to the right of the body, and manually labelling each image with the tester's sleeping posture as one of six classes: supine, prone, left straight lying, left curled lying, right straight lying and right curled lying;
step 12: feeding the labelled image data from step 11 to the neural network for training, divided into a training set and a validation set at a ratio of 8:2, until the validation accuracy exceeds 95%, at which point training ends;
step 13: inputting the three-view body video images of the tester from step 1 into the trained neural network, which outputs one of six classes corresponding to the six sleeping postures: supine, prone, left straight lying, left curled lying, right straight lying and right curled lying;
step 14: recording each switch between any two of the six sleeping postures of step 13 as one turn-over, except that switches between left straight lying and left curled lying, or between right straight lying and right curled lying, are not counted;
step 15: comprehensively evaluating the sleep quality of the tester by combining the heart rate obtained in step 7, the eye movement frequency obtained in step 9, the sleeping posture obtained in step 13 and the turn-over count obtained in step 14.
2. The non-contact sleep monitoring method based on deep learning and multi-parameter fusion as set forth in claim 1, wherein the step 2 is specifically:
step 2.1: calling dlib library in python to segment out the area where the face is located and extract the face video image of the tester;
step 2.2: performing face key point detection on the face video image obtained in the step 2.1, and calling dlib library in python to perform face key point detection to obtain positions of 68 key points of the face of the tester;
step 2.3: performing image segmentation again on the video image obtained in the step 2.1 through the key point positions of the human face obtained in the step 2.2; dividing a forehead area video image of a rectangular area by marking key points of the centers of left and right eyebrow hairs of a human face and the upper boundary of the human face identified by a dlib library;
step 2.4: and finding out key points representing left eye corners, right eye corners, uppermost eyesockets and lowermost eyesockets by marking key points of left eyes and right eyes of a human face, and dividing an eye area video image of a rectangular area by the four key points.
3. The non-contact sleep monitoring method based on deep learning and multi-parameter fusion as set forth in claim 1, wherein the step 3 is specifically:
step 3.1: building the encoder of the deep convolutional neural network for physiological signal extraction and amplification, comprising 2 convolutional layers and 3 residual blocks with a ReLU activation after each convolutional layer; physiological signals in the sleep video are extracted by a convolutional layer with stride 2, and the encoder ends with two residual blocks;
step 3.2: building the modulation-amplification structure of the network; the difference between the physiological signals of two sleep-video frames is convolved, passed through a ReLU activation, and multiplied by the amplification factor α; the amplified features then undergo a nonlinear transformation through a convolutional layer and a residual block, giving the amplified physiological-signal difference features;
step 3.3: building the decoder of the network; the amplified physiological-signal difference features are superposed on the initial sleep image, and the amplified video is decoded and output through upsampling and two convolutional layers, so that the physiological signal in the sleep video is magnified.
4. The non-contact sleep monitoring method based on deep learning and multi-parameter fusion as set forth in claim 1, wherein the step 4 is specifically:
step 4.1: inputting the forehead region video image obtained in the step 2 into the deep convolutional neural network built in the step 3, setting an amplification factor alpha=15, extracting and amplifying the heart rate of the forehead region, and outputting the forehead region video image after amplifying the heart rate signal;
step 4.2: inputting the video image of the eye area obtained in the step 2 into the deep convolutional neural network built in the step 3, setting an amplification factor alpha=30, extracting and amplifying the eye movement frequency of the eye area, and outputting the video image of the eye area after amplifying the eye movement frequency.
5. The non-contact sleep monitoring method based on deep learning and multi-parameter fusion as set forth in claim 1, wherein the step 10 is specifically:
step 10.1: building a neural network structure, wherein the neural network structure comprises 4 convolution layers, 3 maximum pooling layers, 1 full-connection layer and 1 classifier;
step 10.2: to keep the computational load manageable, extracting one key frame from each video every 1 s; the frames from the cameras above, to the left of and to the right of the tester form three-channel image data, which is input into the neural network of step 10.1;
step 10.3: processing the three-channel image data of step 10.2 in turn by a convolutional layer with a 10×10 kernel, a max-pooling layer with a 2×2 kernel, and a further convolutional layer with a 10×10 kernel to extract image features;
step 10.4: inputting the extracted features into a fully connected layer for six-way classification, the classes corresponding to the six sleeping postures: supine, prone, left straight lying, left curled lying, right straight lying and right curled lying.
6. The non-contact sleep monitoring method based on deep learning and multi-parameter fusion as set forth in claim 1, wherein the step 15 is specifically:
step 15.1: the heart rate during normal sleep is 60-100 beats per minute and can fall to about 50 beats per minute in deep sleep; monitoring the tester's heart rate during sleep, taking a marked decrease as entry into a deep sleep period and a gradual increase as exit from it; finally, counting the proportion of deep sleep within the tester's whole sleep period, where a larger proportion indicates higher sleep quality;
step 15.2: during the rapid eye movement (REM) phase of sleep the eyeballs rotate rapidly; monitoring the tester's eye movement, and taking a marked increase in eye movement frequency, combined with the heart rate rise described in step 15.1, as entry into a REM period, whose duration is counted; a sudden interruption of REM sleep is often a signal of an attack of a condition such as angina or asthma;
step 15.3: supine lying is generally considered a good sleeping posture, but it is unsuitable for people with respiratory disease or who snore frequently, who should sleep on their side; monitoring the tester's sleeping posture, and issuing a posture adjustment suggestion if a tester with respiratory disease or a snoring habit adopts a non-side-lying posture;
step 15.4: monitoring the tester's turn-over count; an excessive count suggests that the tester may lack calcium ions or be under high mental stress, and that sleep quality is poor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210561402.3A CN114998229B (en) | 2022-05-23 | 2022-05-23 | Non-contact sleep monitoring method based on deep learning and multi-parameter fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114998229A CN114998229A (en) | 2022-09-02 |
CN114998229B true CN114998229B (en) | 2024-04-12 |
Family
ID=83027622
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210561402.3A Active CN114998229B (en) | 2022-05-23 | 2022-05-23 | Non-contact sleep monitoring method based on deep learning and multi-parameter fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114998229B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116563887B (en) * | 2023-04-21 | 2024-03-12 | 华北理工大学 | Sleeping posture monitoring method based on lightweight convolutional neural network |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004187961A (en) * | 2002-12-12 | 2004-07-08 | Toshiba Corp | Sleeping condition detector and sleeping condition management system |
CN105997004A (en) * | 2016-06-17 | 2016-10-12 | 美的集团股份有限公司 | Sleep reminding method and sleep monitoring device |
US9993166B1 (en) * | 2013-06-21 | 2018-06-12 | Fitbit, Inc. | Monitoring device using radar and measuring motion with a non-contact device |
CN108836269A (en) * | 2018-05-10 | 2018-11-20 | 电子科技大学 | It is a kind of to merge the dynamic sleep mode automatically of heart rate breathing body method by stages |
CN109431681A (en) * | 2018-09-25 | 2019-03-08 | 吉林大学 | A kind of intelligent eyeshade and its detection method detecting sleep quality |
CN110957030A (en) * | 2019-12-04 | 2020-04-03 | 中国人民解放军第二军医大学 | Sleep quality monitoring and interaction system |
CN111248868A (en) * | 2020-02-20 | 2020-06-09 | 长沙湖湘医疗器械有限公司 | Quick eye movement sleep analysis method, system and equipment |
CN112451834A (en) * | 2020-11-24 | 2021-03-09 | 珠海格力电器股份有限公司 | Sleep quality management method, device, system and storage medium |
CN112806975A (en) * | 2021-02-01 | 2021-05-18 | 深圳益卡思科技发展有限公司 | Sleep monitoring device, method and medium based on millimeter wave radar |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10374863B2 (en) * | 2012-12-05 | 2019-08-06 | Origin Wireless, Inc. | Apparatus, systems and methods for event recognition based on a wireless signal |
US11439344B2 (en) * | 2015-07-17 | 2022-09-13 | Origin Wireless, Inc. | Method, apparatus, and system for wireless sleep monitoring |
- 2022-05-23 CN CN202210561402.3A patent/CN114998229B/en active Active
Non-Patent Citations (3)
Title |
---|
Deep learning for automated sleep staging using instantaneous heart rate; Niranjan Sridhar et al.; npj Digital Medicine; 2020-08-20; full text *
A review of physiological parameter detection based on IPPG technology; Zhang Yu, Liu Baozhen, Shan Congmiao, Mou Kaiyu; Medical and Health Equipment; 2020-02-15 (02); full text *
Research on sleep staging based on combined heart rate and respiration features; Feng Jingda, Jiao Xuejun, Li Qijie, Guo Yamei, Yang Hanjun, Chu Hongzuo; Space Medicine and Medical Engineering; 2020-04-15 (02); full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104834946B (en) | A kind of contactless sleep monitor method and system | |
Pouyan et al. | A pressure map dataset for posture and subject analytics | |
JP5859979B2 (en) | Health indicators based on multivariate residuals for human health monitoring | |
US9443289B2 (en) | Compensating for motion induced artifacts in a physiological signal extracted from multiple videos | |
US20140275829A1 (en) | Sleep stage annotation device | |
US20190343457A1 (en) | Pain assessment method and apparatus for patients unable to self-report pain | |
US9436984B2 (en) | Compensating for motion induced artifacts in a physiological signal extracted from a single video | |
Bousefsaf et al. | Remote assessment of the heart rate variability to detect mental stress | |
CN113347916A (en) | System and method for multivariate stroke detection | |
CN111887858B (en) | Ballistocardiogram signal heart rate estimation method based on cross-modal mapping | |
Waltisberg et al. | Detecting disordered breathing and limb movement using in-bed force sensors | |
CN114998229B (en) | Non-contact sleep monitoring method based on deep learning and multi-parameter fusion | |
Bennett et al. | The detection of breathing behavior using Eulerian-enhanced thermal video | |
CN111544001A (en) | Non-contact apnea detection device and method | |
CN109044275B (en) | Non-invasive sensing sleep quality analysis system and method based on fuzzy inference system | |
JP2023521573A (en) | Systems and methods for mapping muscle activation | |
Alamudun et al. | Removal of subject-dependent and activity-dependent variation in physiological measures of stress | |
CN116807405A (en) | Sleep state and sleep disease detection system based on human body pressure distribution image | |
Kau et al. | Pressure-sensor-based sleep status and quality evaluation system | |
Adami et al. | A method for classification of movements in bed | |
Suriani et al. | Non-contact facial based vital sign estimation using convolutional neural network approach | |
CN107280673A (en) | A kind of infrared imaging breath signal detection method based on key-frame extraction technique | |
KR102340670B1 (en) | Deep learning-based psychophysiological test system and method | |
Pediaditis et al. | Contactless respiratory rate estimation from video in a real-life clinical environment using eulerian magnification and 3D CNNs | |
CN113040734A (en) | Non-contact blood pressure estimation method based on signal screening |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||