
Sleep monitoring method and system

Info

Publication number: WO2016193030A1
Authority: WO
Grant status: Application
Application number: PCT/EP2016/061525
Prior art keywords: state, sleep, subject, movement, eye
Other languages: French (fr)
Inventors: Petronella Hendrika Zwartkruis-Pelgrim, Roy Joan Eli Marie Raymann, Vincent Jeanne
Original Assignee: Koninklijke Philips N.V.

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Detecting, measuring or recording for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1126 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
    • A61B5/1128 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique using image analysis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Detecting, measuring or recording for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4806 Sleep evaluation
    • A61B5/4812 Detecting sleep stages or cycles
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Detecting, measuring or recording for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4806 Sleep evaluation
    • A61B5/4815 Sleep quality

Abstract

A method and system are provided for determining a sleep state of a subject based on sensor data obtained from a microphone (407) and a camera (406) which monitor the subject and from a body movement sensor (408) which senses body movement of the subject. Audio data, video data and body movement data are analyzed to determine the sleep state of the subject to be (a) an awake-vocalization state based on the detection of vocalization, (b) an awake-non-vocalization state based on the detection of eye openness, (c) a REM-sleep state based on the detection of eye movement in closed-eye state, or (d) a deep-sleep state or a light-sleep state based on the detection of respiration. The provided method and system offer reliable and accurate determination of sleep states of a subject in an unobtrusive manner.

Description

SLEEP MONITORING METHOD AND SYSTEM

FIELD OF THE INVENTION

The invention relates to a method and a system for determining sleep states of a subject. The invention further relates to a computer program product comprising instructions for causing a processor system to perform the method.

BACKGROUND OF THE INVENTION

It is known that both the quantity and quality of sleep are of major importance to the general health and overall functioning of a person. As such, it may be desirable to monitor sleep states of a person. For example, in case of a baby, the determining of sleep states of the baby may be of interest for evaluating the impact of sleep on the maturation of the central nervous system or on the future cognitive development of the baby.

Polysomnography (PSG) is a known method which provides information on the sleeping behavior of a subject. PSG is a recording of the biophysiological changes that occur during sleep. It is usually performed at night, when most people sleep. PSG monitors a number of body functions including brain activity (EEG), eye movements (EOG), muscle activity or skeletal muscle activation (EMG) and heart rhythm (ECG) during sleep. Optionally, breathing functions like respiratory airflow and respiratory effort indicators may be used as well. However, because several sensors are attached to the subject, PSG is an obtrusive method and it may be difficult to conduct PSG for prolonged periods of time.

Alternative, less obtrusive, approaches have been proposed. An example of a known monitoring device is given by US patent 6,280,392 B1. The device is described to include a sensor sheet placed on a bed. The sensor sheet has plural pressure sensitive cells disposed at equal intervals therein, and is placed underneath a mattress on the bed floor. A control unit determines an infant's breathing and sleeping posture from the digital signals obtained from the sensor sheet.

In another example, image data correlations have been used to determine a sleep state of a subject. US20070156060 uses correlation between image content changes to determine sleeping state. A frame comparator compares pixel values between a pair of images obtained one second from each other. When the difference between image gradients at corresponding locations in images obtained at a pair of time points exceeds a threshold, motion is detected. This results in a map of image locations where motion has been detected for the pair of time points. A processor computes correlations between maps for different pairs of time points. The correlations are used to detect whether there is a temporally repetitive pattern. The detection results are used as an indication of sleep state.

In a further example, US 2014/0046184 describes a system for providing contact-less sleep disorder diagnosis, including a sound input device and a movement detector that receive sound and movement data originating from a patient in a sleeping environment, respectively. The system further includes a computer-implemented device that receives and stores the sound and movement data and determines a sleep disorder diagnosis based on the received data.

Even though such techniques can provide some information on a sleep state of a subject, it remains a problem to enhance the reliability and accuracy of such unobtrusive determination of sleep states of a subject. In particular these techniques may not be able to reliably and accurately distinguish between light sleep, deep sleep and REM sleep.

SUMMARY OF THE INVENTION

It would be advantageous to have a method or a system for providing a more reliable and accurate determination of sleep states of a subject in an unobtrusive manner.

To better address this concern, a first aspect of the invention provides a method for determining a sleep state of a subject based on sensor data obtained from a microphone and a camera which monitor the subject and from a body movement sensor which senses body movement of the subject, the method comprising:

analyzing audio data of the microphone to detect vocalization of the subject;

analyzing video data of the camera to detect eye openness and eye movement in closed-eye state of the subject;

analyzing body movement data of the body movement sensor to detect respiration of the subject;

classifying the sleep state of the subject to be one of:

an awake-vocalization state based on the detection of vocalization;

an awake-non-vocalization state based on the detection of eye openness;

a REM-sleep state based on the detection of eye movement in closed-eye state;

a deep-sleep state or a light-sleep state based on the detection of respiration.

The above measures involve analyzing audio data of the microphone to detect vocalization of the subject. The audio data may represent an audio recording of the subject and may thus contain acoustic events with a detectable pattern and intensity, such as sounds, e.g. ambient noise, verbal vocalizations, e.g. talking, calling or screaming, or non-verbal vocalizations, e.g. snoring, coughing or crying. The microphone may be a standalone microphone, a microphone integrated in another apparatus such as a mobile phone, etc.

The above measures further involve analyzing video data of the camera to detect eye openness and eye movement in closed-eye state of the subject. The video data may represent data obtained from a visual recording of the subject. The camera may refer to any suitable type of camera, e.g. a monochrome camera configured for visible and/or infrared light imaging. The camera may be a standalone camera, a camera integrated in another apparatus such as a mobile phone, etc. The camera may be directed at the subject.

The above measures further involve analyzing body movement data of the body movement sensor to detect respiration of the subject. The body movement data may represent a presence or absence of subtle or gross body movement of the subject. Examples of the body movement sensor may include load cells or piezo-electric elements, or a camera, which can sense fine and gross movements of the subject. It is noted that body movement of the subject may be sensed unobtrusively by suitably positioning load cells or piezo-electric elements, for example, by mounting to the bed of the subject or in a mattress, etc.

The above measures further involve determining the sleep state of the subject to be: (a) an awake-vocalization state based on the detection of vocalization, (b) an awake-non-vocalization state based on the detection of eye openness, (c) a REM-sleep state based on the detection of eye movement in closed-eye state, or (d) a deep-sleep state or a light-sleep state based on the detection of respiration. It is noted that the eye movement may be an eye rolling, a random movement, a movement with a specific pattern, etc. It is further noted that the term "based on a detection" may mean "in response to a detection". As such, the determining may be performed in direct response to, e.g. upon occurrence of, a detection. For example, the awake-vocalization sleep state may be determined immediately in response to a vocalization having been detected. However, the detection of vocalization, eye openness, eye movement in closed-eye state or respiration may equally be used indirectly in determining the sleep state. For example, said detections may be post-filtered. An example is that the audio data may be filtered before analysis, and only vocalizations with an intensity above a predetermined threshold may be considered important for the determination of the sleep state. It is noted that here, and also in general in the literature, awake states may be considered as a type of sleep state when classifying sleep states. It is further noted that other terms for describing the sleep states of a subject may be used. For example, awake states may also be referred to as non-sleeping resting states.

In accordance with the above measures, a method has been provided which makes use of a multimodal sensor set-up to determine sleep stages of a subject by analyzing signals from the sensors, i.e. a microphone, a video camera and a body movement sensor.

The inventors have recognized that, using the proposed multimodal sensor setup, physiologically relevant sleep stages of the subject can be determined accurately and reliably in an unobtrusive manner. Based on this recognition, the detection and analysis of vocalization, eye openness, eye movement in closed-eye state and respiration of the subject provides sufficient information for the accurate and reliable determination of the sleep state of the subject. These modalities may be detected unobtrusively, namely using a microphone and a camera which do not involve direct body contact and a body movement sensor which may likewise not need direct body contact, e.g. by being integrated into a mattress.

Optionally, the sleep state of the subject is determined according to the following conditions to be: (a) the awake-vocalization state based on the detection of vocalization, (b) or else, the awake-non-vocalization state based on the detection of eye openness, (c) or else, the REM-sleep state based on the detection of eye movement in closed-eye state, (d) or else, the deep-sleep state or the light-sleep state based on the detection of respiration. It is an insight of the inventors that this particular sequence may make the determining of the sleep stage of the subject more reliable. Namely, in accordance with this sequence, the determination is first based on the most reliable detections of bio-signals and then on less reliable detections. For example, vocalization may be detected with high reliability while eye movements may be detected with less reliability than vocalization. Detection of vocalization or eye openness, for example, may rule out a REM sleep state. As such, the proposed sequence may make the determining more reliable.
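The conditional sequence above amounts to a short decision cascade. The sketch below illustrates it in Python; the function name and the respiration threshold are illustrative assumptions, not taken from the patent:

```python
def classify_sleep_state(vocalization, eyes_open, eye_movement_closed,
                         respiration_rate, deep_sleep_threshold=20.0):
    """Decision cascade: the most reliable detections are checked first.

    All arguments are assumed to be pre-computed detections; the
    respiration threshold (breaths per minute) is an arbitrary
    illustrative value, not a figure from the patent.
    """
    if vocalization:
        return "awake-vocalization"
    if eyes_open:
        return "awake-non-vocalization"
    if eye_movement_closed:
        return "REM-sleep"
    # Slow, regular respiration is taken here as a sign of deep sleep.
    if respiration_rate < deep_sleep_threshold:
        return "deep-sleep"
    return "light-sleep"
```

Because each detection rules out the states below it, a false negative late in the cascade cannot override a reliable positive earlier in it.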

Optionally, the method further comprises analyzing the video data to detect mouth movement of the subject, and the determining the sleep state of the subject to be the awake-vocalization state is further based on the detection of the mouth movement. The analyzing the video data to detect mouth movement of the subject may comprise using a face detection technique to detect a face area of the subject and estimating mouth movement using a motion estimation technique. When the determination of the sleep state is based on both the detection of vocalization and the detection of mouth movement, it may be determined whether or not a detected vocalization is related to the subject, as a vocalization of a subject is typically accompanied by mouth movement of the subject. Furthermore, when the detection of vocalization is not sufficiently accurate or reliable because of the presence of, for example, ambient noise, it may be more accurate to determine the sleep state based on a correlation between the detection of mouth movement and vocalization. As such, the determining the sleep state of the subject to be the awake-vocalization state may be more reliable and accurate when the determining is based on both the detection of vocalization and mouth movement.
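One way to correlate mouth movement with vocalization, as suggested above, is a plain Pearson correlation between a mouth-motion signal and the audio intensity envelope; the helper names and the 0.5 threshold below are illustrative assumptions, not details from the patent:

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length signals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def vocalization_is_from_subject(mouth_motion, audio_envelope, threshold=0.5):
    """Attribute a detected vocalization to the subject only when the audio
    envelope co-varies with the mouth-motion signal over the same window."""
    return pearson(mouth_motion, audio_envelope) > threshold
```

A vocalization whose envelope does not track the subject's mouth motion (e.g. ambient noise, or another person speaking) would then be discarded before state classification.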

Optionally, the method further comprises analyzing the body movement data to detect gross body movements of the subject, and the determining the sleep state of the subject to be the awake-non-vocalization state is further based on the detection of the gross body movements. Examples of gross body movements of the subject may include whole body movement or limb movements. In case of, for example, darkness or a particular positioning of the face of the subject, it may be more reliable to account for the gross body movements in addition to eye openness. Namely, for example, the presence of gross body movements during a prolonged period of time may indicate an awake state and the absence of gross body movements may indicate sleep. As such, the determining the sleep state of the subject to be the awake-non-vocalization state may be more reliable and accurate when the determining is based on both the detection of eye openness and gross body movements.

Optionally, the method further comprises analyzing the body movement data to detect heart rate, and the determining the sleep state of the subject to be the deep-sleep state or the light-sleep state is further based on the detection of the heart rate. In general, a decline in respiration rate may indicate deep sleep. In case of, for example, irregular respiration of the subject, it may be more accurate and reliable to account for the heart beat as well. Namely, for example, detection of a normal heart rate may indicate an awake state and detection of a low heart rate may indicate deep sleep. As such, the determining the sleep state of the subject to be the deep-sleep state or the light-sleep state may be more reliable and accurate when the determining is based on both the detected respiration and heart rate.

Optionally, the analyzing the audio data to detect the vocalization of the subject comprises detecting harmonic components characteristic of human voice, and determining an intensity of the harmonic components. For example, if the intensity of the harmonic components is above a predetermined threshold, one may conclude that the subject is awake. This may advantageously allow to distinguish between, for example, ambient noise, verbal vocalizations, non-verbal vocalizations, etc.

Optionally, the analyzing the video data to detect the eye openness of the subject comprises using a face detection technique to detect a face area of the subject, and applying an eye open/eye closed detection algorithm to the detected face area. This may advantageously allow to accurately detect eye openness of the subject by focusing on the face area of the subject. False positives in other, non-face, areas of the subject may be avoided.
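The harmonic-intensity check on the audio data might be sketched as follows, using a naive DFT restricted to a band typical of the human voice fundamental; the band edges and the energy threshold are illustrative assumptions, and a real detector would track harmonics rather than a single band:

```python
import math

def vocalization_detected(samples, sample_rate_hz,
                          band=(85.0, 255.0), threshold=1.0):
    """Flag a vocalization when spectral energy inside a band typical of
    the human voice fundamental exceeds a threshold.

    A deliberately naive O(n^2) DFT sketch for short windows; band edges
    (Hz) and the energy threshold are illustrative only."""
    n = len(samples)
    energy = 0.0
    for k in range(1, n // 2):
        freq = k * sample_rate_hz / n
        if band[0] <= freq <= band[1]:
            re = sum(s * math.cos(2 * math.pi * k * t / n)
                     for t, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * t / n)
                     for t, s in enumerate(samples))
            energy += (re * re + im * im) / n
    return energy > threshold
```

A tone inside the voice band trips the detector, while low-frequency rumble outside the band does not.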

Optionally, the analyzing the video data to detect the eye movement in the closed-eye state of the subject comprises estimating eye motion using a first motion estimation technique and estimating face motion of the subject using a second motion estimation technique and, in estimating the eye motion, compensating for an effect of the face motion. Compensating for an effect of face motion may advantageously allow to distinguish between independent motion and dependent eye movements caused by an occurrence of a face motion. It is noted that the first motion estimation technique and the second motion estimation technique may be selected to be the same or different techniques.
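The compensation step might look as follows, taking the mean of the face-area motion vectors as the global motion estimate and subtracting it from each raw eye motion vector; this is a minimal sketch under those assumptions, not the patented implementation:

```python
def compensate_eye_motion(eye_vectors, face_vectors):
    """Subtract the global (face) motion estimate from each raw eye
    motion vector so that only independent eye movement remains.

    Vectors are (dx, dy) tuples; the global estimate is simply the mean
    of the face-area motion vectors. Names are illustrative."""
    n = len(face_vectors)
    gx = sum(v[0] for v in face_vectors) / n
    gy = sum(v[1] for v in face_vectors) / n
    return [(ex - gx, ey - gy) for ex, ey in eye_vectors]
```

An eye vector equal to the global face motion is thus reduced to zero, so only motion independent of the head is reported as eye movement.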

Optionally, the body movement sensor is embodied by the camera, and the video data of the camera also represents the body movement data. This may be advantageous in that the body movement data may be obtained remotely without a need to precisely position a sensor, e.g. in contact with a suitable area of a bed or mattress of the subject. As such, a single sensor, namely the video camera, may be used to obtain both the video recording and the body movement data. Experiments have shown that body movements, including fine movements as caused by respiration and the heart beating, may be reliably detected from video, for example, by monitoring variations in skin color.

Optionally, the body movement sensor comprises or is constituted by a piezo-electric sensor. This may advantageously allow to accurately detect different features of a movement of the subject, e.g. respiration, heart rate or gross limb movements. It is noted that the piezo-electric sensor may be mounted to a bed of the subject or may be placed in a mattress, or otherwise suitably positioned to detect the subject's body movement.

Optionally, the determining the sleep state of the subject comprises determining a data vector comprising as elements valuations of at least the detected vocalization, eye openness, eye movement in closed-eye state and respiration of the subject. The determining further comprises calculating a matching value between the data vector and each of a plurality of classifying vectors, each of the plurality of the classifying vectors being associated with a different one of the sleep states, thereby obtaining multiple matching values. The determining further comprises selecting a sleep state associated with a largest matching value of the multiple matching values as the determined sleep state.

Optionally, the determining the sleep state of the subject further comprises determining whether the determined sleep state of the subject matches with a pre-selected sleep state and, if so, generating an indicative signal. The pre-selected sleep state may be, for example, REM sleep and the signal may be generated when a REM sleep state is determined. The indicative signal may be a non-auditory alarm signal so as to warn people not to disturb the subject.
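The vector-matching step could be sketched as follows, with a dot product standing in for the unspecified "matching value" of the text; the state labels in the usage below are from the document, everything else is an illustrative assumption:

```python
def match_sleep_state(data_vector, classifying_vectors):
    """Pick the sleep state whose classifying vector best matches the
    data vector.

    `classifying_vectors` maps each state name to a reference vector of
    the same length as `data_vector`; a dot product stands in for the
    unspecified matching value, and the largest score wins."""
    best_state, best_score = None, float("-inf")
    for state, ref in classifying_vectors.items():
        score = sum(a * b for a, b in zip(data_vector, ref))
        if score > best_score:
            best_state, best_score = state, score
    return best_state
```

In practice the classifying vectors could be hand-designed or learned from annotated recordings; either way the selection step stays the same argmax over matching values.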

Optionally, the method further comprises outputting data representing the determined sleep state of the subject. The output may be for example to a display, storage or a mobile phone. This may allow, e.g., warning other people not to disturb the subject, or the logging of data for further analysis of the sleep states of the subject.

In a further aspect of the invention, a computer program product is provided comprising a computer readable medium having computer readable code embodied therein, the computer readable code being configured such that, on execution by a suitable computer or processor, the computer or processor is caused to perform the method.

In a further aspect of the invention, a system is provided for determining a sleep state of a subject based on sensor data obtained from a microphone and a camera which monitor the subject and from a body movement sensor which senses body movement of the subject, the system comprising:

an audio analyzer for analyzing audio data of the microphone to detect vocalization of the subject;

a video analyzer for analyzing video data of the camera to detect eye openness, and eye movement in closed-eye state, of the subject;

a sensor data analyzer for analyzing body movement data of the body movement sensor to detect respiration of the subject;

- a classifier for classifying the sleep state of the subject to be one of:

(a) an awake-vocalization state based on the detection of vocalization;

(b) an awake-non-vocalization state based on the detection of eye openness;

(c) a REM-sleep state based on the detection of eye movement in closed-eye state;

(d) a deep-sleep state or a light-sleep state based on the detection of respiration.

It will be appreciated by those skilled in the art that two or more of the above-mentioned embodiments, implementations, and/or aspects of the invention may be combined in any way deemed useful. Modifications and variations of the system and/or the computer program product, which correspond to the described modifications and variations of the method, can be carried out by a person skilled in the art on the basis of the present description.

BRIEF DESCRIPTION OF THE DRAWINGS

These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter. In the drawings,

Fig. 1 shows a method for determining a sleep state of a subject based on sensor data;

Fig. 2 shows a flow chart of the sleep state of a subject being determined in a conditional manner;

Fig. 3A shows an image illustrating a result of a face detection technique;

Fig. 3B shows an image illustrating a result of an eye movement detection technique in a closed-eye state of a subject when no eye movement is detected;

Fig. 3C shows an image illustrating a result of an analyzing of the video data to detect an eye movement in the closed-eye state of a subject when eye movement is detected;

Fig. 4 shows a system for determining a sleep state of a subject based on sensor data; and

Fig. 5 shows a computer program product comprising instructions for causing a processor system to perform the method.

DETAILED DESCRIPTION OF THE EMBODIMENTS

Fig. 1 shows a method 100 for determining a sleep state of a subject based on sensor data. The method 100 comprises analyzing 110 audio data of the microphone to detect vocalization of the subject. The method 100 may comprise analyzing 120 video data of the camera to detect eye openness and eye movement in closed-eye state of the subject. The method 100 further comprises analyzing 130 body movement data of the body movement sensor to detect respiration of the subject. The method 100 further comprises determining 140 the sleep state of the subject to be: (a) an awake-vocalization state based on the detection of vocalization, (b) an awake-non-vocalization state based on the detection of eye openness, (c) a REM-sleep state based on the detection of eye movement in closed-eye state, or (d) a deep-sleep state or a light-sleep state based on the detection of respiration.

In performing the method 100, the audio data may be received from a microphone. Analog audio signals may be converted to digital audio signals. The audio signals may be analyzed using audio analysis techniques which are known, per se, in the field of audio signal processing (e.g. Ntalampiras et al., Acoustic detection of human activities in natural environments, J. Audio Eng. Soc., Vol. 60, No. 9, 2012). The analyzing 110 the audio data to detect the vocalization of the subject may, for example, comprise detecting harmonic components characteristic of human voice, and determining an intensity of the harmonic components. The audio data may, for example, contain acoustic events with a detectable pattern and intensity, such as sounds, e.g. ambient noise, verbal vocalizations, e.g. talking, calling or screaming, or non-verbal vocalizations, e.g. snoring, coughing or crying.

The analyzing 120 the video data to detect eye openness may comprise using a face detection algorithm to detect a face area of the subject, and applying an eye open/eye closed detection algorithm to detect eye openness. For example, a face detector described by Viola & Jones (Rapid object detection using a boosted cascade of simple features. Proceedings of the 2001 IEEE Computer Society Conference, Vol. 1, pp. 1-511) may be used to segment the face area from the rest of the body of the subject, and then a pupil detection algorithm as described by Morimoto et al. (Pupil detection and tracking using multiple light sources, Image and Vision Computing, 18(4), 331-335, 2000) and Ebisawa (Improved video-based eye-gaze detection method, IEEE Transactions on Instrumentation and Measurement, 47(4), 948-955, 1998) may be applied to the detected face area. In particular, the subject may be illuminated by an infrared light source and the camera may be sensitive to infrared light. In a further example, the pupil detection algorithm may be replaced by an eye open/eye closed detector trained using a similar framework as proposed by Viola & Jones (2001). It is noted that the face location of the subject may be updated every time motion is detected.

The method 100 may further comprise analyzing the video data to detect mouth movement of the subject, and the determining 140 the sleep state of the subject to be the awake-vocalization state may be further based on the detection of the mouth movement. Detecting mouth movement may, for example, comprise using a face detection algorithm to detect a face area of the subject, and applying a mouth movement detection algorithm to detect the mouth movement. For example, the same segmentation technique of Viola & Jones (2001) may be used to detect the face area of the subject. Mouth movement detection may also be trained using a similar framework as proposed by Viola & Jones.

The analyzing 120 video data of the camera to detect eye movement in closed-eye state may, for example, comprise estimating eye motion using a first motion estimation algorithm, estimating face motion of the subject using a second motion estimation algorithm, and, in estimating the eye motion, compensating for the face motion. In an example, in estimating the eye motion, an effect of the motion of the face may be compensated by (a) applying global motion estimation to the face area and (b) subtracting this global motion estimate from a detected eye motion. Known motion estimation techniques such as the motion estimation algorithms described in De Haan et al. (True-motion estimation with 3-D recursive search block matching. IEEE Transactions on Circuits and Systems for Video Technology, 3(5), 368-379, 1993) and Horn & Schunck (Determining optical flow. Artificial Intelligence, 17(1), 185-203, 1981) may be used. For example, as described in Horn & Schunck, optical flow may be computed from a sequence of images, and based on the computed optical flow the eye motion and/or the face motion may be estimated.

The analyzing 130 body movement data may be performed, for example, by analyzing variations of the pressure of the subject on a mattress, which may indicate changes in a sleep state of the subject. For example, a decline in respiration rate may indicate deep sleep. The pressure may be measured using one or more load cells. The load cells may, for example, respond to changes in amplitude and/or duration of one or more of the following signals: load or pressure associated with fine or gross body movements such as whole body movements, respiration, or ballistic effects of the beating heart.
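As a minimal illustration of deriving a respiration rate from such a load-cell pressure trace, the sketch below simply counts local pressure maxima; a practical implementation would add smoothing and artifact rejection, and the function name is an assumption:

```python
def respiration_rate(pressure, sample_rate_hz):
    """Estimate breaths per minute from a load-cell pressure trace by
    counting local maxima.

    A minimal sketch: no smoothing, baseline removal or artifact
    rejection, so it assumes a clean periodic trace."""
    peaks = sum(
        1
        for i in range(1, len(pressure) - 1)
        if pressure[i - 1] < pressure[i] >= pressure[i + 1]
    )
    duration_min = len(pressure) / sample_rate_hz / 60.0
    return peaks / duration_min if duration_min else 0.0
```

For instance, a clean 0.25 Hz pressure oscillation sampled at 10 Hz yields 15 breaths per minute.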

It is also noted that the method 100 may further comprise analyzing the body movement data to detect gross body movements of the subject, and the determining 140 the sleep state of the subject to be the awake-non-vocalization state may further be based on the detection of the gross body movements. Alternatively or additionally, the method 100 may further comprise analyzing the body movement data to detect heart rate, and the determining 140 the sleep state of the subject to be the deep-sleep state or the light-sleep state may further be based on the detection of the heart rate. In a specific example, the body movement data may be split up into gross body movement data, heart beat data and respiration data, e.g., by using frequency analysis. For example, band filtering may allow only signals to pass that have a frequency consistent with heart beat, respiration and gross body movements, respectively. The typical frequency of heart rate is higher than that of respiration, which in turn is higher than that of gross body movements. In an example, when the subject is a baby: for babies up to 1 year, the resting heart rate lies between 100-160 bpm, and for children aged 1-10 years old 60-140 bpm is considered healthy. Regarding respiration rate, for babies up to 6 weeks, the resting respiration rate lies between 30-60 breaths per minute, for children aged up to 6 months old 25-40 is considered healthy, and at the age of 3 this range is lowered to 20-30. In a further example, the amplitudes of signals may be used in analyzing 130 body movement data. Heart rate has a smaller amplitude than respiration, which in turn has a smaller amplitude than gross body movements. Amplitude may thus be used to identify whether the body movement data represents heart rate, respiration or gross body movement. In general, a signal from a body movement sensor is typically most energetic and of highest amplitude when it is of a gross body movement. The signal of respiration is typically of lesser amplitude and lower frequency, whereas the signal representing heart rate is of least amplitude, and thus is weakest, but typically has a higher frequency compared to respiration.
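The frequency and amplitude orderings above can be illustrated with a crude classifier; zero-crossing counting stands in for proper band filtering, and the amplitude and frequency thresholds are illustrative assumptions only:

```python
def classify_component(signal, sample_rate_hz):
    """Crude split of a body movement trace using the orderings stated
    in the text: gross movement has the largest amplitude, heart
    activity the highest frequency and smallest amplitude.

    Zero crossings stand in for a proper band filter; the thresholds
    (amplitude in sensor units, frequency in Hz) are illustrative."""
    amplitude = max(signal) - min(signal)
    crossings = sum(
        1 for a, b in zip(signal, signal[1:]) if (a < 0) != (b < 0)
    )
    dominant_hz = crossings / 2.0 * sample_rate_hz / len(signal)
    if amplitude > 5.0:          # strongest signal: gross body movement
        return "gross movement"
    if dominant_hz > 1.0:        # above ~60 events/min: heart beat range
        return "heart rate"
    return "respiration"
```

A real system would use calibrated band-pass filters per component, but the decision logic follows the same two orderings.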

The determining 140 the sleep state of the subject may be performed in a direct response, e.g. upon occurrence, or an indirect response, e.g. after post-filtering, to a detection. For example, a sleep state may be determined immediately after a detection of vocalization, or the sleep state may be determined after filtering the audio data. For example, after analyzing and filtering, only vocalizations with an intensity above a threshold may be considered important for the determination of the sleep state. In another example, detections may be post-filtered. The determining the sleep state of the subject may comprise determining a data vector comprising as elements valuations of at least the detected vocalization, eye openness, eye movement in closed-eye state and respiration of the subject. The determining the sleep state of the subject may further comprise calculating a matching value between the data vector and each of a plurality of classifying vectors, each of the plurality of the classifying vectors being associated with a different one of the sleep states, thereby obtaining multiple matching values. The determining the sleep state of the subject may further comprise selecting a sleep state associated with a largest matching value of the multiple matching values as the determined sleep state.

Fig. 2 shows a flow chart 200 of the sleep state of a subject being determined in a conditional manner. In this example, if vocalization 210 is detected, the sleep state is determined to be the awake-vocalization state 202. Else, if eye openness 220 is detected, the sleep state is determined to be the awake-non-vocalization state 204. Else, if eye movement 230 in closed-eye state is detected, the sleep state is determined to be the REM-sleep state 206. Else, the deep-sleep state 208 or a light-sleep state 209 is determined based on, for example, a frequency or amplitude of a signal representing the respiration 240 of the subject.
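The conditional cascade of Fig. 2 maps directly onto a chain of tests. In this sketch, a respiration-rate threshold separates deep from light sleep; the threshold value is an assumed illustration, since Fig. 2 leaves the exact respiration criterion open:

```python
def classify_sleep_state(vocalization, eyes_open, closed_eye_movement,
                         respiration_rate, deep_sleep_threshold=25):
    """Conditional sleep-state determination as in the flow chart of Fig. 2.

    Each test is only reached if all earlier tests were negative. Slow,
    regular respiration (rate at or below the assumed threshold, in breaths
    per minute) is taken here as the marker of deep sleep.
    """
    if vocalization:
        return "awake-vocalization"
    if eyes_open:
        return "awake-non-vocalization"
    if closed_eye_movement:
        return "REM-sleep"
    if respiration_rate <= deep_sleep_threshold:
        return "deep-sleep"
    return "light-sleep"
```

Because vocalization is tested first, a vocalizing subject is classified as awake-vocalization even if the eyes are closed, matching the priority order of the flow chart.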

Fig. 3A shows, on the left, an image 305 illustrating a result of a face detection technique applied to an image of a subject, yielding a face area 310. In an example, a face detector as described by Viola & Jones (2001) may be used to segment the face area 310 from the rest of the body of the subject, as shown in Fig. 3A, on the right. Fig. 3B shows, on the left, the detected face area 310 and, on the right, an image 312 illustrating a result of an eye movement detection algorithm in a closed-eye state of the subject when no eye movement is detected. Fig. 3C shows, on the left, the detected face area 310 and, on the right, an image 314 illustrating a result of analyzing the video data to detect an eye movement in the closed-eye state of a subject when an eye movement 320 is detected. In Fig. 3C, on the right, bright areas indicate movement and dark areas indicate no movement. In an example, motion estimation techniques as described in De Haan et al. (1993) and Horn & Schunck (1981) may be used.
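A much simpler stand-in for the motion-estimation step can illustrate the idea of detecting eyelid movement while compensating for whole-head motion: compare consecutive face-area frames, and subtract a face-wide motion estimate from the local motion around the eyes. The frame-differencing scheme, the eye-box coordinates and the detection threshold below are all illustrative assumptions, not the cited block-matching or optical-flow methods:

```python
import numpy as np

def eye_movement_score(prev_face, curr_face, eye_box):
    """Frame-difference motion in the eye region, with the face-wide median
    difference subtracted as a rough compensation for head motion.
    `eye_box` = (row0, row1, col0, col1), assumed known from a detector."""
    diff = np.abs(curr_face.astype(float) - prev_face.astype(float))
    face_motion = np.median(diff)           # global (head) motion estimate
    r0, r1, c0, c1 = eye_box
    eye_motion = diff[r0:r1, c0:c1].mean()  # local motion around the eyes
    return max(eye_motion - face_motion, 0.0)

rng = np.random.default_rng(0)
prev = rng.integers(0, 255, size=(64, 64)).astype(float)
curr = prev.copy()
curr[20:30, 15:45] += 40.0                 # simulate eyelid flutter under closed lids

score = eye_movement_score(prev, curr, eye_box=(18, 32, 10, 50))
moving = score > 5.0                       # detection threshold (assumed)
```

A bright region in the difference image inside the eye box, as in Fig. 3C, yields a high score; identical frames yield zero.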

Fig. 4 shows a system 400 for determining a sleep state of a subject 405 based on sensor data. The system may comprise an audio analyzer 420 for analyzing audio data 013 of a microphone 407 to detect vocalization of the subject 405. The system may further comprise a video analyzer 410 for analyzing video data 011 of the camera 406 to detect eye openness, and eye movement in closed-eye state of the subject 405. The system may further comprise a sensor data analyzer 430 for analyzing body movement data 015 of the body movement sensor 408 to detect respiration of the subject. The system may further comprise a classifier 440 for determining the sleep state of the subject. Data 016 representing the determined sleep state of the subject may be outputted, for example to a display, storage or a mobile phone (shown schematically as a box 480 in Fig. 4).
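The decomposition of system 400 into three analyzers feeding one classifier can be sketched as plain function composition. The stub analyzers and the classifier rule below are placeholders standing in for the real audio, video and sensor analysis described above; all names and threshold values are assumed for illustration:

```python
class SleepMonitor:
    """Minimal wiring of audio analyzer 420, video analyzer 410,
    sensor data analyzer 430 and classifier 440 of Fig. 4."""

    def __init__(self, audio_analyzer, video_analyzer, sensor_analyzer, classifier):
        self.audio_analyzer = audio_analyzer
        self.video_analyzer = video_analyzer
        self.sensor_analyzer = sensor_analyzer
        self.classifier = classifier

    def determine_state(self, audio_data, video_data, movement_data):
        vocalization = self.audio_analyzer(audio_data)
        eyes_open, closed_eye_movement = self.video_analyzer(video_data)
        respiration = self.sensor_analyzer(movement_data)
        return self.classifier(vocalization, eyes_open,
                               closed_eye_movement, respiration)

# Stub analyzers and a conditional classifier, for illustration only:
monitor = SleepMonitor(
    audio_analyzer=lambda a: max(a) > 0.5,            # loud vocalization?
    video_analyzer=lambda v: (v["eyes_open"], v["eye_motion"] > 0.1),
    sensor_analyzer=lambda m: sum(m) / len(m),        # mean respiration rate
    classifier=lambda voc, eyes, rem, resp:
        "awake-vocalization" if voc else
        "awake-non-vocalization" if eyes else
        "REM-sleep" if rem else
        ("deep-sleep" if resp <= 25 else "light-sleep"),
)

state = monitor.determine_state(
    audio_data=[0.1, 0.2],
    video_data={"eyes_open": False, "eye_motion": 0.4},
    movement_data=[20, 22, 21],
)
# state -> "REM-sleep"
```

The data 016 output step of Fig. 4 would then simply forward `state` to a display, storage or mobile phone.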

It is noted that the camera 406 may be located above a pillow or head location in bed so that at least one eye is visible in a camera image when the subject 405 is facing towards the left or right side of the pillow. The body movement sensor 408 may be mounted to the bed of the subject or may be placed in the mattress, etc. As body movement sensor 408, the "3Bender" as described in the co-pending application appl. no. PCT/EP2014/075726 may be used.

It is also noted that, alternatively, the body movement sensor may be embodied by the camera, and the video data of the camera may also represent the body movement data. As such, a single sensor, namely the video camera, may be used to obtain both the video recording and the body movement data. For example, camera-based methods as described by Bartula et al. (Camera-based system for contactless monitoring of respiration. In Engineering in Medicine and Biology Society IEEE; 2672-2675, 2013) may be used. It is also known to detect heart rate from a video recording, as well as gross body movement.

It is noted that the subject 405 may be a baby and the system 400 may be a baby monitoring device. Determining the sleep states of a baby may have advantages, for example, when parents consider waking up the baby. Parents may like to know whether the baby is in deep sleep or light sleep in order to wake up the baby during light sleep, since this may result in a happier baby than waking the baby up from deep sleep. Parents may also like to know whether there are abnormalities in the sleeping patterns and in the behavior of the baby. In such sleep state monitoring, it is of interest to determine whether or not a subject is sleeping and, if the subject is sleeping, in which phase of sleep the subject is.

The system 400 may be embodied as, or in, a single device or apparatus. The device or apparatus may comprise one or more microprocessors which execute appropriate software. The software may have been downloaded and/or stored in a corresponding memory, e.g., a volatile memory such as RAM or a non-volatile memory such as Flash. Alternatively, the functional units of the system, e.g., the analyzers 410-430 and classifier 440, may be implemented in the device or apparatus in the form of programmable logic, e.g., as a Field-Programmable Gate Array (FPGA). In general, each functional unit of the system may be implemented in the form of a circuit. It is noted that the system 400 may also be implemented in a distributed manner, e.g., involving different devices or apparatuses. For example, the distribution may be in accordance with a client-server model.

The method 100 may be implemented on a computer as a computer implemented method, as dedicated hardware, or as a combination of both. As illustrated in Fig. 5, instructions for the computer, i.e., executable code, may be stored on a computer program product 510, e.g., in the form of a series 520 of machine readable physical marks and/or as a series of elements having different electrical, e.g., magnetic, or optical properties or values. The executable code may be stored in a transitory or non-transitory manner.

Examples of computer program products include memory devices, optical storage devices 510, integrated circuits, servers, online software, etc. Fig. 5 shows an optical disc.

The above described system and method enable a distinction between vocalization, wake, REM, light and deep sleep stages. This distinction enables an alternative for expensive and time-consuming polysomnographic sleep monitoring, which is currently only done in dedicated sleep labs. In an experimental set-up the inventors have used a microphone, piezoelectric sensor, video camera and PSG to derive the sleep states of a baby. PSG was used to derive physiological sleep stages according to AASM guidelines (Grigg-Damberger et al., The Visual Scoring of Sleep and Arousal in Infants and Children. Journal of Clinical Sleep Medicine, 3, 201-243, 2007), while the microphone, piezoelectric sensor and video images were used to derive the behavioural sleep stages (according to Prechtl, The behavioural states of the newborn infant (a review). Brain Research, 76, 185-212, 1974). The comparison resulted in a close agreement for the detection of wake and deep sleep. That is, of the 279 epochs indicated by PSG as wake, 95.3% was identified as alertness by behavioural sleep scoring. Also, of the 471 epochs indicated by PSG as stage 3 sleep (deep sleep), 77.9% was identified as quiet sleep (deep sleep) by behavioural sleep scoring. Known behavioural sleep scoring, which is based on the detection of observable manifestations of sleep, may not enable detection of REM sleep. However, REM sleep may be detected by means of analysis of video images. For example, eye movements may be detected by means of video analysis. By combining information from the multimodal sensor set-up, the inventors have provided a system and method for 5-state sleep classification in infants. More specifically, the inventors have been able to distinguish between vocalization, wake, light sleep, deep sleep and REM sleep using the provided system and method.
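The agreement figures quoted above (e.g., 95.3% of 279 PSG wake epochs) are per-state fractions over epoch sequences. A minimal sketch of that computation, on made-up label sequences rather than the study's data:

```python
def agreement(reference, predicted, state):
    """Fraction of `reference` epochs labelled `state` that received the
    same label from the second scoring method."""
    hits = sum(1 for r, p in zip(reference, predicted)
               if r == state and p == state)
    total = sum(1 for r in reference if r == state)
    return hits / total if total else float("nan")

# Illustrative epoch labels: PSG-derived reference vs. behavioural scoring.
reference = ["wake"] * 19 + ["deep"] * 10 + ["light"] * 5
predicted = ["wake"] * 18 + ["light"] + ["deep"] * 8 + ["light"] * 7

wake_agreement = agreement(reference, predicted, "wake")  # 18 of 19 wake epochs
deep_agreement = agreement(reference, predicted, "deep")  # 8 of 10 deep epochs
```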

It should be noted that the figures are purely diagrammatic and not drawn to scale. In the figures, elements which correspond to elements already described may have the same reference numerals.

It will be appreciated that the invention also applies to computer programs, particularly computer programs on or in a carrier, adapted to put the invention into practice. The program may be in the form of a source code, an object code, a code intermediate source and an object code such as in a partially compiled form, or in any other form suitable for use in the implementation of the method according to the invention. It will also be appreciated that such a program may have many different architectural designs. For example, a program code implementing the functionality of the method or system according to the invention may be sub-divided into one or more sub-routines. Many different ways of distributing the functionality among these sub-routines will be apparent to the skilled person. The subroutines may be stored together in one executable file to form a self-contained program. Such an executable file may comprise computer-executable instructions, for example, processor instructions and/or interpreter instructions (e.g. Java interpreter instructions). Alternatively, one or more or all of the sub-routines may be stored in at least one external library file and linked with a main program either statically or dynamically, e.g. at run-time. The main program contains at least one call to at least one of the sub-routines. The sub-routines may also comprise function calls to each other. An embodiment relating to a computer program product comprises computer-executable instructions corresponding to each processing stage of at least one of the methods set forth herein. These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically. Another embodiment relating to a computer program product comprises computer-executable instructions corresponding to each means of at least one of the systems and/or products set forth herein. 
These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically.

The carrier of a computer program may be any entity or device capable of carrying the program. For example, the carrier may include a data storage, such as a ROM, for example, a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example, a hard disk. Furthermore, the carrier may be a transmissible carrier such as an electric or optical signal, which may be conveyed via electric or optical cable or by radio or other means. When the program is embodied in such a signal, the carrier may be constituted by such a cable or other device or means. Alternatively, the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted to perform, or used in the performance of, the relevant method.

It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb "comprise" and its conjugations does not exclude the presence of elements or stages other than those stated in a claim. The article "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Examples, embodiments or optional features, whether indicated as non- limiting or not, are not to be understood as limiting the invention as claimed.

CLAIMS:
1. A method (100) for determining a sleep state of a subject based on sensor data obtained from a microphone and a camera which monitor the subject and from a body movement sensor which senses body movement of the subject, the method comprising:
analyzing (110) audio data of the microphone to detect vocalization of the subject;
analyzing (120) video data of the camera to detect eye openness and eye movement in closed-eye state of the subject;
analyzing (130) body movement data of the body movement sensor to detect respiration of the subject;
- classifying (140) the sleep state of the subject to be one of:
(a) an awake-vocalization state based on the detection of vocalization;
(b) an awake-non-vocalization state based on the detection of eye openness;
(c) a REM-sleep state based on the detection of eye movement in closed-eye state;
(d) a deep-sleep state or a light-sleep state based on the detection of respiration.
2. The method according to claim 1, wherein the sleep state of the subject is determined according to the following conditions to be:
(a) the awake-vocalization state based on the detection of vocalization;
(b) or else, the awake-non-vocalization state based on the detection of eye openness;
(c) or else, the REM-sleep state based on the detection of eye movement in closed-eye state;
(d) or else, the deep-sleep state or a light-sleep state based on the detection of respiration.
3. The method according to claim 1 or 2, wherein the method further comprises analyzing the video data to detect mouth movement of the subject, and wherein the determining the sleep state of the subject to be the awake-vocalization state is further based on the detection of the mouth movement.
4. The method according to any of preceding claims, wherein the method further comprises analyzing the body movement data to detect body movements of the subject, and wherein the determining the sleep state of the subject to be the awake-non- vocalization state is further based on the detection of the body movements.
5. The method according to any of preceding claims, wherein the method further comprises analyzing the body movement data to detect heart rate, and wherein the determining the sleep state of the subject to be the deep-sleep state or the light-sleep state is further based on the detection of the heart rate.
6. The method according to any of preceding claims, wherein the analyzing the audio data to detect the vocalization of the subject comprises:
detecting harmonic components characteristic of human voice, and determining an intensity of the harmonic components.
7. The method according to any of preceding claims, wherein the analyzing the video data to detect the eye openness of the subject comprises:
using a face detection technique to detect a face area of the subject, and applying an eye open/eye closed detection algorithm to the detected face area.
8. The method according to any of preceding claims, wherein the analyzing the video data to detect the eye movement in the closed-eye state of the subject comprises:
estimating eye motion using a first motion estimation technique, estimating face motion of the subject using a second motion estimation technique, and
in estimating the eye motion, compensating for the face motion.
9. The method according to any of the preceding claims, wherein the body movement sensor is embodied by the camera, and wherein the video data of the camera also represents the body movement data.
10. The method according to any of claims 1-8, wherein the body movement sensor comprises or is constituted by a piezo-electric sensor.
11. The method according to claim 1, wherein the determining the sleep state of the subject comprises:
determining a data vector comprising as elements valuations of at least the detected vocalization, eye openness, eye movement in closed-eye state and respiration of the subject;
calculating a matching value between the data vector and each of a plurality of classifying vectors, each of the plurality of the classifying vectors being associated with a different one of the sleep states, thereby obtaining multiple matching values;
selecting a sleep state associated with a largest matching value of the multiple matching values as the determined sleep state.
12. The method according to any of preceding claims, wherein the determining the sleep state of the subject further comprises:
determining whether the determined sleep state of the subject matches with a pre-selected sleep state; and
if the determined sleep state of the subject matches with the pre-selected sleep state, generating an indicative signal.
13. The method according to any of the preceding claims, wherein the method further comprises outputting data representing the determined sleep state of the subject.
14. A computer program product comprising a computer readable medium having computer readable code embodied therein, the computer readable code being configured such that, on execution by a suitable computer or processor, the computer or processor is caused to perform any of the methods of claims 1 to 13.
15. A system (400) for determining a sleep state of a subject (405) based on sensor data (011, 013, 015) obtained from a microphone (407) and a camera (406) which monitor the subject and from a body movement sensor (408) which senses body movement of the subject (405), the system (400) comprising:
an audio analyzer (420) for analyzing audio data (013) of the microphone (407) to detect vocalization of the subject (405);
a video analyzer (410) for analyzing video data (011) of the camera (406) to detect eye openness, and eye movement in closed-eye state, of the subject (405);
a sensor data analyzer (410, 430) for analyzing body movement data (011, 015) of the body movement sensor (406, 408) to detect respiration of the subject (405);
a classifier (440) for classifying the sleep state of the subject (405) to be one of:
(a) an awake-vocalization state based on the detection of vocalization;
(b) an awake-non-vocalization state based on the detection of eye openness;
(c) a REM-sleep state based on the detection of eye movement in closed-eye state;
(d) a deep-sleep state or a light-sleep state based on the detection of respiration.
PCT/EP2016/061525 2015-06-03 2016-05-23 Sleep monitoring method and system WO2016193030A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP15170480 2015-06-03
EP15170480.6 2015-06-03

Publications (1)

Publication Number Publication Date
WO2016193030A1 (en) 2016-12-08

Family

ID=53284099

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2016/061525 WO2016193030A1 (en) 2015-06-03 2016-05-23 Sleep monitoring method and system

Country Status (1)

Country Link
WO (1) WO2016193030A1 (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5853005A (en) * 1996-05-02 1998-12-29 The United States Of America As Represented By The Secretary Of The Army Acoustic monitoring system
US6280392B1 (en) 1998-07-29 2001-08-28 Denso Corporation Infant condition monitoring system and method using load cell sensor sheet
JP2003334251A (en) * 2002-05-21 2003-11-25 Daikin Ind Ltd Sleep controlling device and capsule bed
US20070156060A1 (en) 2005-12-29 2007-07-05 Cervantes Miguel A Real-time video based automated mobile sleep monitoring using state inference
US20080157956A1 (en) * 2006-12-29 2008-07-03 Nokia Corporation Method for the monitoring of sleep using an electronic device
US20140046184A1 (en) 2011-03-30 2014-02-13 Koninklijke Philips N.V. Contactless sleep disorder screening system
US20140057232A1 (en) * 2011-04-04 2014-02-27 Daniel Z. Wetmore Apparatus, system, and method for modulating consolidation of memory during sleep
KR20140074567A (en) * 2012-12-10 2014-06-18 Keimyung University Industry Academic Cooperation Foundation Prevention method of drowsy driving and system thereof
US20140276090A1 (en) * 2011-03-14 2014-09-18 American Vehcular Sciences Llc Driver health and fatigue monitoring system and method using optics
WO2014151577A1 (en) * 2013-03-15 2014-09-25 Stryker Corporation Patient support apparatus with patient information sensors


Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
BARTULA ET AL.: "Camera-based system for contactless monitoring of respiration", ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY IEEE, 2013, pages 2672 - 2675
DE HAAN ET AL.: "True-motion estimation with 3-D recursive search block matching", CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, IEEE TRANSACTIONS ON, vol. 3, no. 5, 1993, pages 368 - 379
EBISAWA: "Improved video-based eye-gaze detection method", INSTRUMENTATION AND MEASUREMENT, IEEE TRANSACTIONS ON, vol. 47, no. 4, 1998, pages 948 - 955
GRIGG-DAMBERGER ET AL.: "The Visual Scoring of Sleep and Arousal in Infants and Children", JOURNAL OF CLINICAL SLEEP MEDICINE, vol. 3, 2007, pages 201 - 243
HORN; SCHUNCK: "Determining optical flow", ARTIFICIAL INTELLIGENCE, vol. 17, no. 1, 1981, pages 185 - 203
MORIMOTO ET AL.: "Pupil detection and tracking using multiple light sources", IMAGE AND VISION COMPUTING, vol. 18, no. 4, 2000, pages 331 - 335
NATALAMPIRAS ET AL.: "Acoustic detection of human activities in natural environments", JOURNAL AUDIO ENG. SOC., vol. 60, no. 9, 2012
PRECHTL: "The behavioural states of the newborn infant (a review", BRAIN RESEARCH, vol. 76, 1974, pages 185 - 212
VIOLA; JONES: "Rapid object detection using a boosted cascade of simple features", PROCEEDINGS OF THE 2001 IEEE COMPUTER SOCIETY CONFERENCE, vol. 1, pages 1 - 511


Legal Events

Date Code Title Description
DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16724061

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE