WO2013008150A1 - Signal processor for determining an alertness level - Google Patents

Signal processor for determining an alertness level Download PDF

Info

Publication number
WO2013008150A1
Authority
WO
WIPO (PCT)
Prior art keywords
event
respiration
user
yawn
signal
Prior art date
Application number
PCT/IB2012/053434
Other languages
French (fr)
Inventor
Mirela Alina Weffers-Albu
Jens MÜHLSTEFF
Stijn De Waele
Igor Berezhnyy
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V.
Publication of WO2013008150A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/087 Measuring breath flow
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/113 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb occurring during breathing
    • A61B5/1135 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb occurring during breathing by monitoring thoracic expansion
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/168 Evaluating attention deficit, hyperactivity
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/18 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state for vehicle drivers or machine operators
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2562/00 Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
    • A61B2562/02 Details of sensors specially adapted for in-vivo measurements
    • A61B2562/0257 Proximity sensors
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6887 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient mounted on external non-worn devices, e.g. non-medical devices
    • A61B5/6893 Cars
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271 Specific aspects of physiological measurement analysis
    • A61B5/7282 Event detection, e.g. detecting unique waveforms indicative of a medical condition

Definitions

  • the present invention relates to a signal processor and a method for determining an alertness level of a user, in particular to detect drowsiness of a user.
  • the present invention further relates to a system comprising such signal processor and a computer program for implementing such method.
  • US 7,397,382 B2 discloses a drowsiness detecting apparatus having a pulse wave sensor and a determination circuit.
  • the sensor is provided to a steering wheel to detect a pulse wave of a vehicle driver gripping the steering wheel.
  • the determination circuit generates a thorax pressure signal indicative of the depth of breathing by envelope-detecting a pulse wave signal of the sensor and determines whether the driver is drowsy by comparing a pattern of the thorax pressure signal with a reference pattern.
  • a depth of breathing of a person is detected, and drowsiness of the person is determined when the depth of breathing falls in a predetermined breathing condition including at least one of a sudden decrease in the depth of breathing and a periodic repetition of deep breathing and shallow breathing.
  • a signal processor for determining an alertness level of a user is presented. The signal processor is adapted to receive a respiration signal of a user, the respiration signal having an amplitude over time, to detect at least one yawn event and/or speech event based on the respiration signal, and to determine an alertness level of the user based on the at least one detected yawn event and/or detected speech event.
  • a system for determining an alertness level of a user comprises the signal processor of the invention, and a respiration sensor providing the respiration signal of the user.
  • a method for determining an alertness level of a user comprises receiving a respiration signal of a user, the respiration signal having an amplitude over time, detecting at least one yawn event and/or speech event based on the respiration signal, and determining an alertness level of the user based on the at least one detected yawn event and/or detected speech event.
  • a computer program is presented comprising program code means for causing a computer to carry out the steps of the method of the invention when said computer program is carried out on the computer.
  • the basic idea of the invention is to detect yawn event(s) and/or speech event(s) based on a respiration signal provided by a respiration sensor, and to determine the alertness level of the user based on the detected yawn event(s) and/or detected speech event(s).
  • by a yawn event it is meant that the respiration signal indicates that the user is yawning.
  • by a speech event it is meant that the respiration signal indicates that the user is speaking.
  • a yawn event and a speech event each are a signal anomaly in the respiration signal. In particular, if a low alertness level is determined, drowsiness of the user can be detected.
  • if a yawn event is detected, it is determined that the alertness level of the user is low.
  • if a speech event is detected, it is determined that the alertness level of the driver is high.
  • both a yawn event and a speech event can be detected based on one single respiration signal or respiration sensor. Thus, only one respiration sensor is needed to reliably detect the alertness level of the user.
  • Each of a yawn event and a speech event can be clearly distinguished from normal breathing of a user based on the respiration signal.
  • a classification of the respiration signal or pattern can be performed, thereby classifying into a yawn event, a speech event, or normal breathing.
  • the use of the respiration signal to determine the state of the user or the alertness level is particularly advantageous for detecting drowsiness of the user (e.g. a driver) early in time, for example before the user has lost full control over the car he/she is driving due to fatigue.
  • using a respiration sensor for measuring or providing a respiration signal of the user, in order to detect yawn event(s) and/or speech event(s) has at least one of the following advantages: usability at night time, insensitivity to changes in illumination, insensitivity to special movements of the user (e.g. covering his/her mouth while yawning or speaking), and insensitivity to clothing of the user (e.g. wearing thick winter clothes).
  • the signal processor is adapted to detect the at least one yawn event by detecting an amplitude peak if the amplitude of the respiration signal exceeds a preset threshold. This provides an easy and computationally inexpensive way of detecting the yawn event in the respiration signal.
  • the preset threshold is selected to be less than an amplitude of a speech event and/or an amplitude of a normal breathing of the user. In this way, a precise distinction in the respiration signal between yawning of the user and other activities of the user, such as speaking or normal breathing, can be made.
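The threshold-based detection above can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the sampling setup, the synthetic signal and the `min_gap` event-merging parameter are assumptions. Since a yawn involves a deep inhalation, the detector looks for excursions of the respiration amplitude past a threshold set outside the speech and normal-breathing range:

```python
import numpy as np

def detect_yawn_events(respiration, threshold, min_gap=50):
    """Detect yawn events as excursions of the respiration signal past a
    preset threshold. `min_gap` (in samples, an assumption) merges
    crossings belonging to the same excursion into one event."""
    below = respiration < threshold  # yawn = deep inhalation dip
    # indices where the signal first crosses the threshold downwards
    crossings = np.flatnonzero(below[1:] & ~below[:-1]) + 1
    events = []
    for idx in crossings:
        if not events or idx - events[-1] > min_gap:
            events.append(int(idx))
    return events

# synthetic signal: normal breathing with one deep yawn dip
t = np.linspace(0, 30, 3000)
sig = np.sin(2 * np.pi * 0.25 * t)   # baseline breathing at 0.25 Hz
sig[1000:1200] -= 3.0                # simulated yawn inhalation
print(detect_yawn_events(sig, threshold=-2.0))  # → [1000]
```

With a real respiration sensor, the threshold would be calibrated per user, for example from a short baseline recording.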
  • the signal processor is adapted to determine an amplitude frequency distribution of the amplitudes over time, or its histogram representation, of at least a part of the respiration signal, and to detect the at least one yawn event and/or speech event based on the determined amplitude frequency distribution, or its histogram representation.
  • the histogram representation is a representation of the amplitude frequency distribution of the amplitudes over time.
  • the amplitude frequency distribution can be the distribution of the frequency of occurrence of different amplitudes (amplitude values) in different ranges.
  • a frequency distribution is thus a statistical measure to analyse the distribution of amplitudes of the respiration signal.
  • the shape of a histogram representation of the frequency distribution can be determined, and the at least one yawn event and/or speech event can be detected based on the shape of the histogram representation. This provides for an easy and reliable detection.
  • the part of the respiration signal comprises exactly one yawn event.
  • a time window sized to only measure exactly one yawn event can be used.
  • Exactly one yawn event can be reliably detected based on the amplitude frequency distribution, or its histogram representation, in particular the shape of its histogram representation.
  • the part of the respiration signal (or time window) can be in the range of an average time of a yawn event (or equal to or bigger than an average time of a yawn event), for example between 3 and 8 seconds, or about 5 seconds.
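A minimal sketch of this windowed amplitude-histogram feature (the function name, sampling rate and bin count are illustrative assumptions): the signal is cut into windows of roughly one average yawn duration, and each window's amplitude frequency distribution is computed with shared bin edges so that windows are directly comparable.

```python
import numpy as np

def window_histograms(respiration, fs, window_s=5.0, bins=10):
    """Slice the respiration signal into ~5 s windows (about one yawn
    duration, per the text) and compute each window's amplitude
    histogram. Bin edges are fixed over the whole signal so the
    histograms are comparable (a design choice, not from the source)."""
    win = int(window_s * fs)
    edges = np.linspace(respiration.min(), respiration.max(), bins + 1)
    hists = []
    for start in range(0, len(respiration) - win + 1, win):
        h, _ = np.histogram(respiration[start:start + win], bins=edges)
        hists.append(h)
    return np.array(hists), edges

fs = 100  # Hz, assumed sampling rate
t = np.arange(0, 20, 1 / fs)
sig = np.sin(2 * np.pi * 0.25 * t)
sig[500:700] -= 3.0  # simulated yawn in the second window
hists, edges = window_histograms(sig, fs)
# only the yawn window has mass in the lowest amplitude bin
print(hists[1][0] > 0, hists[0][0] > 0)  # → True False
```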
  • the signal processor is adapted to determine at least one feature of at least part of the respiration signal, the at least one feature selected from the group comprising an amplitude frequency distribution (or its histogram representation), an average number of respiration cycles, a median number of respiration cycles per epoch, an inclination coefficient (showing whether the respiration rate goes up or down), an average of amplitudes of respiration cycles, a median of amplitudes of respiration cycles, and an inclination coefficient (showing whether the amplitude goes up or down).
  • the signal processor is adapted to detect the at least one yawn event and/or speech event using a machine learning algorithm. This provides for a reliable detection.
  • This embodiment can in particular be used in combination with the embodiment of determining an amplitude frequency distribution (or its histogram representation).
  • the amplitude frequency distribution (or its histogram representation) can be used as an input for the machine learning algorithm.
  • at least one, in particular a number of, features of the previous embodiment can be used as an input for the machine learning algorithm.
  • the signal processor can be adapted to determine a multi-dimensional feature vector based on (at least part of) the respiration signal, wherein the dimension corresponds to the number of features.
  • the signal processor is adapted to receive at least one respiration training signal selected from the group comprising a respiration training signal indicative of normal breathing of the user, a respiration training signal indicative of yawning of the user, and a respiration training signal indicative of speech of the user. In this way an adaptive system can be provided.
  • the signal processor is adapted to use a clustering technique to determine a yawn event cluster and/or a speech event cluster using the at least one respiration training signal. This provides for an easy classification and/or visual representation of the classification.
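The clustering step can be sketched as follows. The patent does not name a specific clustering technique, so a minimal k-means over toy two-dimensional feature vectors (mean amplitude and amplitude variance per window, synthetic values) stands in for it; each training session contributes one block of windows:

```python
import numpy as np

def kmeans(features, init_idx, iters=10):
    """Minimal k-means over per-window feature vectors. Seeded with one
    window from each labeled training session (an illustrative choice;
    the patent does not specify the algorithm)."""
    centers = features[init_idx].astype(float)
    for _ in range(iters):
        # assign each window to the nearest cluster center
        d = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned windows
        centers = np.array([features[labels == j].mean(0)
                            for j in range(len(init_idx))])
    return labels, centers

# toy feature vectors [mean amplitude, amplitude variance] per window,
# 20 synthetic windows per training session
rng = np.random.default_rng(1)
normal = rng.normal([0.0, 0.1], 0.05, (20, 2))   # baseline session
yawn = rng.normal([-2.0, 1.5], 0.05, (20, 2))    # yawning session
speech = rng.normal([0.2, 0.8], 0.05, (20, 2))   # speech session
features = np.vstack([normal, yawn, speech])
labels, centers = kmeans(features, init_idx=[0, 20, 40])
print(set(labels[:20].tolist()), set(labels[20:40].tolist()),
      set(labels[40:].tolist()))  # → {0} {1} {2}
```

At run-time, a new window would be assigned to the nearest learned cluster center, yielding the yawn event / speech event / normal breathing classification.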
  • the signal processor is adapted to determine the alertness level based on at least one criterion selected from the group comprising
  • the respiration sensor is a radar based respiration sensor.
  • This provides an unobtrusive way of measuring the respiration signal of the user.
  • compared to a standard respiration detection, such as for example a body-worn respiration band or glued electrodes (e.g. as used in a hospital), the radar based respiration sensor provides increased usability and comfort. It is therefore particularly suitable for consumer applications (e.g. in a car).
  • the radar-based respiration sensor is a contactless sensor. Thus, it is in particular suitable to be embedded in a small-sized system.
  • the radar based respiration sensor is disposed on or integrated into a seat belt wearable by the user or a steering wheel. This way, the respiration signal of the user can be unobtrusively measured, when the user is for example sitting in a car seat having a seat belt on.
  • system further comprising a feedback unit adapted to provide feedback to the user based on the determined alertness level.
  • the feedback unit is adapted to provide feedback if the determined alertness level is below a preset value. In this way, a warning can be provided to the user, in particular if it is determined that the alertness level is too low, for example when the user is drowsy and is about to fall asleep.
  • Fig. 1 shows a schematic representation of a system for determining an alertness level of a user according to an embodiment
  • Fig. 2 shows a schematic representation of a user wearing a seat belt having a respiration sensor of a system according to an embodiment
  • Fig. 3 shows a diagram of a respiration signal indicating normal breathing of a user
  • Fig. 4 shows a diagram of a respiration signal indicating speech of a user
  • Fig. 5 shows a diagram of a respiration signal indicating yawning of the user
  • Fig. 6 shows a diagram of an amount of time variance between local minima in a respiration signal for four different users
  • Fig. 7 shows a diagram of an amount of amplitude variance in a respiration signal for four different users
  • Fig. 8 shows a diagram of a respiration signal having yawn events detected by a signal processor, system or method according to a first embodiment
  • Fig. 9 shows a diagram of a respiration signal, when the user speaks
  • Fig. 10 shows an exemplary respiration signal
  • Fig. 11 shows a first part of an exemplary respiration signal of Fig. 10, having exactly one yawn event
  • Fig. 12 shows a histogram representation of the part of the respiration signal of
  • Fig. 11 used by a signal processor, system or method according to a second embodiment
  • Fig. 13 shows a second part of the respiration signal of Fig. 10, having a speech event
  • Fig. 14 shows a histogram representation of the part of the respiration signal of Fig. 13, used in a signal processor, system or method according to the second embodiment;
  • Fig. 15 shows a diagram of clusters obtained by a signal processor, system or method according to the second embodiment
  • Fig. 16 shows a respiration signal having yawn events and the mapping of the yawn events to points in the yawn event cluster of Fig. 15;
  • Fig. 17 shows a flow diagram of a method for determining an alertness level of a user according to an embodiment
  • Fig. 18 shows a flow diagram of a method for determining an alertness level of the user according to another embodiment.
  • Fig. 1 shows a schematic representation of a system for determining an alertness level of a user according to an embodiment of the present invention.
  • the system 100 comprises a signal processor 10, and a respiration sensor 20 measuring or providing a respiration signal 12 of the user 1.
  • the respiration signal 12 is transmitted from the respiration sensor 20 to the signal processor 10.
  • the signal processor 10 receives the respiration signal 12 from the respiration sensor 20.
  • the signal processor 10 detects at least one yawn event and/or speech event based on the respiration signal 12.
  • the determination of the yawn event and/or speech event can in particular be performed in real-time.
  • the signal processor 10 determines an alertness level 14 of the user 1 based on the at least one detected yawn event 16 and/or detected speech event.
  • the signal processor can perform or be adapted to perform a classification of the respiration signal or pattern into a yawn event, a speech event, or normal breathing.
  • the classification can for example be performed by a respiration pattern classifier component.
  • the classification can in particular be performed in real-time.
  • the alertness level can then be determined based on the classification into yawn event, speech event or normal breathing. This can for example be performed by an alertness classifier component.
  • the respiration pattern classifier component and/or the alertness classifier component can be part of or implemented in the signal processor.
  • the system further comprises a feedback unit 30 adapted to provide feedback to the user 1 based on the determined alertness level 14.
  • the signal processor 10 transmits the alertness level 14 to the feedback unit 30.
  • the feedback unit 30 receives the alertness level 14 from the signal processor 10.
  • the feedback unit 30 is adapted to provide feedback to the user 1, if the determined alertness level 14 is below a preset value. In this way, a warning can be provided to the user 1, for example when the user 1 is drowsy and is about to fall asleep.
  • Fig. 2 shows a schematic representation of a user 1 wearing a seatbelt 21.
  • the seatbelt 21 can in particular be a seatbelt 21 of a seat in a car.
  • a respiration sensor 20 is integrated into the seatbelt 21, which is worn by the user 1.
  • the seatbelt 21 here refers to a safety seatbelt designed to secure the user 1 against harmful movement that may result from a collision or a sudden stop.
  • the seatbelt 21 is intended to reduce injuries by stopping the user 1 from hitting hard interior elements of the vehicle or other passengers and by preventing the user 1 from being thrown from the vehicle.
  • the respiration sensor 20 is a radar-based respiration sensor, in particular a Doppler radar-based respiration sensor.
  • since the radar-based respiration sensor is a contactless sensor, it is in particular suitable to be embedded in a small-sized system like the seatbelt 21. It will be understood that the radar-based respiration sensor can also be disposed on or integrated in any other suitable object, such as for example a steering wheel, a portable device or a mattress.
  • the radar-based respiration sensor 20 is used to measure and provide the respiration signal of the user 1. This approach enables monitoring of breathing-related thorax motion, and thus breathing of the user, as well as context information, such as the activity of the user.
  • the radar-based respiration sensor 20 is adapted to transmit electromagnetic waves which are reflected at the chest wall of the user and undergo a Doppler frequency shift if the chest wall of the user 1 is moving due to respiration of the user 1. Therefore, the received signal measured by the radar-based respiration sensor 20 contains information about the thorax motion.
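The relation between chest wall motion and the measured shift is the standard Doppler radar equation (not spelled out in the source); for a continuous-wave radar with carrier frequency $f_c$:

```latex
f_d(t) \;=\; \frac{2\,v(t)}{\lambda} \;=\; \frac{2\,v(t)\,f_c}{c}
```

where $v(t)$ is the velocity of the chest wall, $\lambda$ the carrier wavelength and $c$ the speed of light. The low-frequency phase modulation of the received signal therefore tracks the thorax motion directly.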
  • Fig. 3 shows a diagram of a respiration signal 12 indicating normal breathing of a user.
  • Fig. 4 shows a diagram of a respiration signal 12 indicating speech of a user.
  • Fig. 5 shows a diagram of a respiration signal 12 indicating yawning of a user. Each diagram shows the amplitude of the respiration signal over time.
  • Non-eventful breathing refers to normal breathing, also called a baseline session.
  • Eventful breathing refers to a yawning session, during which the user is quiet but yawns, and/or a speech session, during which the user speaks (e.g. reading a passage from a book to simulate a discussion in a car).
  • yawn events 16 can clearly be distinguished in the respiration signal 12. For illustration and simplification purposes only four yawn events of the plurality of yawn events 16 are marked by a circle in Fig. 5.
  • Fig. 6 shows a diagram of an amount of time variance between local minima in a respiration signal for four different users.
  • Fig. 7 shows a diagram of an amount of amplitude variance in a respiration signal for four different users.
  • Fig. 6 and Fig. 7 are based on an evaluation of the three types of respiration signals shown in Figs. 3 to 5.
  • Fig. 6 and Fig. 7 are here used for mere illustration purposes, to show that a differentiation between non-eventful and eventful breathing is possible.
  • Fig. 7 also shows that for all participants the amplitude variance of the respiration signal during the yawning session is significantly higher than the amplitude variance of the respiration signal during the speech session. This is due to the fact that yawns involve deep inhalations, much deeper than when the user speaks, which results in the fact that the minima peaks of the respiration signal have much lower values in the case of a yawning event than in the case of a speech event.
  • Fig. 8 shows a diagram of a respiration signal 12 having yawn events 16 (yawning session) detected by a signal processor, system or method according to a first embodiment of the present invention.
  • the signal processor is adapted to detect the at least one yawn event 16 by detecting an amplitude peak if the amplitude of the respiration signal 12 exceeds a preset threshold 17.
  • the preset threshold 17 is selected to be less than an amplitude of a speech event and an amplitude of a normal breathing of the user.
  • Fig. 9 shows a diagram of a respiration signal, when the user speaks (speech session). In Fig. 9 the same preset threshold 17 is indicated as in Fig. 8. As can be seen in Fig. 9, the respiration signal 12 has only speech and no yawn events are detected. This is due to the fact that the preset threshold 17 is selected to be less than the amplitude of a speech event.
  • Fig. 10 shows an exemplary respiration signal 12.
  • the signal processor is adapted to determine at least one feature of the respiration signal.
  • one of the at least one feature is an amplitude frequency distribution of the amplitudes over time (or its histogram representation) of at least a part of the respiration signal 12.
  • the histogram representation is a representation of the amplitude frequency distribution of the amplitudes over time.
  • the amplitude frequency distribution can be the distribution of the frequency of occurrence of different amplitudes (amplitude values) in different ranges.
  • the signal processor is adapted to determine the amplitude frequency distribution (or its histogram representation) of at least the part of the respiration signal, and to detect the at least one yawn event and/or speech event based on the determined amplitude frequency distribution (or its histogram representation).
  • Fig. 11 shows a first part of a respiration signal of Fig. 10.
  • Fig.13 shows a second part of the respiration signal of Fig. 10.
  • the part of the respiration signal in Fig. 11 comprises exactly one yawn event 16. In this way, a time window sized to cover only exactly one yawn event can be applied to the respiration signal 12.
  • Fig. 12 shows a histogram representation of the part of the respiration signal of Fig. 11.
  • Fig. 14 shows a histogram representation of the part of the respiration signal of Fig. 13.
  • the amplitude frequency distribution (or its histogram representation) of the respiration signal part having the yawn event (Fig. 11) can clearly be distinguished from the amplitude frequency distribution (or its histogram representation) of the respiration signal part having no yawn event (Fig. 13).
  • in Fig. 12, which shows the histogram representation of the respiration signal part having the yawn event 16, there is activity in the lower bins of the histogram representation.
  • Fig. 15 shows a diagram of clusters obtained by a signal processor, system or method according to the second embodiment.
  • the signal processor is adapted to detect the at least one yawn event 16 and/or speech event using a machine learning algorithm.
  • the signal processor is adapted to receive the at least one respiration training signal (or reference signal) selected from the group comprising a respiration training signal indicative of normal breathing of the user, a respiration training signal indicative of yawning of the user, and a respiration training signal indicative of a speech of the user.
  • a clustering technique can then be used to determine a yawn event cluster 32 and a speech event cluster 34 using the at least one respiration training signal, as shown in Fig. 15. Further, additional clusters can be determined, such as normal breathing cluster 36 shown in Fig. 15.
  • in an experiment, the user was asked to breathe normally (baseline session), yawn (yawning session), and speak (speech session) for a few seconds in order to determine the respiration training signal indicative of normal breathing of the user, the respiration training signal indicative of yawning of the user, and the respiration training signal indicative of speech of the user.
  • these respiration training signals are then used at run-time to distinguish between normal breathing, yawning and speech.
  • the signal processor is adapted to perform the machine learning algorithm based on at least one feature of the respiration signal.
  • one of the at least one feature is the amplitude frequency distribution (or its histogram representation) as explained in connection with Fig. 10 to 14.
  • the amplitude frequency distribution (or its histogram representation) is used as an input for the machine learning algorithm.
  • the machine learning algorithm can be based on a number of features. In the experiment in connection with Fig. 15, ten features were used. In this way a multidimensional feature vector can be determined based on (at least part of) the respiration signal, wherein the dimension corresponds to the number of features, thus in this example a 10-dimensional feature vector.
  • Fig. 15 shows a two-dimensional projection of this 10-dimensional feature vector.
  • any other number of features can be used. Examples of such feature are for example an average number of respiration cycles, a median number of respiration cycles per epoch, an inclination coefficient (showing whether the respiration rate goes up or down), an average of amplitudes of respiration cycles, a median of amplitudes of respiration cycles, and an inclination coefficient (showing whether the amplitude goes up or down).
  • any other suitable feature can be used.
  • the combination of features can be such that both yawn events and speech events can be reliably detected.
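A sketch of how such a feature vector might be computed for one window; the feature names follow the list above, but the concrete definitions used here (cycle counting by rising mean-crossings, amplitude as peak-to-trough per cycle, inclination as a least-squares slope) are assumptions for illustration:

```python
import numpy as np

def feature_vector(window, fs):
    """Build a small feature vector for one respiration window:
    [cycle count, mean cycle amplitude, median cycle amplitude,
    inclination coefficient]. Definitions are illustrative."""
    centered = window - window.mean()
    # respiration cycles estimated from rising mean-crossings
    rising = np.flatnonzero((centered[:-1] < 0) & (centered[1:] >= 0))
    cycles = len(rising)
    # cycle amplitudes: peak-to-trough between successive crossings
    amps = [np.ptp(window[a:b]) for a, b in zip(rising[:-1], rising[1:])] \
        or [np.ptp(window)]
    # inclination: slope of a linear fit to the amplitude envelope
    incl = np.polyfit(np.arange(len(window)), np.abs(centered), 1)[0]
    return np.array([cycles, np.mean(amps), np.median(amps), incl])

fs = 100  # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
fv = feature_vector(np.sin(2 * np.pi * 0.25 * t), fs)
print(int(fv[0]), round(float(fv[1]), 2))  # → 3 2.0
```

Stacking such vectors over consecutive windows gives the multidimensional input on which the machine learning algorithm operates.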
  • Fig. 16 shows a respiration signal 12 having yawn events 16 and the mapping of the yawn events 16 to points in the yawn event cluster 32 of Fig. 15.
  • each point of the yawn event cluster 32 corresponds to one of the yawn events 16 in the respiration signal 12.
  • the results of the unsupervised clustering by applying a machine-learning algorithm, as shown in Fig. 15, are successful.
  • a fully adaptive system for detection of yawn event(s) and/or speech event(s) based on the respiration signal can be provided.
  • Fig. 17 shows a flow diagram of a method for determining an alertness level of a user according to an embodiment
  • Fig. 18 shows a flow diagram of a method for determining an alertness level of a user according to another embodiment.
  • a respiration signal is received in an initial step 101.
  • in step 102 it is determined if at least one yawn event is detected. In particular, it can be determined if a specific amount or a specific frequency of yawn events has been detected. If yawn event(s) has or have been detected, the method turns to step 104 of determining if at least one speech event is detected. If at least one speech event has been detected, the alertness level is determined to be a medium alertness level 108. Thus, a medium alertness level is detected if both at least one yawn event and at least one speech event are detected.
  • if no speech event is detected in step 104 (but at least one yawn event has been detected), the alertness level is determined to be a low alertness level 107.
  • a low alertness level 107 can be detected if the amount or frequency of detected yawn events is above a preset threshold. If a frequency of yawn events is increasing, this means that the user is yawning more often.
  • if the result of the determination in step 102 is that no yawn event is detected, in step 103 it is then determined if at least one speech event is detected. If at least one speech event is detected (and no yawn event is detected), the alertness level is determined to be a high alertness level 106. If no speech event is detected (and no yawn event is detected), the alertness level is determined to be neutral, e.g. indicating normal breathing 105.
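The decision logic of the flow in Fig. 17 (steps 102-104 and outcomes 105-108) reduces to a two-flag lookup; the function and enum names here are illustrative, only the mapping itself comes from the text:

```python
from enum import Enum

class Alertness(Enum):
    NEUTRAL = "neutral"  # normal breathing (105)
    HIGH = "high"        # speech events only (106)
    LOW = "low"          # yawn events only (107)
    MEDIUM = "medium"    # both yawn and speech events (108)

def alertness_level(yawn_detected: bool, speech_detected: bool) -> Alertness:
    """Map the two detection flags to an alertness level per Fig. 17."""
    if yawn_detected:
        return Alertness.MEDIUM if speech_detected else Alertness.LOW
    return Alertness.HIGH if speech_detected else Alertness.NEUTRAL

print(alertness_level(True, False).value)   # → low
print(alertness_level(False, True).value)   # → high
```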
  • in step 111, at least one respiration training signal is received, in particular the respiration training signals previously described.
  • in step 112, the current respiration signal of the user is received.
  • in step 113, a yawn event, speech event and/or normal breathing is detected or classified based on the respiration signal using a machine learning algorithm.
  • at least one or a number of features can be used as an input for the machine learning algorithm.
  • a multidimensional feature vector based on (at least part of) the respiration signal can be determined, wherein the dimension corresponds to the number of features. If at least one speech event is detected and no yawn event is detected, indicated by step 114, the alertness level is determined to be a high alertness level 106.
  • if both at least one yawn event and at least one speech event are detected, the alertness level is determined to be a medium alertness level 108, indicated by step 115. If it is determined that at least one yawn event and no speech event is detected, it is determined that the alertness level is a low alertness level 107, indicated by step 116. In particular, in step 116 it can be determined if an amount or frequency of detected yawn events is above a preset threshold, as previously explained.
The present invention can in particular be used in the automotive context for detecting drowsiness of a driver. Drowsiness of a driver can be detected when the alertness level of the user is determined to be low. However, the present invention can not only be applied in an automotive context, but in any other suitable context that requires a high alertness of the user, for example in a plane, a hospital or industrial shift work. Another example is the consumer lifestyle domain, for example for relaxation or sleep applications.
A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.


Abstract

The present invention relates to a signal processor (10) and method for determining an alertness level of a user. The signal processor (10) is adapted to receive a respiration signal (12) of a user (1), the respiration signal (12) having an amplitude over time, detect at least one yawn event (16) and/or speech event based on the respiration signal (12), and determine an alertness level (14) of the user based on the at least one detected yawn event (16) and/or detected speech event.

Description

Signal processor for determining an alertness level
FIELD OF THE INVENTION
The present invention relates to a signal processor and a method for determining an alertness level of a user, in particular to detect drowsiness of a user. The present invention further relates to a system comprising such signal processor and a computer program for implementing such method.
BACKGROUND OF THE INVENTION
There are two general ways of detecting drowsiness (or fatigue) of a user, in particular in the automotive context for detecting drowsiness of a driver. On the one hand, there are techniques that focus on the car behavior and/or context information to determine the state of the driver. These techniques can be inaccurate as they do not focus on the user (e.g. driver), but on the car and/or the context. Furthermore, these techniques provide relevant information only when the driver consistently does not have the car under full control for a certain time duration, meaning that traffic risk has already been high for some time.
On the other hand, there are techniques that focus on the user (e.g. driver) to determine the state of the user. For example, US 7,397,382 B2 discloses a drowsiness detecting apparatus having a pulse wave sensor and a determination circuit. The sensor is provided to a steering wheel to detect a pulse wave of a vehicle driver gripping the steering wheel. The determination circuit generates a thorax pressure signal indicative of the depth of breathing by envelope-detecting a pulse wave signal of the sensor and determines whether the driver is drowsy by comparing a pattern of the thorax pressure signal with a reference pattern. A depth of breathing of a person is detected, and drowsiness of the person is determined when the depth of breathing falls in a predetermined breathing condition including at least one of a sudden decrease in the depth of breathing and a periodic repetition of deep breathing and shallow breathing.
However, this drowsiness detection might not be reliable in all situations that can occur, for example in a situation with a high noise level.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a signal processor and a method for determining an alertness level of a user, in particular for detecting drowsiness of the user, that provides a more reliable determination or detection in particular a robust detection or determination. It is a further object of the present invention to provide a system comprising such signal processor and a computer program for implementing such method.
In a first aspect of the present invention a signal processor for determining an alertness level of a user is presented, the signal processor is adapted to receive a respiration signal of a user, the respiration signal having an amplitude over time, to detect at least one yawn event and/or speech event based on the respiration signal, and to determine an alertness level of the user based on the at least one detected yawn event and/or detected speech event.
In a further aspect of the present invention a system for determining an alertness level of a user is presented, the system comprises the signal processor of the invention, and a respiration sensor providing the respiration signal of the user.
In a further aspect of the present invention a method for determining an alertness level of a user is presented. The method comprises receiving a respiration signal of a user, the respiration signal having an amplitude over time, detecting at least one yawn event and/or speech event based on the respiration signal, and determining an alertness level of the user based on the at least one detected yawn event and/or detected speech event.
In yet a further aspect of the present invention a computer program is presented comprising program code means for causing a computer to carry out the steps of the method of the invention when said computer program is carried out on the computer.
The basic idea of the invention is to detect yawn event(s) and/or speech event(s) based on a respiration signal provided by a respiration sensor, and to determine the alertness level of the user based on the detected yawn event(s) and/or detected speech event(s). With yawn event it is meant that the respiration signal indicates that the user is yawning. With speech event it is meant that the respiration signal indicates that the user is speaking. A yawn event and a speech event each are a signal anomaly in the respiration signal. In particular, if a low alertness level is determined, drowsiness of the user can be detected. For example, if a yawn event is detected, it is determined that the alertness level of the user is low. For example, if a speech event is detected, it is determined that the alertness level of the driver is high. In particular, both a yawn event and a speech event can be detected based on one single respiration signal or respiration sensor. Thus, only one respiration sensor is needed to reliably detect the alertness level of the user. Each of a yawn event and a speech event can be clearly distinguished from normal breathing of a user based on the respiration signal. In particular, a classification of the respiration signal or pattern can be performed, thereby classifying into a yawn event, a speech event, or normal breathing.
For example, in the automotive context, the use of the respiration signal to determine the state of the user or the alertness level is particularly advantageous for detecting drowsiness of the user (e.g. driver) early in time, for example before the user has already lost full control over the car he/she is driving due to fatigue. Compared to, for example, the use of a camera, using a respiration sensor for measuring or providing a respiration signal of the user, in order to detect yawn event(s) and/or speech event(s), has at least one of the following advantages: usability at night time, insensitivity to changes in illumination, insensitivity to special movements of the user (e.g. covering his/her mouth while yawning or speaking), and insensitivity to clothing of the user (e.g. wearing thick winter clothes).
Preferred embodiments of the invention are defined in the dependent claims. It shall be understood that the claimed method and computer program have similar and/or identical preferred embodiments as the claimed signal processor or system and as defined in the dependent claims.
In one embodiment the signal processor is adapted to detect the at least one yawn event by detecting an amplitude peak if the amplitude of the respiration signal exceeds a preset threshold. This provides an easy and computationally inexpensive way of detecting the yawn event in the respiration signal.
In a variant of this embodiment the preset threshold is selected to be less than an amplitude of a speech event and/or an amplitude of a normal breathing of the user. In this way, a precise distinction in the respiration signal between yawning of the user and other activities of the user, such as speaking or normal breathing, can be made.
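The threshold-based yawn detection described above can be sketched as follows. This is an illustrative sketch, not the claimed implementation: the function name, the synthetic signal, and the convention that yawns appear as deep-inhalation excursions beyond the threshold (here, dips below a negative value, as in the signals of Fig. 5) are all assumptions.

```python
def detect_yawn_events(respiration, yawn_threshold):
    """Detect yawn events as excursions of the respiration amplitude
    beyond a preset threshold (sketch). The threshold is assumed to lie
    beyond the amplitude range of speech and normal breathing, so only
    the deep inhalations of a yawn cross it."""
    events = []
    in_event = False
    for i, amplitude in enumerate(respiration):
        if amplitude < yawn_threshold:   # deep inhalation crosses the threshold
            if not in_event:             # start of a new yawn event
                events.append(i)
                in_event = True
        else:
            in_event = False
    return events

# Synthetic signal: normal breathing around +/-1, one yawn dip to -3.
signal = [0.8, -0.9, 1.0, -1.1, -3.0, -2.5, -0.9, 1.0]
print(detect_yawn_events(signal, yawn_threshold=-2.0))  # → [4]
```

Because consecutive samples beyond the threshold are merged into one event, a single yawn is reported once rather than once per sample.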
In a further embodiment the signal processor is adapted to determine an amplitude frequency distribution of the amplitudes over time, or its histogram representation, of at least a part of the respiration signal, and to detect the at least one yawn event and/or speech event based on the determined amplitude frequency distribution, or its histogram representation. The histogram representation is a representation of the amplitude frequency distribution of the amplitudes over time. In other words, the amplitude frequency distribution can be the distribution of the frequency of occurrence of different amplitudes (amplitude values) in different ranges. A frequency distribution is thus a statistical measure to analyse the distribution of amplitudes of the respiration signal. In particular, the shape of a histogram representation of the frequency distribution can be determined, and the at least one yawn event and/or speech event can be detected based on the shape of the histogram representation. This provides for an easy and reliable detection.
In a variant of this embodiment the part of the respiration signal comprises exactly one yawn event. In this way a time window sized to only measure exactly one yawn event can be used. Exactly one yawn event can be reliably detected based on the amplitude frequency distribution, or its histogram representation, in particular the shape of its histogram representation. The part of the respiration signal (or time window) can be in the range of an average time of a yawn event (or equal to or greater than an average time of a yawn event), for example between 3 and 8 seconds, or about 5 seconds.
In another embodiment, the signal processor is adapted to determine at least one feature of at least part of the respiration signal, the at least one feature selected from the group comprising an amplitude frequency distribution (or its histogram representation), an average number of respiration cycles, a median number of respiration cycles per epoch, an inclination coefficient (showing whether the respiration rate goes up or down), an average of amplitudes of respiration cycles, a median of amplitudes of respiration cycles, and an inclination coefficient (showing whether the amplitude goes up or down).
In a further embodiment the signal processor is adapted to detect the at least one yawn event and/or speech event using a machine learning algorithm. This provides for a reliable detection. This embodiment can in particular be used in combination with the embodiment of determining an amplitude frequency distribution (or its histogram representation). The amplitude frequency distribution (or its histogram representation) can be used as an input for the machine learning algorithm. Also, at least one, in particular a number of, features of the previous embodiment can be used as an input for the machine learning algorithm. In particular, the signal processor can be adapted to determine a multi-dimensional feature vector based on (at least part of) the respiration signal, wherein the dimension corresponds to the number of features.
In a variant of this embodiment the signal processor is adapted to receive at least one respiration training signal selected from the group comprising a respiration training signal indicative of normal breathing of the user, a respiration training signal indicative of yawning of the user, and a respiration training signal indicative of speech of the user. In this way an adaptive system can be provided.
In a further variant of this embodiment the signal processor is adapted to use a clustering technique to determine a yawn event cluster and/or a speech event cluster using the at least one respiration training signal. This provides for an easy classification and/or visual representation of the classification.
In a further embodiment the signal processor is adapted to determine the alertness level based on at least one criterion selected from the group comprising: determination of a low alertness level if an amount or a frequency of detected yawn events is above a preset threshold; determination of a medium alertness level if both at least one yawn event and at least one speech event are detected; and determination of a high alertness level if only at least one speech event is detected and no yawn event is detected. This provides for a reliable determination of the alertness level of the user, in particular for detecting drowsiness of the user.
In a further embodiment the respiration sensor is a radar based respiration sensor. This provides an unobtrusive way of measuring the respiration signal of the user. Compared to a standard respiration detection, such as for example a body-worn respiration band or glued electrodes (e.g. used in a hospital), the radar based respiration sensor provides increased usability and comfort. It is therefore particularly suitable for consumer applications (e.g. in a car). The radar-based respiration sensor is a contactless sensor. Thus, it is in particular suitable to be embedded in a small sized system.
In a variant of this embodiment the radar based respiration sensor is disposed on or integrated into a seat belt wearable by the user or a steering wheel. This way, the respiration signal of the user can be unobtrusively measured, when the user is for example sitting in a car seat having a seat belt on.
In a further embodiment the system further comprises a feedback unit adapted to provide feedback to the user based on the determined alertness level. In a variant of this embodiment, the feedback unit is adapted to provide feedback if the determined alertness level is below a preset value. In this way, a warning can be provided to the user, in particular if it is determined that the alertness level is too low, for example when the user is drowsy and is about to fall asleep.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiment(s) described hereinafter. In the following drawings:
Fig. 1 shows a schematic representation of a system for determining an alertness level of a user according to an embodiment;
Fig. 2 shows a schematic representation of a user wearing a seat belt having a respiration sensor of a system according to an embodiment;
Fig. 3 shows a diagram of a respiration signal indicating normal breathing of a user;
Fig. 4 shows a diagram of a respiration signal indicating speech of a user;
Fig. 5 shows a diagram of a respiration signal indicating yawning of the user;
Fig. 6 shows a diagram of an amount of time variance between local minima in a respiration signal for four different users;
Fig. 7 shows a diagram of an amount of amplitude variance in a respiration signal for four different users;
Fig. 8 shows a diagram of a respiration signal having yawn events detected by a signal processor, system or method according to a first embodiment;
Fig. 9 shows a diagram of a respiration signal, when the user speaks;
Fig. 10 shows an exemplary respiration signal;
Fig. 11 shows a first part of an exemplary respiration signal of Fig. 10, having exactly one yawn event;
Fig. 12 shows a histogram representation of the part of the respiration signal of
Fig. 11, used by a signal processor, system or method according to a second embodiment;
Fig. 13 shows a second part of the respiration signal of Fig. 10, having a speech event;
Fig. 14 shows a histogram representation of the part of the respiration signal of Fig. 13, used in a signal processor, system or method according to the second embodiment;
Fig. 15 shows a diagram of clusters obtained by a signal processor, system or method according to the second embodiment;
Fig. 16 shows a respiration signal having yawn events and the mapping of the yawn events to points in the yawn event cluster of Fig. 15;
Fig. 17 shows a flow diagram of a method for determining an alertness level of a user according to an embodiment; and
Fig. 18 shows a flow diagram of a method for determining an alertness level of the user according to another embodiment.
DETAILED DESCRIPTION OF THE INVENTION
Fig. 1 shows a schematic representation of a system for determining an alertness level of a user according to an embodiment of the present invention. The system 100 comprises a signal processor 10, and a respiration sensor 20 measuring or providing a respiration signal 12 of the user 1. The respiration signal 12 is transmitted from the respiration sensor 20 to the signal processor 10. The signal processor 10 receives the respiration signal 12 from the respiration sensor 20. The signal processor 10 detects at least one yawn event and/or speech event based on the respiration signal 12. The determination of the yawn event and/or speech event can in particular be performed in real-time. The signal processor 10 determines an alertness level 14 of the user 1 based on the at least one detected yawn event 16 and/or detected speech event.
In particular, the signal processor can perform or be adapted to perform a classification of the respiration signal or pattern into a yawn event, a speech event, or normal breathing. This can for example be performed by a respiration pattern classifier component. The classification can in particular be performed in real-time. The alertness level can then be determined based on the classification into yawn event, speech event or normal breathing. This can for example be performed by an alertness classifier component. The respiration pattern classifier component and/or the alertness classifier component can be part of or implemented in the signal processor.
The system further comprises a feedback unit 30 adapted to provide feedback to the user 1 based on the determined alertness level 14. The signal processor 10 transmits the alertness level 14 to the feedback unit 30. The feedback unit 30 receives the alertness level 14 from the signal processor 10. The feedback unit 30 is adapted to provide feedback to the user 1, if the determined alertness level 14 is below a preset value. In this way, a warning can be provided to the user 1, for example when the user 1 is drowsy and is about to fall asleep.
Fig. 2 shows a schematic representation of a user 1 wearing a seatbelt 21. The seatbelt 21 can in particular be a seatbelt 21 of a seat in a car. A respiration sensor 20 is integrated into the seatbelt 21, which is worn by the user 1. The seatbelt 21 here refers to a safety seatbelt designed to secure the user 1 against harmful movement that may result from a collision or a sudden stop. The seatbelt 21 is intended to reduce injuries by stopping the user 1 from hitting hard interior elements of the vehicle or other passengers and by preventing the user 1 from being thrown from the vehicle. In this embodiment, the respiration sensor 20 is a radar-based respiration sensor, in particular a Doppler radar-based respiration sensor. Since the radar-based respiration sensor is a contactless sensor, it is in particular suitable to be embedded in a small sized system like the seatbelt 21. It will be understood that the radar-based respiration sensor can also be disposed on or integrated in any other suitable object, such as for example a steering wheel, portable device or a mattress.
The radar-based respiration sensor 20 is used to measure and provide the respiration signal of the user 1. This approach enables to monitor breathing-related thorax motion, thus breathing of the user, as well as context information, such as the activity of the user. The radar-based respiration sensor 20 is adapted to transmit electromagnetic waves which are reflected at the chest wall of the user and undergo a Doppler-frequency shift, if the chest wall of the user 1 is moving due to respiration of the user 1. Therefore, the received signal measured by the radar-based respiration sensor 20 contains information about the thorax motion. The Doppler-radar signal for a single target, which is a good approximation of the thorax of the user, is given by:

x(t) = a(t) · cos(Θ(t))    (1)
The amplitude a(t) can be assumed to be constant, thus a(t) = a₀, since only small distance changes are considered, for example in the centimeter range. This is due to breathing and the beating heart, disregarding large movements of the user. The phase term in equation (1) above can be expressed as:

Θ(t) = (4π/λ) · (d₀ + Σᵢ xᵢ(t))    (2)

where λ is the wavelength of the transmitted waves, and d₀ is the sensor-thorax distance for t = 0. In this example, the sum term in the equation above consists of four terms due to four different motions that are considered: first, the breathing motion (amplitude A of 5 mm to 30 mm at 0.1 Hz to 0.8 Hz); second, the beating heart (typically less than 5 mm at 0.5 Hz to 3 Hz); third, the user's global motion; and fourth, if applicable, movement of the sensor itself. For an ideal measurement situation with breathing motion only and a perfect estimation of the phase term, equation (2) above reduces to:

Θ(t) = (4π/λ) · (d₀ + x(t))    (3)
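A minimal numeric sketch of the Doppler relations (1)–(3) follows. The wavelength value, the sensor-thorax distance, and the 10 mm chest displacement are assumed example numbers, not values from the description.

```python
import math

WAVELENGTH = 0.0125  # assumed wavelength in metres (roughly a 24 GHz radar)

def phase(d0, x):
    """Phase term of equation (3): theta(t) = (4*pi/lambda) * (d0 + x(t)),
    for breathing motion only."""
    return 4 * math.pi / WAVELENGTH * (d0 + x)

def doppler_signal(a0, d0, x):
    """Received signal of equation (1) with constant amplitude a0."""
    return a0 * math.cos(phase(d0, x))

# Phase shift caused by a 10 mm breathing excursion of the chest wall
# around an assumed 0.5 m sensor-thorax distance:
delta = phase(0.5, 0.010) - phase(0.5, 0.0)
print(round(delta, 3))  # → 10.053 rad
```

The large phase swing per millimetre of chest motion is what makes the small breathing displacement visible in the received signal.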
Fig. 3 shows a diagram of a respiration signal 12 indicating normal breathing of a user. Fig. 4 shows a diagram of a respiration signal 12 indicating speech of a user. Fig. 5 shows a diagram of a respiration signal 12 indicating yawning of a user. Each diagram shows the amplitude of the respiration signal over time. These diagrams are results of an
experiment, where the respiration signal 12 of a user was measured during non-eventful breathing and eventful breathing. Non-eventful breathing is meant to be normal breathing or also called baseline session. Eventful breathing is meant to be a yawning session, during which the user is quiet, but yawns, and/or a speech session, during which the user speaks (e.g. reading a passage from a book to simulate a discussion in a car). As can be seen in Fig. 5, yawn events 16 can clearly be distinguished in the respiration signal 12. For illustration and simplification purposes only four yawn events of the plurality of yawn events 16 are marked by a circle in Fig. 5.
Fig. 6 shows a diagram of an amount of time variance between local minima in a respiration signal for four different users. Fig. 7 shows a diagram of an amount of amplitude variance in a respiration signal for four different users. For each user normal breathing (baseline session), yawning (yawn session) and speech (speech session) was investigated. Thus, Fig. 6 and Fig. 7 are based on an evaluation of the three types of respiration signals shown in Figs. 3 to 5. Fig. 6 and Fig. 7 are here used for mere illustration purposes, to show that a differentiation between non-eventful and eventful breathing is possible. As can be seen in Fig. 6, for all users the time variance between local minima in the respiration signal during the non-eventful breathing (normal breathing or baseline session) is significantly lower than the same time variance during the eventful breathing sessions (yawning session and speech session). As can be seen in Fig. 7, for all users the amplitude variance of the respiration signal during the non-eventful breathing (normal breathing or baseline session) is significantly lower than the amplitude variance of the respiration signal during the eventful breathing (yawning session and speech session).
Fig. 7 also shows that for all participants the amplitude variance of the respiration signal during the yawning session is significantly higher than the amplitude variance during the speech session. This is because yawns involve much deeper inhalations than speech, so the minima of the respiration signal reach much lower values in the case of a yawn event than in the case of a speech event.
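The two discriminative quantities of Figs. 6 and 7 can be computed from a sampled signal roughly as below. This is an illustrative sketch; the function names and the synthetic test signal are assumptions, and the local-minima definition (strictly smaller than both neighbours) is one simple choice among several.

```python
def local_minima_indices(sig):
    """Indices of strict local minima in a sampled respiration signal."""
    return [i for i in range(1, len(sig) - 1)
            if sig[i] < sig[i - 1] and sig[i] < sig[i + 1]]

def variance(values):
    """Population variance of a non-empty sequence."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def breathing_features(sig):
    """Return (time variance between local minima, amplitude variance),
    mirroring the two quantities of Figs. 6 and 7."""
    minima = local_minima_indices(sig)
    intervals = [b - a for a, b in zip(minima, minima[1:])]
    time_var = variance(intervals) if intervals else 0.0
    return time_var, variance(sig)
```

For a perfectly regular synthetic breath the inter-minima time variance is zero, while a yawn or speech segment would raise both values, consistent with the separation shown in the figures.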
Fig. 8 shows a diagram of a respiration signal 12 having yawn events 16 (yawning session) detected by a signal processor, system or method according to a first embodiment of the present invention. In this embodiment, the signal processor is adapted to detect the at least one yawn event 16 by detecting an amplitude peak if the amplitude of the respiration signal 12 exceeds a preset threshold 17. The preset threshold 17 is selected to be less than an amplitude of a speech event and an amplitude of a normal breathing of the user. Fig. 9 shows a diagram of a respiration signal, when the user speaks (speech session). In Fig. 9 the same preset threshold 17 is indicated as in Fig. 8. As can be seen in Fig. 9, the respiration signal 12 has only speech and no yawn events are detected. This is due to the fact that the preset threshold 17 is selected to be less than the amplitude of a speech event.
Now, a second embodiment of the present invention will be explained with reference to Fig. 10 to Fig. 16. This embodiment can be used as an alternative or in addition to the first embodiment described above in connection with Figs. 8 and 9.
Fig. 10 shows an exemplary respiration signal 12. In this embodiment, the signal processor is adapted to determine at least one feature of the respiration signal. In this example, one of the at least one feature is an amplitude frequency distribution of the amplitudes over time (or its histogram representation) of at least a part of the respiration signal 12. The histogram representation is a representation of the amplitude frequency distribution of the amplitudes over time. In other words, the amplitude frequency distribution can be the distribution of the frequency of occurrence of different amplitudes (amplitude values) in different ranges. The signal processor is adapted to determine the amplitude frequency distribution (or its histogram representation) of at least the part of the respiration signal, and to detect the at least one yawn event and/or speech event based on the determined amplitude frequency distribution (or its histogram representation).
Fig. 11 shows a first part of the respiration signal of Fig. 10. Fig. 13 shows a second part of the respiration signal of Fig. 10. In the respiration signal part of Fig. 11 a yawn event 16 is present, whereas in the respiration signal part of Fig. 13 no yawn event is present. The part of the respiration signal in Fig. 11 comprises exactly one yawn event 16. In this way, a time window sized to cover only exactly one yawn event can be applied to the respiration signal 12.
Fig. 12 shows a histogram representation of the part of the respiration signal of Fig. 11. Fig. 14 shows a histogram representation of the part of the respiration signal of Fig. 13. When comparing Fig. 12 to Fig. 14, it can clearly be seen that the amplitude frequency distribution (or its histogram representation) of the respiration signal part having the yawn event (Fig. 11) can clearly be distinguished from the amplitude frequency distribution (or its histogram representation) of the respiration signal part having no yawn event (Fig. 13). In Fig. 12, which shows the histogram representation of the respiration signal part having the yawn event 16, there is activity in the lower bins of the histogram representation. This indicates lower amplitudes in the respiration signal part, as can be seen in Fig. 11. In the histogram representation of Fig. 14, no activity in the lower bins is present. Thus, there are no lower amplitudes in the respiration signal part, as shown in Fig. 13. In this way, the yawn event 16 can be detected based on the amplitude frequency distribution (or its histogram representation), in particular the shape of the histogram representation. Therefore, the difference between the histogram representations, as shown in each of Fig. 12 and Fig. 14, can be used for an automated classification of the respiration signal into yawn event(s). In the same way as explained with reference to Fig. 11 to 14, speech event(s) can be detected in the respiration signal. Furthermore, other events can manifest themselves in the respiration signal in this way.
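The histogram-based yawn check described above can be sketched as follows. The bin count, the assumed amplitude range, and the rule "any activity in the lowest bins means a yawn" are illustrative assumptions, not the claimed classifier.

```python
def amplitude_histogram(window, n_bins=10, lo=-4.0, hi=4.0):
    """Histogram of amplitude occurrences over an assumed fixed range
    [lo, hi) split into n_bins equal-width bins."""
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for a in window:
        b = min(n_bins - 1, max(0, int((a - lo) / width)))  # clamp to range
        counts[b] += 1
    return counts

def has_yawn(window, low_bins=2, **kw):
    """Classify a roughly 5-second window as containing a yawn when the
    lowest histogram bins show activity, i.e. the deep-inhalation
    amplitudes of Fig. 12 occur (sketch)."""
    counts = amplitude_histogram(window, **kw)
    return sum(counts[:low_bins]) > 0

print(has_yawn([0.5, -0.8, -3.5, -2.9, 0.4]))  # → True  (deep dip present)
print(has_yawn([0.5, -0.8, -0.9, 0.7, 0.4]))   # → False (normal breathing)
```

The same shape comparison, applied to bins in the speech amplitude range, could flag speech events instead.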
An example of a classification into a yawn event, speech event and normal breathing will now be explained with reference to Fig. 15. Fig. 15 shows a diagram of clusters obtained by a signal processor, system or method according to the second
embodiment of the present invention. The signal processor is adapted to detect the at least one yawn event 16 and/or speech event using a machine learning algorithm. The signal processor is adapted to receive the at least one respiration training signal (or reference signal) selected from the group comprising a respiration training signal indicative of normal breathing of the user, a respiration training signal indicative of yawning of the user, and a respiration training signal indicative of speech of the user. A clustering technique can then be used to determine a yawn event cluster 32 and a speech event cluster 34 using the at least one respiration training signal, as shown in Fig. 15. Further, additional clusters can be determined, such as the normal breathing cluster 36 shown in Fig. 15. In this experiment, a user was asked to breathe normally (baseline session), yawn (yawning session), and speak (speech session) for a few seconds in order to determine the respiration training signal indicative of normal breathing of the user, the respiration training signal indicative of yawning of the user, and the respiration training signal indicative of speech of the user. These respiration training signals are then used at run-time to distinguish between normal breathing, yawning and speech.
In particular, in this embodiment the signal processor is adapted to perform the machine learning algorithm based on at least one feature of the respiration signal. In this case one of the at least one feature is the amplitude frequency distribution (or its histogram representation) as explained in connection with Fig. 10 to 14. The amplitude frequency distribution (or its histogram representation) is used as an input for the machine learning algorithm. The machine learning algorithm can be based on a number of features. In the experiment in connection with Fig. 15, ten features were used. In this way a multidimensional feature vector can be determined based on (at least part of) the respiration signal, wherein the dimension corresponds to the number of features, thus in this example a 10-dimensional feature vector. Fig. 15 shows a two-dimensional projection of this 10-dimensional feature vector. It will be understood that any other number of features can be used. Examples of such features are an average number of respiration cycles, a median number of respiration cycles per epoch, an inclination coefficient (showing whether the respiration rate goes up or down), an average of amplitudes of respiration cycles, a median of amplitudes of respiration cycles, and an inclination coefficient (showing whether the amplitude goes up or down). However, it will be understood that any other suitable feature can be used. In particular, the combination of features can be such that both yawn events and speech events can be reliably detected.
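A feature-vector classification of this kind can be sketched with a toy nearest-centroid classifier. The three features, the centroid values, and the function names are invented stand-ins: the experiment used ten features and clusters learned from the training sessions, whereas the centroids below are hard-coded for illustration.

```python
def feature_vector(window):
    """Toy 3-dimensional feature vector (the experiment used ten
    features; these three are illustrative stand-ins): mean amplitude,
    amplitude variance, and minimum amplitude."""
    m = sum(window) / len(window)
    var = sum((a - m) ** 2 for a in window) / len(window)
    return (m, var, min(window))

def classify(window, centroids):
    """Nearest-centroid classifier over feature space: returns the
    label whose cluster centre is closest to the window's features."""
    fv = feature_vector(window)

    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(fv, centroids[label]))

    return min(centroids, key=dist2)

centroids = {  # assumed cluster centres, as if learned from training signals
    "normal": (0.0, 0.5, -1.0),
    "yawn":   (-0.5, 2.5, -3.5),
    "speech": (0.1, 1.2, -1.8),
}
# A window with a deep, high-variance dip lands in the yawn cluster:
print(classify([0.2, -3.4, -3.0, 0.5, -0.3], centroids))  # → yawn
```

In the described system the centroids would come from the clustering of the respiration training signals rather than being fixed constants.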
Fig. 16 shows a respiration signal 12 having yawn events 16 and the mapping of the yawn events 16 to points in the yawn event cluster 32 of Fig. 15. As can be seen in Fig. 16, each point of the yawn event cluster 32 corresponds to one of the yawn events 16 in the respiration signal 12. Thus, the unsupervised clustering shown in Fig. 15, obtained by applying a machine learning algorithm, is successful. In this way, a fully adaptive system for detection of yawn event(s) and/or speech event(s) based on the respiration signal can be provided.
Fig. 17 shows a flow diagram of a method for determining an alertness level of a user according to an embodiment, and Fig. 18 shows a flow diagram of a method for determining an alertness level of a user according to another embodiment. By detecting yawn event(s) and/or speech event(s), or by classifying the respiration signal into normal breathing, a yawn event or a speech event, the alertness level of the user can be determined.
In the embodiment of Fig. 17, in an initial step 101 a respiration signal is received. Then, in step 102, it is determined if at least one yawn event is detected. In particular, it can be determined if a specific amount or a specific frequency of yawn events has been detected. If at least one yawn event has been detected, the method proceeds to step 104 of determining if at least one speech event is detected. If at least one speech event has been detected, the alertness level is determined to be a medium alertness level 108. Thus, a medium alertness level is detected if both at least one yawn event and at least one speech event are detected. If at least one yawn event has been detected, but no speech event is detected, the alertness level is determined to be a low alertness level 107. In particular, a low alertness level 107 can be detected if the amount or frequency of detected yawn events is above a preset threshold. If the frequency of yawn events is increasing, this means that the user is yawning more often. Returning to step 102, if the result of the determination is that no yawn event is detected, in step 103 it is then determined if at least one speech event is detected. If at least one speech event is detected (and no yawn event is detected), the alertness level is determined to be a high alertness level 106. If no speech event is detected (and no yawn event is detected), the alertness level is determined to be neutral, e.g. indicating normal breathing 105.
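The decision logic of Fig. 17 reduces to a four-way mapping from the two detection results to an alertness level. A minimal sketch (the function name and the returned labels are illustrative; the step reference signs are those of Fig. 17):

```python
def alertness_level(yawn_detected: bool, speech_detected: bool) -> str:
    """Map the outcomes of the yawn and speech detection steps of Fig. 17
    to an alertness level of the user."""
    if yawn_detected and speech_detected:
        return "medium"   # 108: user yawns but is still talking
    if yawn_detected:
        return "low"      # 107: yawn(s) without speech
    if speech_detected:
        return "high"     # 106: speech only, no yawn
    return "neutral"      # 105: normal breathing
```

The refinement that a low alertness level is only signalled once the amount or frequency of yawn events exceeds the preset threshold would be applied before calling this mapping, i.e. in the computation of `yawn_detected`.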
In the embodiment of Fig. 18, in an initial step 111 at least one respiration training signal is received, in particular the respiration training signals previously described. In another step 112 the current respiration signal of the user is received. Then, in step 113, a yawn event, a speech event and/or normal breathing is detected or classified based on the respiration signal using a machine learning algorithm. In particular, at least one or a number of features can be used as an input for the machine learning algorithm. In particular, a multidimensional feature vector based on (at least part of) the respiration signal can be determined, wherein the dimension corresponds to the number of features. If at least one speech event is detected and no yawn event is detected, indicated by step 114, the alertness level is determined to be a high alertness level 106. If both at least one yawn event and at least one speech event are detected, the alertness level is determined to be a medium alertness level 108, indicated by step 115. If at least one yawn event and no speech event is detected, the alertness level is determined to be a low alertness level 107, indicated by step 116. In particular, in step 116 it can be determined if an amount or frequency of detected yawn events is above a preset threshold, as previously explained.
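Steps 113 to 116 of Fig. 18 can be sketched as an aggregation over per-epoch classification labels. The yawn-count threshold corresponds to the preset threshold mentioned above; its default value is illustrative, and the handling of a below-threshold yawn count without speech (returning "neutral" here) is an assumption, since the embodiment leaves that case open.

```python
def alertness_from_epochs(labels, yawn_threshold=2):
    """Fig. 18, steps 113-116: derive the alertness level from per-epoch
    classification results ('normal' | 'yawn' | 'speech')."""
    yawn_count = labels.count("yawn")
    speech_seen = "speech" in labels
    if yawn_count and speech_seen:
        return "medium"              # step 115: yawn and speech detected
    if yawn_count >= yawn_threshold:
        return "low"                 # step 116: yawns above preset threshold
    if speech_seen:
        return "high"                # step 114: speech only, no yawn
    return "neutral"                 # assumption: normal breathing or
                                     # below-threshold yawning only
```

At run-time, `labels` would be produced by applying the trained classifier of step 113 to consecutive epochs of the current respiration signal.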
The present invention can in particular be used in the automotive context for detecting drowsiness of a driver. Drowsiness of a driver can be detected when the alertness level of the user is determined to be low. However, it will be understood that the present invention can be applied not only in an automotive context, but also in any other suitable context that requires high alertness of the user, for example in a plane, in a hospital, or in industrial shift work. Another example is the consumer lifestyle domain, for example relaxation or sleep applications.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single element or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
Any reference signs in the claims should not be construed as limiting the scope.

Claims

CLAIMS:
1. A signal processor (10) for determining an alertness level of a user, the signal processor (10) adapted to:
- receive a respiration signal (12) of a user (1), the respiration signal (12) having an amplitude over time,
- detect at least one yawn event (16) and/or speech event based on the respiration signal (12), and
- determine an alertness level (14) of the user based on the at least one detected yawn event (16) and/or detected speech event.
2. The signal processor of claim 1, adapted to detect the at least one yawn event by detecting an amplitude peak if the amplitude of the respiration signal exceeds a preset threshold (17).
3. The signal processor of claim 2, wherein the preset threshold (17) is selected to be less than an amplitude of a speech event and/or an amplitude of a normal breathing of the user (1).
4. The signal processor of claim 1, adapted to determine an amplitude frequency distribution of the amplitudes over time, or its histogram representation, of at least a part of the respiration signal, and to detect the at least one yawn event and/or speech event based on the determined amplitude frequency distribution, or its histogram representation.
5. The signal processor of claim 4, wherein the part of the respiration signal comprises exactly one yawn event.
6. The signal processor of claim 1, adapted to detect the at least one yawn event
(16) and/or speech event using a machine learning algorithm.
7. The signal processor of claim 6, adapted to receive at least one respiration training signal selected from the group comprising a respiration training signal indicative of normal breathing of the user (1), a respiration training signal indicative of yawning of the user (1), and a respiration training signal indicative of speech of the user (1).
8. The signal processor of claim 6, adapted to use a clustering technique to determine a yawn event cluster (32) and/or a speech event cluster (34) using the at least one respiration training signal.
9. The signal processor of claim 1, adapted to determine the alertness level (14) based on at least one criterion selected from the group comprising determination of a low alertness level if an amount or a frequency of detected yawn events (16) is above a preset threshold, determination of a medium alertness level if both at least one yawn event (16) and at least one speech event are detected, and determination of a high alertness level if only at least one speech event is detected and no yawn event is detected.
10. A system (100) for determining an alertness level of a user, the system (100) comprising:
- the signal processor (10) of claim 1, and
- a respiration sensor (20) providing the respiration signal of the user (1).
11. The system of claim 10, wherein the respiration sensor (20) is a radar based respiration sensor.
12. The system of claim 11, wherein the radar based respiration sensor (20) is disposed on or integrated into a seat belt (21) wearable by the user (1) or a steering wheel.
13. The system of claim 10, further comprising a feedback unit (30) adapted to provide feedback to the user (1) based on the determined alertness level (14).
14. A method for determining an alertness level of a user, the method comprising:
- receiving a respiration signal (12) of a user (1), the respiration signal (12) having an amplitude over time,
- detecting at least one yawn event (16) and/or speech event based on the respiration signal (12), and
- determining an alertness level (14) of the user based on the at least one detected yawn event (16) and/or detected speech event.
15. A computer program comprising program code means for causing a computer to carry out the steps of the method as claimed in claim 14 when said computer program is carried out on the computer.
PCT/IB2012/053434 2011-07-13 2012-07-05 Signal processor for determining an alertness level WO2013008150A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161507284P 2011-07-13 2011-07-13
US61/507,284 2011-07-13

Publications (1)

Publication Number Publication Date
WO2013008150A1 (en) 2013-01-17

Family

ID=46642590

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2012/053434 WO2013008150A1 (en) 2011-07-13 2012-07-05 Signal processor for determining an alertness level

Country Status (1)

Country Link
WO (1) WO2013008150A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994024935A1 (en) * 1993-04-26 1994-11-10 I Am Fine, Inc. Respiration monitor with simplified breath detector
US20070282227A1 (en) * 2006-05-31 2007-12-06 Denso Corporation Apparatus for detecting vital functions, control unit and pulse wave sensor
US7397382B2 (en) 2004-08-23 2008-07-08 Denso Corporation Drowsiness detecting apparatus and method
WO2009040711A2 (en) * 2007-09-25 2009-04-02 Koninklijke Philips Electronics N.V. Method and system for monitoring vital body signs of a seated person
US20100152600A1 (en) * 2008-04-03 2010-06-17 Kai Sensors, Inc. Non-contact physiologic motion sensors and methods for use
JP2010155072A (en) * 2008-12-01 2010-07-15 Fujitsu Ltd Awakening degree decision apparatus and method
JP2010204984A (en) * 2009-03-04 2010-09-16 Nissan Motor Co Ltd Driving support device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016181390A1 (en) 2015-05-10 2016-11-17 Omega Life Science Ltd. Nebulizers and uses thereof
EP3294392A4 (en) * 2015-05-10 2018-12-26 Omega Life Science Ltd Nebulizers and uses thereof
DE102017110295A1 (en) * 2017-05-11 2018-11-15 HÜBNER GmbH & Co. KG Apparatus and method for detecting and categorizing vital data of a subject with the aid of radar radiation
CN107157498A (en) * 2017-06-08 2017-09-15 苏州大学 A kind of voice fatigue strength detection method for mental fatigue
CN107157498B (en) * 2017-06-08 2020-06-09 苏州大学 Voice fatigue degree detection method for mental fatigue
WO2023188055A1 (en) * 2022-03-30 2023-10-05 三菱電機株式会社 Yawn detection device, occupant detection device, and yawn detection method


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 12745911; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC)
122 Ep: pct application non-entry in european phase (Ref document number: 12745911; Country of ref document: EP; Kind code of ref document: A1)