CN116234496A - Non-contact sleep detection and disturbance attribution - Google Patents

Non-contact sleep detection and disturbance attribution

Info

Publication number
CN116234496A
Authority
CN
China
Prior art keywords
user
sleep
radar
data
processing system
Prior art date
Legal status
Pending
Application number
CN202180055343.6A
Other languages
Chinese (zh)
Inventor
东吉克·辛
迈克尔·狄克逊
安德鲁·威廉·戈登森
杰克·加里森
阿杰·坎南
杰弗里·于
阿什顿·尤德尔
肯·米克斯特
李瑞娜
大卫·詹森斯
德斯蒙德·奇克
Current Assignee
Google LLC
Original Assignee
Google LLC
Priority date
Filing date
Publication date
Priority claimed from US 16/990,714 (published as US20220047209A1)
Priority claimed from US 16/990,720 (published as US11406281B2)
Priority claimed from US 16/990,746 (published as US11808839B2)
Priority claimed from US 16/990,726 (published as US11754676B2)
Priority claimed from US 16/990,705 (published as US11832961B2)
Application filed by Google LLC
Publication of CN116234496A

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0002 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/0507 Measuring using microwaves or terahertz waves
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/48 Other medical applications
    • A61B5/4806 Sleep evaluation
    • A61B5/4809 Sleep detection, i.e. determining whether a subject is asleep or not
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6887 Arrangements of detecting, measuring or recording means mounted on external non-worn devices, e.g. non-medical devices
    • A61B5/6891 Furniture
    • A61B2560/00 Constructional details of operational features of apparatus; Accessories for medical measuring apparatus
    • A61B2560/02 Operational features
    • A61B2560/0242 Operational features adapted to measure environmental factors, e.g. temperature, pollution

Abstract

Various systems, devices, and methods for contactless sleep tracking are presented. Based on data received from a non-contact sensor, such as a radar sensor, it is determined that a user has entered a sleep state. A transition time at which the user transitions from the sleep state to an awake state may be determined. Based on data received from an environmental sensor, an environmental event may be identified as occurring within a period of the transition time. Based on the environmental event occurring within the period of the transition time, the user's awakening may be attributed to the environmental event. An indication of the attributed environmental event may be output as the cause of the user waking up.

Description

Non-contact sleep detection and disturbance attribution
Cross reference to related applications
The present application claims priority from U.S. non-provisional patent application No. 16/990,705, entitled "Contactless Sleep Detection and Disturbance Attribution," filed on August 11, 2020. The present application also relates to the following: PCT application US2019/031290, entitled "Sleep Tracking and Vital Sign Monitoring Using Low Power Radio Waves," filed on May 8, 2019; U.S. non-provisional application No. 16/990,746 (attorney docket No. 090421-1198051), entitled "Initializing Sleep Tracking on a Contactless Health Tracking Device," filed on August 11, 2020; U.S. non-provisional application No. 16/990,714 (attorney docket No. 090421-1183289), entitled "Contactless Sleep Detection and Disturbance Attribution for Multiple Users," filed on August 11, 2020; U.S. non-provisional application No. 16/990,720 (attorney docket No. 090421-1183290), entitled "Contactless Cough Detection and Attribution," filed on August 11, 2020; and U.S. non-provisional application No. 16/990,726 (attorney docket No. 090421-1190042), entitled "Precision Sleep Tracking Using a Contactless Sleep Tracking Device," filed on August 11, 2020. The entire disclosure of each is incorporated herein by reference for all purposes.
Background
A person may wake up multiple times during the night. The person may have difficulty determining the cause of these awakenings, especially if the source of the disturbance is short-lived. If the person knows what is interfering with his or her sleep, the person can take precautions to address the source and reduce the presence or impact of future disturbances, thereby improving sleep quality.
Disclosure of Invention
Various embodiments are described in relation to a contactless sleep tracking device. In some embodiments, a contactless sleep tracking device is described. The device may include a housing. The device may include a first environmental sensor housed by the housing. The device may include a non-contact sensor housed by the housing that may remotely monitor movement of the user. The device may include a processing system housed by the housing, the processing system including one or more processors that may receive data from the first environmental sensor and the non-contact sensor. The processing system may be configured to determine that the user has entered a sleep state based on data received from the non-contact sensor. The processing system may be configured to determine a transition time for the user to transition from the sleep state to the awake state. The processing system may be configured to identify an environmental event occurring within a period of the transition time based on data received from the first environmental sensor. The processing system may be configured to attribute the user's awakening to the environmental event based on the environmental event occurring within the period of the transition time. The processing system may be configured to output an indication of the attributed environmental event as the cause of the user waking up.
Embodiments of such a device may include one or more of the following features: The non-contact sensor may use a low power Continuous Wave (CW) radar. The first environmental sensor may be an ambient light sensor. The processing system being configured to identify the environmental event may include the processing system being configured to determine that the ambient light level has increased by at least a threshold amount (or an amount significant enough to wake up the user). The first environmental sensor may be a microphone. The processing system being configured to identify the environmental event may include the processing system being configured to determine that a sound louder than a sound event threshold, or a sound event significant enough to wake up the user, has been detected. The device may further comprise a second environmental sensor. The processing system being configured to identify the environmental event may include the processing system being configured to compare data received from the first environmental sensor to a first threshold (or some other form of criteria). The processing system may be configured to compare the data received from the second environmental sensor to a second threshold (or some other form of criteria). The processing system identifying the environmental event may be further based on: data received from the second environmental sensor, the comparison of the data received from the first environmental sensor to the first threshold (or some other form of criteria), and the comparison of the data received from the second environmental sensor to the second threshold (or some other form of criteria). The first environmental sensor may be a temperature sensor and the environmental event may be a temperature change greater than a temperature threshold (or some other form of temperature-based criteria). The device may further include a wireless network interface housed by the housing. The device may further include an electronic display screen housed by the housing. The device may further include a microphone housed by the housing. The device may further comprise a speaker housed by the housing. The device may further include a stand incorporated as part of the housing. The processing system may be in communication with the wireless network interface, the display screen, the microphone, and the speaker. The processing system may be further configured to receive a voice-based query via the microphone. The processing system may be further configured to output information based on the voice-based query via the wireless network interface. The processing system may be further configured to receive data from the cloud-based server system via the wireless network interface. The processing system may be further configured to output a response to the voice-based query via the speaker. The response may indicate the attributed environmental event as the reason for the user waking up. The processing system may be further configured to output an indication of the attributed environmental event via the electronic display screen, via the speaker using synthesized speech, or both. The processing system may be further configured to output an indication of the attributed environmental event mapped to the transition time of the user from the sleep state to the awake state.
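As a hedged illustration of how the threshold criteria listed above might be applied to raw environmental-sensor readings, a minimal sketch follows; the specific threshold values and function names are assumptions, not values from the disclosure.

```python
def detect_light_event(lux_before: float, lux_after: float,
                       min_increase_lux: float = 20.0) -> bool:
    """Ambient light event: level increased by at least a threshold amount."""
    return (lux_after - lux_before) >= min_increase_lux

def detect_sound_event(level_db: float, threshold_db: float = 60.0) -> bool:
    """Sound event: a sound louder than the sound event threshold."""
    return level_db > threshold_db

def detect_temperature_event(delta_c: float, threshold_c: float = 2.0) -> bool:
    """Temperature event: a change greater than the temperature threshold."""
    return abs(delta_c) > threshold_c

# Example readings near the user's wake transition time.
print(detect_light_event(1.0, 35.0))        # True: lights likely turned on
print(detect_sound_event(48.0))             # False: below the sound threshold
print(detect_temperature_event(-2.5))       # True: noticeable cooling
```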
In some embodiments, a method for performing contactless sleep monitoring is described. The method may include determining, based on data received from a non-contact sensor, that the user may have entered a sleep state. The method may include determining a transition time for the user to transition from the sleep state to the awake state. The method may include identifying an environmental event occurring within a period of the transition time based on data received from a first environmental sensor. The method may include attributing the user's awakening to the environmental event based on the environmental event occurring within the period of the transition time. The method may include outputting an indication of the attributed environmental event as the cause of the user waking up.
Embodiments of such a method may include one or more of the following features: The data received from the non-contact sensor may be based on low power Frequency Modulated Continuous Wave (FMCW) radar. The first environmental sensor may be an ambient light sensor. Identifying the environmental event may include determining that the ambient light level has increased by at least a threshold amount (or some other form of light-based criteria). The first environmental sensor may be an ambient light sensor. Identifying the environmental event may include determining that the ambient light level has increased above a threshold ambient light level (or some other form of light-based criteria). The first environmental sensor may be a microphone. Identifying the environmental event may include determining that a sound louder than a sound event threshold (or some other form of sound-based criteria) has been detected. The first environmental sensor may be a temperature sensor. Identifying the environmental event may include determining that a temperature change greater than a threshold amount (or some other form of temperature-based criteria) has been detected. The method may include receiving a voice-based query via a microphone. The method may include outputting, via a speaker, a response to the voice-based query. The response may indicate the attributed environmental event as the reason for the user waking up. The method may further include outputting, via a wireless network interface, information based on the voice-based query. The method may further include receiving data from a cloud-based server system via the wireless network interface. Outputting the indication of the attributed environmental event as the cause of the user waking up may include outputting the indication via an electronic display, via a speaker using synthesized speech, or both. Determining the transition time, identifying the environmental event, attributing the user's awakening to the environmental event, and outputting the indication of the attributed environmental event as the reason for the user waking up may be performed by a processing system of the contactless sleep tracking device. The first environmental sensor may be part of the contactless sleep tracking device.
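To make the attribution step concrete, below is a minimal sketch, assuming timestamped environmental events and a fixed window around the wake transition; the 60-second window and all names are illustrative assumptions rather than parameters from the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EnvironmentalEvent:
    timestamp: float    # seconds since the start of the night
    description: str    # e.g. "light increase", "loud sound"

def attribute_awakening(transition_time: float,
                        events: list,
                        window_s: float = 60.0) -> Optional[EnvironmentalEvent]:
    """Return the environmental event closest to the wake transition, if any
    event occurred within the window; otherwise None (unattributed awakening).
    The 60-second window is an illustrative assumption."""
    candidates = [e for e in events if abs(e.timestamp - transition_time) <= window_s]
    if not candidates:
        return None
    return min(candidates, key=lambda e: abs(e.timestamp - transition_time))

events = [EnvironmentalEvent(10_000, "temperature change"),
          EnvironmentalEvent(18_020, "loud sound")]
cause = attribute_awakening(transition_time=18_050, events=events)
print(cause.description if cause else "no attributed cause")   # -> "loud sound"
```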
Various embodiments are described relating to a contactless sleep analysis device for monitoring a plurality of users. In some embodiments, a non-contact sleep analysis device for monitoring a plurality of users is described. The device may include a housing. The device may include a radar sensor housed by the housing that may monitor movement of the area using radio waves. The device may include a processing system housed by the housing, the processing system including one or more processors that may receive data from the radar sensor. The processing system may be configured to receive data from the radar sensor. The processing system may be configured to perform clustering on data received from the radar sensor. The clustered data may be indicative of a first cluster and a second cluster. The processing system may be configured to determine that two users are present within the area based on the clustering performed on the data received from the radar sensor. The processing system may be configured to calculate a midpoint location between the first cluster and the second cluster in response to determining that two users are present. The processing system may be configured to map a first portion of the data from the radar sensor to a first user based on the calculated midpoint. The processing system may be configured to map a second portion of the data from the radar sensor to a second user based on the calculated midpoint. The processing system may be configured to perform separate sleep analysis on a first portion of the data of the first user and a second portion of the data of the second user over a period of time. The processing system may be configured to output data that may be indicative of sleep data of the first user during the time period and the second user during the time period, respectively.
Embodiments of such a device may include one or more of the following features: The processing system may be further configured to receive additional data from the radar sensor. The processing system may be further configured to perform clustering on the additional data received from the radar sensor after determining that there are two users and calculating the midpoint location. The clustered data may indicate a single cluster. The processing system may be further configured to determine that only a single user may be present based on the clustering performed on the additional data received from the radar sensor. The processing system may be further configured to determine which of the first user and the second user may be the single user based on the location of the single cluster relative to the calculated midpoint. The processing system may be further configured to convert data received from the radar sensor into fewer dimensions. The data received from the radar sensor may be multi-dimensional. Clustering may be performed on the converted data. The processing system being configured to perform separate sleep analysis on the first portion of the data of the first user and the second portion of the data of the second user over a period of time may include the processing system being configured to determine that the first user may have entered a sleep state at a first time. The processing system may be configured to determine that the second user may have entered a sleep state at a second time. The radar sensor may use a low power Frequency Modulated Continuous Wave (FMCW) radar. The device may further include a first environmental sensor housed by the housing. The processing system may be further configured to determine a transition time for the first user to transition from the sleep state to the awake state. The processing system may be further configured to identify an environmental event occurring within the period of the transition time based on the data received from the first environmental sensor. The processing system may be further configured to attribute the first user's awakening to the environmental event based on the environmental event occurring within the period of the transition time. The processing system may be further configured to output an indication of the attributed environmental event mapped to the first user. The first environmental sensor may be an ambient light sensor. The processing system being configured to identify the environmental event may include the processing system being configured to determine that the ambient light level may have increased by at least a threshold amount. The first environmental sensor may be a microphone. The processing system being configured to identify the environmental event may include the processing system being configured to determine that a sound louder than a sound event threshold has been detected. The device may further include a wireless network interface housed by the housing. The device may further include a display screen housed by the housing. The device may further include a microphone housed by the housing. The device may further comprise a speaker housed by the housing. The device may further include a stand incorporated as part of the housing. The processing system may be in communication with the wireless network interface, the display screen, the microphone, and the speaker. The processing system may be further configured to receive a voice-based query via the microphone.
The processing system may be further configured to output information based on the voice-based query via the wireless network interface. The processing system may be further configured to receive data from the cloud-based server system via the wireless network interface. The processing system may be further configured to output a response to the voice-based query via the speaker.
In some embodiments, a method for contactless sleep monitoring of multiple users is described. The method may include receiving a radar data stream based on radio waves transmitted into the area. The method may include performing clustering on the radar data streams. The clustered data may be indicative of a first cluster and a second cluster. The method may include determining that two users are present within the region based on the clustering performed on the radar data streams. The method may include, in response to determining that there are two users, calculating a midpoint location between the first cluster and the second cluster. The method may include mapping a first portion of the radar data stream to a first user based on the calculated midpoint. The method may include mapping a second portion of the radar data stream to a second user based on the calculated midpoint. The method may include performing separate sleep analysis of a first portion of data of a first user and a second portion of data of a second user over a period of time. The method may include outputting data indicative of sleep data of the first user during the time period and the second user during the time period, respectively.
Embodiments of such a method may include one or more of the following features: the method may further include receiving additional data as part of the radar data stream. The method may further include, after determining that there are two users and calculating the midpoint location, performing clustering on the received additional data of the radar data stream. The clustered data may indicate a single cluster. The method may further include determining that only a single user may be present based on the clustering performed on the additional data received as part of the radar data stream. The determination of which of the first user and the second user may be the single user may be based on the location of the single cluster relative to the calculated midpoint. The method may further include converting the radar data stream into fewer dimensions. The radar data stream may be multi-dimensional. Clustering may be performed on the converted data. The radar data stream may be output by a radar Integrated Circuit (IC), and the radar data stream may be based on a low power Frequency Modulated Continuous Wave (FMCW) radar output by the radar IC. Performing separate sleep analysis on the first portion of the data of the first user and the second portion of the data of the second user during the time period may include determining that the first user has entered a sleep state at a first time. Performing separate sleep analysis on the first portion of the data of the first user and the second portion of the data of the second user during the time period may include determining that the second user has entered a sleep state at a second time. The method may further include determining a transition time for the first user to transition from the sleep state to the awake state. The method may further include identifying an environmental event occurring within the period of the transition time. The method may further include attributing the first user's awakening to the environmental event based on the environmental event occurring within the period of the transition time.
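One way the two-user clustering and midpoint logic described above could look in practice is sketched below; the one-dimensional range projection, the largest-gap split standing in for a clustering algorithm, and the function names are illustrative assumptions.

```python
import numpy as np

def split_two_clusters(positions: np.ndarray):
    """Split 1-D movement positions (e.g., ranges of detected motion) into
    two clusters by cutting at the largest gap, then return both clusters and
    the midpoint between their means."""
    ordered = np.sort(positions)
    gap_index = int(np.argmax(np.diff(ordered))) + 1
    left, right = ordered[:gap_index], ordered[gap_index:]
    midpoint = (left.mean() + right.mean()) / 2.0
    return left, right, midpoint

def assign_to_user(position: float, midpoint: float) -> str:
    """Map a later detection to a user based on which side of the midpoint it
    falls on; this also handles the single-cluster case described above."""
    return "user_1" if position < midpoint else "user_2"

# Example: motion detected around 0.9 m and 1.6 m from the device.
positions = np.array([0.88, 0.92, 0.90, 1.58, 1.62, 1.61])
left, right, midpoint = split_two_clusters(positions)
print(round(midpoint, 2))                 # ~1.25 m
print(assign_to_user(1.05, midpoint))     # -> "user_1"
```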
Various embodiments are described in connection with a non-contact cough detection device. In some embodiments, a non-contact cough detection device is described. The device may include a housing. The device may include a microphone housed by the housing. The device may include a radar sensor housed by the housing. The device may include a processing system housed by the housing, the processing system including one or more processors that may receive data from the microphone and the radar sensor. The processing system may be configured to receive audio data from the microphone. The processing system may be configured to detect that a cough has occurred based on the received audio data. The processing system may be configured to receive radar data indicative of reflected radio waves from the radar sensor. The processing system may be configured to perform a sleep state analysis process using the received radar data. The processing system may be configured to attribute the detected cough to a particular user based at least in part on a sleep state analysis process performed using the received radar data.
Embodiments of such a device may include one or more of the following features: the processing system may detect that a cough has occurred by analyzing audio data received from the microphone using a pre-trained cough detection machine learning model. The processing system may be further configured to delete audio data received from the microphone after detecting that a cough has occurred. The processing system being configured to attribute the detected cough to the particular user may include the processing system being configured to determine that only the monitored user is likely to have caused the detected cough. The processing system being configured to perform the sleep state analysis process may include the processing system being configured to determine that a particular user has moved in the bed within a period of time of the detected cough. The processing system may be configured to determine that a particular user of the plurality of users being monitored may have caused the detected cough. The processing system being configured to perform the sleep state analysis process may include the processing system being configured to determine that a particular user was moving more in the bed than other users of the plurality of users during a period of time of the detected cough. The processing system may be further configured to cause sleep data indicating the cough attributed to the particular user to be stored. The device may further include a wireless network interface housed by the housing and in communication with the processing system. The device may further include a speaker housed by the housing and in communication with the processing system. The processing system may be further configured to receive a spoken command via the microphone. The processing system may be further configured to output data based on the spoken command to a cloud-based server system via the wireless network interface. The processing system may be further configured to receive instructions from the cloud-based server system via the wireless network interface in response to outputting the data. The processing system may be further configured to output the stored sleep data in response to the instructions. The processing system may be further configured to output a sleep report indicating the number of times the particular user coughed during sleep. The device may further include an electronic display in communication with the processing system that may output the sleep report for presentation. The processing system may be further configured to create a trend report over a plurality of days indicating whether the amount of coughing by the particular user is increasing, decreasing, or remaining unchanged. The radar sensor may be a different Integrated Circuit (IC) than the processing system. The IC may output Frequency Modulated Continuous Wave (FMCW) radio waves into the environment of the non-contact cough detection device. The FMCW radar may have a frequency between 57 and 64 GHz, may have a peak Effective Isotropic Radiated Power (EIRP) of 20 dBm or less, and may be aimed at an area sufficient to cover a bed area of multiple users.
In some embodiments, a method for performing contactless cough detection is described. The method may include receiving an audio data stream. The method may include detecting that a cough may have occurred based on the received audio data stream. The method may include receiving a radar data stream. The method may include performing a sleep state analysis process using the received radar data. The method may include attributing the detected cough to a particular user based on a sleep state analysis process performed using the received radar data.
Embodiments of such a method may include one or more of the following features: detecting that a cough has occurred may be performed by analyzing the received audio data stream using a pre-trained cough detection machine learning model. The method may further include deleting the received audio data stream after detecting that a cough has occurred. Performing the sleep state analysis process may include determining that a particular user has moved in the bed within a period of time of the detected cough. Performing the sleep state analysis process may include determining that a particular user has moved more than one or more other users in the bed during the period of time of the detected cough. Attributing the detected cough to the particular user may include determining that the particular user may have caused the detected cough. A processing system of the non-contact cough detection device may receive an audio data stream from a microphone and a radar data stream from a radar Integrated Circuit (IC) of the non-contact cough detection device. The method may further include receiving a spoken command via the microphone. The method may further include outputting data based on the spoken command to a cloud-based server system via a wireless network interface. The method may further include receiving instructions from the cloud-based server system via the wireless network interface in response to the outputted data. The method may further include outputting, via an electronic display, the stored sleep data in response to the received instructions.
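A hedged sketch of cough attribution combining an audio-derived cough time with radar-derived movement follows, using the idea above that the cough is attributed to whichever monitored user moved most around that time; the window size, movement floor, and names are illustrative assumptions, and the audio cough classifier itself is treated as a given.

```python
import numpy as np
from typing import Optional

def attribute_cough(cough_time: float,
                    movement_by_user: dict,
                    timestamps: np.ndarray,
                    window_s: float = 5.0) -> Optional[str]:
    """Attribute a cough detected in audio to whichever monitored user showed
    the most radar-detected movement within a window around the cough time.
    Returns None if no user moved appreciably (cough likely from elsewhere).
    The window size and the 'appreciable movement' floor are illustrative."""
    mask = np.abs(timestamps - cough_time) <= window_s
    scores = {user: float(np.sum(np.abs(m[mask]))) for user, m in movement_by_user.items()}
    best_user, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_user if best_score > 0.1 else None

timestamps = np.arange(0.0, 60.0, 1.0)
movement = {
    "user_1": np.where((timestamps > 29) & (timestamps < 34), 0.8, 0.0),  # moved near t=30
    "user_2": np.zeros_like(timestamps),
}
print(attribute_cough(cough_time=31.0, movement_by_user=movement, timestamps=timestamps))
```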
Various embodiments are described in connection with a contactless sleep analysis device. In some embodiments, a non-contact sleep analysis device is described. The device may include a housing. The device may include a radar sensor housed by the housing, which may include multiple antennas and use radio waves to monitor movement. The device may include a processing system housed by the housing, the processing system including one or more processors that may receive data from the radar sensor. The processing system may be configured to receive a plurality of digital radar data streams. Each of the plurality of digital radar data streams may be based on radio waves received by one of a plurality of antennas of the radar sensor. The processing system may be configured to perform a direction optimization process to determine the first weight and the second weight. The direction optimization process may be aimed at the area in the bed where the user sleeps. The processing system may be configured to apply a first weight to a first digital radar data stream of the plurality of digital radar data streams. The processing system may be configured to apply a second weight to a second digital radar data stream of the plurality of digital radar data streams. The processing system may be configured to combine the weighted first digital radar data stream and the weighted second digital radar data stream to create a first directionally targeted radar data stream. The processing system may be configured to perform sleep analysis based on the first directional targeted radar data stream. The processing system may be configured to output sleep data of the user based on the performed sleep analysis.
Embodiments of such a device may include one or more of the following features: the first weight, the second weight, or both may be complex values that introduce a delay into the first digital data stream, the second digital data stream, or both. The processing system may be further configured to perform the direction optimization process to determine the first weight and the second weight by determining a direction in which the detected amount of movement is likely to be greatest. The processing system being configured to perform direction optimization may include the processing system being configured to perform least squares optimization for various selected values of the first weight and the second weight. The direction optimization process may determine only an optimized vertical direction. The plurality of antennas may include at least three antennas. The processing system may be further configured to apply a third weight to a second digital data stream of the plurality of digital radar data streams. The processing system may be further configured to apply a fourth weight to a third digital data stream of the plurality of digital radar data streams. The processing system may be further configured to combine the weighted third digital data stream and the weighted fourth digital data stream to create a second directionally targeted radar data stream. Sleep analysis may be further performed based on the second directionally targeted radar data stream. The processing system may be further configured to initially process the first and second directionally targeted radar data streams, respectively, during sleep analysis. The processing system may be further configured to combine data obtained from the first directionally targeted radar data stream and the second directionally targeted radar data stream after the initial processing. The processing system may be further configured to complete the sleep analysis using the combined data obtained from the first directionally targeted radar data stream and the second directionally targeted radar data stream. The first weight, the second weight, the third weight, and the fourth weight may compensate for at least three antennas arranged in an L-shape. The radar sensor may output Frequency Modulated Continuous Wave (FMCW) radio waves. The device may further include a microphone housed by the housing. The device may further comprise a speaker housed by the housing. The device may further include an electronic display housed by the housing. The microphone, speaker, and electronic display may be in communication with the processing system, and the processing system may be further configured to output sleep data via the electronic display in response to verbal commands received by the microphone. The microphone, speaker, and electronic display may be in communication with the processing system, and the processing system may be further configured to output synthesized speech regarding the sleep data via the speaker in response to a verbal command received by the microphone. The non-contact sleep analysis device may be a bedside device and may include an electronic display screen. The plurality of antennas may be substantially parallel to the display screen. The display screen may be accommodated by the housing such that the display screen is arranged at a face-up angle for ease of reading. The direction optimization process may compensate for the face-up angle.
In some embodiments, the radar sensor outputs Frequency Modulated Continuous Wave (FMCW) radio waves into the environment of the device. The FMCW radar may have a frequency between 57 and 64 GHz and a peak Effective Isotropic Radiated Power (EIRP) of 20 dBm or less.
In some embodiments, a method for performing targeted contactless sleep monitoring is described. The method may include receiving a plurality of digital radar data streams. Each of the plurality of digital radar data streams may be based on radio waves received by one of a plurality of antennas of a radar sensor of a bedside-mounted contactless sleep analysis device. The method may include performing a direction optimization process to determine a first weight and a second weight. The direction optimization process may be aimed at the area in the bed where the user sleeps. The method may include applying a first weight to a first digital radar data stream of the plurality of digital radar data streams. The method may include applying a second weight to a second digital radar data stream of the plurality of digital radar data streams. The method may include combining the weighted first digital radar data stream and the weighted second digital radar data stream to create a first directionally targeted radar data stream. The method may include performing sleep analysis based on the radar data stream targeted by the first direction. The method may include outputting sleep data of the user based on the performed sleep analysis.
Embodiments of such a method may include one or more of the following features: the first weight, the second weight, or both may be complex values that introduce a delay into the first digital data stream, the second digital data stream, or both. The method may further include performing a direction optimization process to determine the first weight and the second weight by determining a direction in which the detected movement amount is likely to be greatest. Performing the direction optimization may include performing a least squares optimization to obtain the first weight and the second weight. The direction optimization process may determine only the optimized vertical direction, while the horizontal direction is fixed. The method may further include applying a third weight to a second digital data stream of the plurality of digital radar data streams. The method may further include applying a fourth weight to a third digital data stream of the plurality of digital radar data streams. The method may further include combining the weighted third digital data stream and the weighted fourth digital data stream to create a second directionally targeted radar data stream. Sleep analysis may be further performed based on the second directionally targeted radar data stream. The method may further include initially processing the first and second directionally targeted radar data streams, respectively, during sleep analysis. The method may further include combining the partially processed data obtained from the first directionally targeted radar data stream and the second directionally targeted radar data stream after the initial processing. The method may further include completing a sleep analysis using combined data obtained from the first directionally targeted radar data stream and the second directionally targeted radar data stream.
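To illustrate how a complex weight can act as the delay in a weighted delay-and-sum combination, here is a minimal sketch assuming a 60 GHz carrier and half-wavelength antenna spacing; these values and the function names are assumptions, not taken from the disclosure.

```python
import numpy as np

WAVELENGTH = 3e8 / 60e9          # ~5 mm carrier wavelength at an assumed 60 GHz
ANTENNA_SPACING = WAVELENGTH / 2 # assumed half-wavelength antenna spacing

def steering_weights(angle_deg: float, num_antennas: int = 2) -> np.ndarray:
    """Complex weights that delay each antenna's stream so signals arriving
    from `angle_deg` (relative to broadside) add coherently when summed."""
    k = 2 * np.pi / WAVELENGTH
    n = np.arange(num_antennas)
    return np.exp(-1j * k * ANTENNA_SPACING * n * np.sin(np.radians(angle_deg)))

def combine(streams: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted sum of per-antenna streams into one directionally targeted stream."""
    return weights @ streams

# Example: steer two antenna streams 20 degrees downward toward the bed.
w = steering_weights(-20.0)
streams = np.ones((2, 4), dtype=complex)   # placeholder radar samples
print(combine(streams, w))
```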
Various embodiments are described in relation to a contactless sleep tracking device. In some embodiments, a contactless sleep tracking device is described. The device may include an electronic display screen housed by a housing. The device may include a user interface housed by the housing. The device may include a radar sensor housed by the housing. The device may include a processing system housed by the housing, the processing system including one or more processors that may receive data from the radar sensor and the user interface and output data to the electronic display screen for presentation. The processing system may be configured to receive, via the user interface, a user input requesting that a sleep tracking setup procedure be performed. The processing system may be configured to, in response to the user input, perform a detection process based on data received from the radar sensor to determine whether the user is present and static. The processing system may be configured to, in response to the detection process determining that the user is likely present and static, perform a consistency analysis over a period of time to evaluate a duration for which the user remains present and static. The processing system may be configured to activate sleep tracking based on the consistency analysis such that, when the user is detected in bed via the radar sensor, the user's sleep may be tracked.
Embodiments of such a device may include one or more of the following features: the detection process may include the processing system using a neural network classifier to determine that the user may be present and static. The consistency analysis may include determining that the neural network classifier classifies the user as present and static for the period of time. The processing system may be further configured to output, based on the consistency analysis, an indication via the electronic display screen that the sleep tracking setup has been successfully performed. The processing system may be further configured to output, via the electronic display screen, an indication that the user should lie in the bed in a sleep position in response to receiving the user input. The processing system being configured to perform the detection process based on data received from the radar sensor to determine whether the user is likely present and static may include the processing system being configured to detect respiration of the user based on data received from the radar sensor. The user interface may be a microphone and the user may speak a command requesting that a sleep tracking setup procedure be performed. The electronic display screen may be a touch screen that serves as the user interface. The user may provide a touch input indicating a request to perform a sleep tracking setup procedure. The radar sensor may be a frequency modulated continuous wave radar sensor implemented using a single integrated circuit (IC) that emits radar having a frequency between 57 and 64 GHz and a peak Effective Isotropic Radiated Power (EIRP) of 20 dBm or less. The processing system may be further configured to receive, via the user interface, a second user input that may request that a sleep tracking setup procedure be performed. The processing system may be further configured to perform a second detection process based on data received from the radar sensor to determine whether the user is likely present and static in response to the second user input. The processing system may be further configured to determine that there is likely to be excessive movement in response to the second detection process. The processing system may be further configured to output a recommendation to eliminate nearby sources of movement in the environment of the contactless sleep tracking device in response to determining that excessive movement is likely to exist. The second user input may occur prior to the user input. The processing system may be further configured to output an indication that sleep tracking has not been successfully set up in response to determining that excessive movement is likely to exist.
In some embodiments, a method for performing an initial setup procedure of a sleep tracking device is described. The method may include receiving, via a user interface of a contactless sleep tracking device, a user input that may request that a sleep tracking setup process be performed. The method may include, in response to the user input, performing, by the contactless sleep tracking device, a detection process based on data received from a radar sensor to determine whether the user is likely present and static. The method may include, in response to the detection process determining that the user is likely present and static, performing, by the sleep tracking device, a consistency analysis over a period of time to evaluate a duration for which the user remains present and static. The method may include activating sleep tracking based on the consistency analysis such that, when the user is detected in bed via the radar sensor, the sleep of the user may be tracked.
Embodiments of such a method may include one or more of the following features: the detection process may include using a neural network classifier to determine whether the user is likely present and static. The consistency analysis may include determining that the neural network classifier classifies the user as present and static during the duration. The method may further include outputting an indication that the sleep tracking setup has been successfully performed based on the consistency analysis. The method may further include outputting an indication that the user should lie in the bed in a sleep position in response to receiving the user input. Performing a detection process based on data received from the radar sensor to determine whether the user is likely present and static may include detecting respiration of the user based on data received from the radar sensor. The method may further include receiving, via the user interface, a second user input that may request that a sleep tracking setup procedure be performed. The method may further include, in response to the second user input, performing a second detection process based on data received from the radar sensor to determine whether the user is likely present and static. The method may further include, in response to the second detection process, determining that there is likely to be excessive movement. The method may further include, in response to determining that excessive movement is likely, outputting a recommendation to eliminate a source of movement in the environment of the contactless sleep tracking device. The second user input may occur prior to the user input. The method may further include, in response to determining that excessive movement is likely, outputting an indication that sleep tracking has not been successfully set up. The radar sensor may be a frequency modulated continuous wave radar sensor implemented using a single integrated circuit (IC).
Drawings
A further understanding of the nature and advantages of the various embodiments may be realized by reference to the following drawings. In the drawings, similar components or features may have the same reference numerals. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description applies to any one of the similar components having the same first reference label, regardless of the second reference label.
Fig. 1 illustrates an embodiment of a system for performing contactless sleep tracking.
Fig. 2A illustrates an embodiment of a sleep tracking system.
Fig. 2B illustrates an embodiment of a sleep tracking system with integrated beam aiming.
Fig. 2C illustrates an embodiment of a frequency modulated continuous wave radar radio wave output by a radar subsystem.
Fig. 3A illustrates an embodiment of a contactless sleep tracking device.
Fig. 3B illustrates an exploded view of an embodiment of a contactless sleep tracking device.
Fig. 4 illustrates a cross-sectional view of a contactless sleep tracking device.
Fig. 5 illustrates an embodiment of a state machine for determining when a person is sleeping.
Fig. 6 illustrates a timeline of detected sleep states and environmental disturbances.
Fig. 7 illustrates an embodiment of waveform data in which movement due to vital signs of a user may be observed.
Fig. 8 illustrates an embodiment of a method for performing contactless sleep detection and interference attribution.
Fig. 9 illustrates an embodiment of a contactless sleep tracking device that monitors multiple users.
Fig. 10 illustrates an embodiment of a sleep tracking system that may track multiple users.
Fig. 11A and 11B illustrate graphs of movements detected at different distances.
Fig. 12 illustrates a graph of detected movement divided into multiple targets.
Fig. 13 illustrates an embodiment of a method for performing sleep tracking for multiple users.
Fig. 14 illustrates an embodiment of a beam steering module for a contactless sleep tracking device aimed at a direction in which sleep tracking is performed.
Fig. 15 illustrates an embodiment of an antenna layout of a radar subsystem that may be used in conjunction with a beam steering module of a contactless sleep tracking device.
Fig. 16 illustrates another embodiment of a beam steering module for targeting directions in which sleep tracking is performed.
Fig. 17 illustrates an embodiment of a method for directional targeting sleep tracking.
Fig. 18 illustrates a cough detection and attribution device.
Fig. 19 illustrates an example of a timeline of cough and sleep disturbance detected for a single monitored user.
Fig. 20 illustrates an example of a timeline of detected cough and sleep disturbance for a plurality of monitored users.
Fig. 21 illustrates an embodiment of a method for cough detection and attribution.
Fig. 22 illustrates an embodiment of a sleep tracking system that performs a sleep setup process.
Fig. 23 illustrates an embodiment of a first instructional user interface presented during a sleep setup process.
Fig. 24 illustrates an embodiment of a second instructional user interface presented during a sleep setup process.
Fig. 25 illustrates an embodiment of a third instructional user interface presented during a sleep setup process.
Fig. 26 illustrates an embodiment of a user interface presented during a sleep setup process.
Fig. 27 illustrates an embodiment of a user interface presented after a successful sleep setup process.
Fig. 28 illustrates an embodiment of a user interface presented after an unsuccessful sleep setup procedure.
Fig. 29 illustrates another embodiment of a user interface presented after an unsuccessful sleep setup procedure.
Fig. 30 illustrates an embodiment of a method for performing an initial setup procedure of a sleep tracking device.
Detailed Description
The embodiments detailed herein focus on systems and devices that perform non-contact sleep monitoring, attribute causes of sleep disruption, and perform non-contact sleep analysis. A single device may sit at the user's bedside. For some embodiments, it is preferred that the device is not in physical contact with the user or the user's bed. The device may monitor the user without any physical contact to assess whether the user is awake or asleep while in bed. When the user transitions from asleep to awake, the device can determine what caused the user to wake up. In addition to performing sleep monitoring, the device may monitor one or more environmental conditions, such as ambient sound, light, and temperature. If a sufficiently loud sound, an increase in illumination, and/or a sufficiently significant temperature change is detected near the time the user wakes up, that environmental condition may be identified as the cause of the user waking up.
Over the course of one or more nights, the contactless sleep analysis device may monitor the user to determine when the user wakes up during the night and which environmental conditions those awakenings may be attributed to. When the user wakes up, the user may be provided with information indicating when they woke up, how often they woke up, and/or which environmental conditions, if any, were the likely reasons for waking up. If the same environmental condition repeatedly causes the user to wake up, the device may suggest that the user try to eliminate or reduce the presence of that environmental condition. For example, if the light level in the user's sleep environment tends to increase before the user wakes up, the user should address the source of the light to improve their sleep. Such light may result, for example, from car headlights shining on a window or from a display screen activating. To remedy this, the user may adjust their window coverings or turn off the power to the display screen, respectively.
Detection of whether the user is asleep or awake may be accomplished using low-power radar. The low-power radar, which may be a Frequency Modulated Continuous Wave (FMCW) radar, may involve the contactless sleep analysis device transmitting radio waves toward the user's bed. The reflected radio waves may be analyzed to determine the distance to the object causing the reflection and the phase shift of the reflected radio waves. Large movements detected using radar can be used to determine whether the user is awake or asleep. Small movements may be used to measure vital signs of the user, such as heart rate and respiration rate.
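For context, the sketch below shows one conventional way a de-chirped FMCW signal can be converted into a range estimate via a range FFT; the chirp parameters and function names are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Illustrative FMCW parameters (assumed, not taken from the patent).
C = 3e8            # speed of light, m/s
BANDWIDTH = 3e9    # chirp sweep bandwidth, Hz
CHIRP_TIME = 1e-3  # chirp duration, s
FS = 1e6           # ADC sample rate of the de-chirped (beat) signal, Hz

def range_profile(beat_samples: np.ndarray) -> np.ndarray:
    """Return the magnitude range profile of one de-chirped FMCW chirp."""
    window = np.hanning(len(beat_samples))
    return np.abs(np.fft.rfft(beat_samples * window))

def bin_to_range(bin_index: int, num_samples: int) -> float:
    """Convert a range-FFT bin index to distance in meters.

    The beat frequency of a target at range R is f_b = 2 * R * slope / c,
    where slope = BANDWIDTH / CHIRP_TIME, so R = f_b * c / (2 * slope).
    """
    slope = BANDWIDTH / CHIRP_TIME
    beat_freq = bin_index * FS / num_samples
    return beat_freq * C / (2 * slope)

# Example: the strongest reflector in a simulated chirp.
n = 1024
t = np.arange(n) / FS
beat = np.cos(2 * np.pi * 40e3 * t)       # 40 kHz beat tone -> ~2 m target
profile = range_profile(beat)
peak_bin = int(np.argmax(profile))
print(f"estimated range: {bin_to_range(peak_bin, n):.2f} m")
```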
FMCW radar can be particularly effective for observing vital signs of the user. In general, FMCW allows finer movement measurements than Ultra Wideband (UWB) radar. For example, a UWB-based device may be able to detect a 10 mm movement at a distance of 3 m, while an FMCW device may be able to detect a 2 mm movement at a similar distance. This advantage arises because FMCW allows the phase shift of the reflected radio waves to be measured, which reveals small displacements of the object.
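The sub-millimeter sensitivity comes from tracking the phase of a range bin over time rather than the range bin itself. A minimal sketch, assuming a 60 GHz carrier (within the 57 to 64 GHz band mentioned elsewhere in this disclosure):

```python
import numpy as np

WAVELENGTH = 3e8 / 60e9   # ~5 mm carrier wavelength at an assumed 60 GHz

def displacement_mm(phase_history: np.ndarray) -> np.ndarray:
    """Convert the phase of one range bin over successive chirps to displacement.

    A radial displacement d changes the round-trip phase by
    delta_phi = 4 * pi * d / wavelength, so d = delta_phi * wavelength / (4 * pi).
    """
    unwrapped = np.unwrap(phase_history)              # remove 2*pi jumps
    return (unwrapped - unwrapped[0]) * WAVELENGTH / (4 * np.pi) * 1e3

# Example: a 1 mm chest displacement produces roughly 2.5 radians of phase change.
phases = np.array([0.0, 1.26, 2.51])                  # simulated phase samples
print(displacement_mm(phases))                        # ~[0.0, 0.5, 1.0] mm
```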
The contactless sleep analysis device may be highly privacy-preserving. Sleep data may not be collected without explicit permission granted by the user. An indication may be presented on the display whenever sleep data is being collected. In some embodiments, sleep-related data is not transmitted to a remote server. Rather, in such embodiments, sleep data is available only locally on the device. In some embodiments, the user may be required to give explicit consent before any sleep-related data is transmitted to a remote server for storage and/or analysis. In some embodiments, no identity of the user is stored with the sleep data; thus, without additional information, it may not be possible to determine to whom the sleep data corresponds.
In some embodiments, sleep analysis may be performed on more than one user concurrently. In such a multi-user arrangement, two users may be the most common case (e.g., two spouses); however, three or more users are also possible (e.g., two spouses and a child). This arrangement may allow sleep analysis to be performed separately for each user. The contactless sleep analysis device may store or output sleep data for each user separately, and may provide separate sleep reports for each user, which may indicate environmental events that may have caused the individual user to wake up. Thus, advantageously, multiple users sleeping in the same bed may have their sleep monitored separately over the same period of time, despite the presence of a single non-contact sleep analysis device.
As detailed previously for a single user, one or more environmental sensors may be used to monitor environmental factors. The environmental factors may be monitored to determine whether an environmental factor caused a user to wake. Additionally, a user's awakening may be attributed to another user's movement. For example, if a first user turns over in bed, that movement may be identified as an environmental factor that caused a second user to wake up.
Depending on the layout of the user's bedroom, the direction from the contactless sleep analysis device to the place where the one or more users sleep may vary. For example, one user may place the contactless sleep analysis device on a nightstand that is taller than the bed, while another user may place the device on a nightstand that is the same height as the bed or shorter than the bed. Additionally or alternatively, the contactless sleep analysis device may be rotated horizontally at an angle to the position where the user sleeps. In some or all of the embodiments detailed herein, the direction monitored by the non-contact sleep analysis device may be aimed vertically and/or horizontally toward the one or more users.
A beam steering module, which may perform preprocessing on the digital data received from the radar subsystem, may perform weighted delay-and-sum (WDAS) beamforming. Depending on the number and location of the antennas through which reflected radio waves are sensed by the radar subsystem, targeting may be performed vertically and/or horizontally. The beam steering may take into account the particular layout of the radar subsystem's antennas or may be agnostic to the antenna layout. Through a training process, the non-contact sleep tracking device can identify the beam steering direction in which the movement most relevant to vital signs occurs. The weights associated with that direction may be applied during sleep tracking so that digital beam steering targets the area where the user is likely located.
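A hedged sketch of that training idea: apply candidate per-antenna weights, combine the streams, and keep the weights whose combined stream shows the most motion energy. The motion-energy criterion stands in for the least-squares direction optimization described in the summary, and all names are illustrative.

```python
import numpy as np

def steered_stream(streams: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Combine per-antenna radar streams into one directionally targeted stream.

    streams: complex array of shape (num_antennas, num_samples)
    weights: complex array of shape (num_antennas,) encoding per-antenna
             phase delays (and optional amplitudes) for a chosen direction.
    """
    return weights @ streams

def pick_direction(streams: np.ndarray, candidate_weights: list) -> np.ndarray:
    """Return the candidate weight vector whose combined stream shows the most
    motion energy, a stand-in for 'the direction where vital-sign movement is
    strongest' during a training pass."""
    def motion_energy(w: np.ndarray) -> float:
        combined = steered_stream(streams, w)
        return float(np.sum(np.abs(np.diff(combined)) ** 2))
    return max(candidate_weights, key=motion_energy)

# Example: two antennas, candidate weights that steer slightly up or down.
rng = np.random.default_rng(0)
streams = rng.standard_normal((2, 512)) + 1j * rng.standard_normal((2, 512))
candidates = [np.array([1.0, np.exp(1j * phi)]) for phi in (-0.6, 0.0, 0.6)]
best = pick_direction(streams, candidates)
print("selected weights:", best)
```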
A non-contact sleep detection apparatus with or without a beam steering module may be used for cough and/or snore detection and attribution of one or more users. Detecting coughing and/or snoring based on audio can be a relatively accurate process. However, determining the particular source of cough or snoring can be challenging. In addition to or instead of performing sleep tracking, the devices detailed herein may be used as contactless cough detection and attribution devices. (while this document focuses on cough detection and attribution, such systems and methods may be applied to snoring or other sounds (e.g., speaking during sleep) through the use of detection systems configured or trained to detect the desired sounds.)
If a non-contact cough and/or snore detection and attribution device ("cough attribution device") is being used to monitor a single user, it may be determined whether a detected cough was produced by the monitored user. For some embodiments, the cough attribution process is based at least in part on the FMCW radar signal. For example, a cough may have originated from another person nearby, a pet, or audio output by a television or other audio output device. If the cough attribution device is used to monitor multiple users, the cough may be attributed to one of the monitored users, or the cough may not be attributed to any of the monitored users if it is determined that the cough originated from some other source (again, such as another person nearby, a pet, or audio output by a television or other audio output device).
The cough attribution device may incorporate data regarding the monitored user's cough into a sleep report provided to the user, or the cough data may be presented in a separate report. Cough data for a particular user may be compiled over an extended period of time (e.g., days, weeks, months) and may allow the user to be provided with cough trend information, such as an indication that the user's cough level is tending to rise, fall, or remain substantially unchanged over the extended period of time.
To perform sleep tracking, cough detection and attribution, and/or other forms of health monitoring, a setup process may be performed to ensure that the user has properly placed the sleep tracking device or cough attribution device and that the surrounding environment is configured in a manner that allows the device to operate properly. For some embodiments, the setup process includes training the system to use beam steering to aim at one or more users at their typical sleeping positions in the bed. The user may request that sleep tracking (or another form of health monitoring process) be set up and may assume a typical sleeping position. Using the radar, the user may be monitored to determine whether the user is present within the range of distances monitored by the sleep tracking device. The user may be determined to be static based on a trained machine learning model, or the user's breathing may be detected in the absence of any other significant movement. If it is determined that the user is present and static, the user may be monitored for a period of time to determine whether the user remains present and static for at least a threshold amount of time (or satisfies some other form of determination using at least partially time-based threshold criteria). If it is determined that the user has been classified as present and static for a sufficiently long period of time, sleep tracking may be activated and an indication may be output to the user indicating that setup has been performed successfully. Successful completion of such a setup procedure indicates that the device is directed sufficiently toward where the user sleeps, is at an acceptable distance, and that other moving objects have been removed from the environment. If the user is not determined to be static, or, once identified as static, does not remain in that state for a sufficient period of time, the setup process may fail and the user may be provided with advice regarding steps to take to improve the likelihood of successful completion when setup is attempted again.
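A minimal sketch of such a presence-and-static check is shown below; the state labels, polling interval, timeout, and thresholds are illustrative assumptions rather than values specified by this disclosure.

```python
import time
from dataclasses import dataclass

@dataclass
class SetupResult:
    success: bool
    advice: str = ""

def run_setup(classify_state, required_seconds=30, poll_seconds=1, timeout_seconds=120):
    """Poll a radar-based state classifier until the user has been present and static long enough.

    classify_state() is assumed to return one of "present_static",
    "present_moving", or "absent" based on the radar output.
    """
    static_run = 0
    elapsed = 0
    while elapsed < timeout_seconds:
        state = classify_state()
        # Reset the run whenever the user moves or leaves the monitored range.
        static_run = static_run + poll_seconds if state == "present_static" else 0
        if static_run >= required_seconds:
            return SetupResult(True)
        time.sleep(poll_seconds)
        elapsed += poll_seconds
    return SetupResult(False, "Aim the device at the bed, move it to an acceptable "
                              "distance, and remove moving objects from its field of view.")
```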
Further details regarding these embodiments and additional embodiments may be understood with reference to the accompanying drawings. Fig. 1 illustrates an embodiment of a system 100 for performing contactless sleep detection and interference attribution. The system 100 may include: contactless sleep tracking device 101 ("device 101"); a network 160; and a cloud-based server system 170. The device 101 may comprise: a processing system 110; sleep data store 118; a radar subsystem 120; an environmental sensor suite 130; a display 140; a wireless network interface 150; and a speaker 155. In general, the device 101 may include a housing that houses all components of the device 101. Further details regarding such a housing are provided with respect to fig. 3A and 3B, according to some embodiments.
The processing system 110 may include one or more processors configured to perform various functions, such as the following: a radar processing module 112; a sleep state detection engine 114; and an environmental event correlation engine 116. The processing system 110 may include one or more special purpose or general-purpose processors. Such special purpose processors may include processors specifically designed to perform the functions detailed herein. Such special purpose processors may be ASICs or FPGAs, which are general purpose components that are physically and electrically configured to perform the functions detailed herein. Such general purpose processors may execute specialized software stored using one or more non-transitory processor-readable media, such as Random Access Memory (RAM), flash memory, a Hard Disk Drive (HDD), or a Solid State Drive (SSD).
The radar subsystem 120 (also referred to as a radar sensor) may be a single Integrated Circuit (IC) that transmits, receives, and outputs data indicative of the received reflected waveforms. The output of radar subsystem 120 may be analyzed using radar processing module 112 of processing system 110. Further details regarding radar subsystem 120 and radar processing module 112 are provided with respect to fig. 2.
The device 101 may include one or more environmental sensors, such as all, one, or some combination of the environmental sensors provided as part of the environmental sensor suite 130. The environmental sensor suite 130 may include: a light sensor 132; a microphone 134; a temperature sensor 136; and a Passive Infrared (PIR) sensor 138. In some embodiments, there may be multiple instances of some or all of these sensors. For example, in some embodiments, there may be multiple microphones. The light sensor 132 may be used to measure the amount of ambient light present in the general environment of the device 101. Microphone 134 may be used to measure the ambient noise level present in the general environment of device 101. The temperature sensor 136 may be used to measure the ambient temperature of the general environment of the device 101. PIR sensor 138 may be used to detect moving animate objects (e.g., people, pets) within the general environment of device 101. Other types of environmental sensors are possible. For example, a camera and/or humidity sensor may be incorporated as part of the environmental sensor suite 130. As another example, an active infrared sensor may be included. In some embodiments, some data, such as humidity data, may be obtained from a nearby weather station having data available via the internet. In some embodiments, active acoustic sensing methods may be implemented, including but not limited to sonar and ultrasound, and may include single or arrayed acoustic sources and/or receivers. Such an arrangement may be used as one or more auxiliary sensing modalities in combination with the other sensors and methods described herein.
In some embodiments, one, some, or all of the sensors of environmental sensor suite 130 may be devices external to device 101. For example, one or more remote environmental sensors may communicate with device 101 directly (e.g., via a direct wireless communication method or via a low power mesh network) or indirectly via one or more other devices (e.g., via an access point of the network or via a remote server).
The device 101 may include various interfaces. Display 140 may allow processing system 110 to present information for viewing by one or more users. The wireless network interface 150 may allow communication using a Wireless Local Area Network (WLAN), such as a WiFi-based network. The speaker 155 may allow sound, such as synthesized speech, to be output. For example, a response to a spoken command received via microphone 134 may be output via speaker 155 and/or display 140. The spoken command may be analyzed locally by the device 101 or may be transmitted via the wireless network interface 150 to the cloud-based server system 170 for analysis. A response based on the analysis of the spoken command may be sent back to the device 101 via the wireless network interface 150 for output via the speaker 155 and/or display 140. Additionally or alternatively, the speaker 155 and microphone 134 may be collectively configured for active acoustic sensing, including ultrasonic acoustic sensing. Additionally or alternatively, other forms of wireless communication may be possible, such as using low power wireless mesh network radios and protocols (e.g., Thread) to communicate with various smart home devices. In some embodiments, a wired network interface, such as an Ethernet connection, may be used to communicate with a network. Furthermore, the evolution of wireless communications to fifth generation (5G) and sixth generation (6G) standards and technologies provides greater throughput and lower latency, which enhances mobile broadband services. 5G and 6G technologies also provide new classes of service, via control and data channels, for vehicular networking (V2X), fixed wireless broadband, and the Internet of Things (IoT). Such standards and technologies may be used by device 101 for communication.
Low power wireless mesh network radios and protocols may be used to communicate with power limited devices. A power limited device may be an exclusively battery powered device. Such devices may rely exclusively on one or more batteries for power and, as such, the amount of power used for communication may be kept low in order to reduce the frequency at which the one or more batteries need to be replaced. In some embodiments, the power limited device may have the capability to communicate via a relatively high power network (e.g., Wi-Fi) and a low power mesh network. Power limited devices may use the relatively high power network infrequently in order to conserve power. Examples of such power limited devices include environmental sensors (e.g., temperature sensors, carbon monoxide sensors, smoke sensors, motion sensors, presence detectors) and other forms of remote sensors.
Note that some embodiments of the device 101 do not have any still or video cameras. By not incorporating an onboard camera, the privacy concerns of nearby users may be alleviated. For example, the device 101 may typically be installed in a user's bedroom. For many reasons, a user may not want a camera located in such a private space or aimed at the user while the user is sleeping. In other embodiments, the device 101 may have a camera, but the lens of the camera may be obscured by a mechanical lens shutter. To use the camera, the user may be required to physically open the shutter to allow the camera to view the environment of the device 101. When the shutter is closed, user privacy can be ensured with respect to the camera.
Wireless network interface 150 may allow wireless communication to be performed with network 160. Network 160 may include one or more public and/or private networks. Network 160 may include a private local wired or wireless network, such as a home wireless local area network. Network 160 may also include a public network, such as the Internet. Network 160 may allow device 101 to communicate with a remotely located cloud-based server system 170.
The cloud-based server system 170 may provide various services to the device 101. Regarding sleep data, cloud-based server system 170 may include processing and storage services for sleep related data. While the embodiment of fig. 1 relates to the processing system 110 performing sleep state detection and environmental event correlation, in other embodiments, such functionality may be performed by the cloud-based server system 170. Further, in addition to or in lieu of sleep data store 118, sleep related data may be stored by cloud-based server system 170, such as mapped to a common user account to which device 101 is linked. If multiple users are monitored, sleep data may be stored and mapped to a primary user account or to an account of the corresponding user.
Whether a single user or multiple users are monitored, each user may be required to provide their informed consent. Such informed consent may involve each user agreeing to an end-user agreement providing that the data is used in compliance with HIPAA and/or other commonly accepted health information security and privacy standards. Periodically, such as once per year, users may be required to renew their consent to the collection of sleep data. In some embodiments, each end user may receive periodic notifications, such as via a mobile device (e.g., a smartphone), that alert each user that their sleep data is being collected and analyzed, and provide each user with an option to disable such data collection.
The cloud-based server system 170 may additionally or alternatively provide other cloud-based services. For example, device 101 may additionally function as a home assistant device. The home assistant device may respond to voice queries from the user. In response to detecting a voice trigger phrase being spoken, device 101 may record audio. The audio stream may be transmitted to the cloud-based server system 170 for analysis. The cloud-based server system 170 may perform a speech recognition process, use a natural language processing engine to understand the user's query, and provide a response to be output by the device 101 as synthesized speech, output to be presented on the display 140, and/or a command to be performed by the device 101 (e.g., to increase the volume of the device 101) or sent to some other smart home device. Further, the query or command may be submitted to the cloud-based server system 170 via the display 140, which may be a touch screen. For example, the device 101 may be used to control various smart home devices or home automation devices. Such commands may be sent by device 101 directly to the device to be controlled or may be sent via cloud-based server system 170.
Based on the data output by radar processing module 112, sleep state detection engine 114 may be used to determine whether the user is likely asleep or awake. The sleep state detection engine 114 may proceed through a state machine, such as that detailed with respect to fig. 5, or may utilize the state identified using such a state machine to determine whether the user is likely awake or asleep. For example, if it is determined that the user is in bed and has been still for at least a period of time, the user may be identified as asleep. The output of the sleep state detection engine 114 may be used by the environmental event correlation engine 116. The environmental event correlation engine 116 may analyze data received from the environmental sensor suite 130. The data from each environmental sensor may be monitored for: 1) an increase in an environmental condition above a fixedly defined threshold (or some other form of determination using a threshold criterion); and/or 2) an increase in an environmental condition by at least a predetermined amount or percentage. Alternatively, some other form of threshold criterion may be used to analyze changes in environmental conditions. As an example, data indicative of the light level in the surrounding environment may be output continuously or periodically by the light sensor 132. The environmental event correlation engine 116 may determine whether: 1) the amount of ambient illumination has increased from below a fixedly defined threshold to above the fixedly defined threshold (or some other form of determination using a threshold criterion based at least in part on illumination); and/or 2) the amount of ambient illumination has increased by at least a predefined percentage. If either or both conditions occur, it may be determined that an environmental event has occurred. The environmental event may be timestamped by the environmental event correlation engine 116. The environmental event correlation engine 116 may then determine whether the user waking up can be attributed to the identified environmental event. Further details regarding the relationship between environmental events and sleep events are provided in connection with fig. 6.
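The threshold criteria described above might be sketched as follows; the function name and the example values are hypothetical.

```python
def detect_environment_event(prev_value, new_value, fixed_threshold=None, rel_increase=None):
    """Return True when a sensor reading change qualifies as an environmental event.

    Implements the two illustrative criteria described above: crossing a fixed
    threshold, or increasing by at least a predefined percentage. Threshold
    values are assumptions, not values from this disclosure.
    """
    crossed = (fixed_threshold is not None
               and prev_value < fixed_threshold <= new_value)
    jumped = (rel_increase is not None and prev_value > 0
              and (new_value - prev_value) / prev_value >= rel_increase)
    return crossed or jumped

# Example: ambient light jumps from 5 lux to 80 lux while the user is asleep.
event = detect_environment_event(5.0, 80.0, fixed_threshold=50.0, rel_increase=0.5)  # True
```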
Fig. 2A illustrates an embodiment of a sleep tracking system 200A ("system 200A"). The system 200A may include: radar subsystem 205 (which may represent an embodiment of radar subsystem 120); radar processing module 210 (which may represent one embodiment of radar processing module 112); and, a beam steering module 230.
The radar subsystem 205 may include an RF transmitter 206, an RF receiver 207, and a radar processing circuit 208. The RF transmitter 206 may transmit radio waves, such as in the form of continuous wave (CW) radar. RF transmitter 206 may use frequency modulated continuous wave (FMCW) radar. FMCW radar may operate in a burst mode or a continuous sparse sampling mode. In burst mode, a frame or burst of multiple chirps may be output by the RF transmitter 206, the chirps being separated by relatively short periods of time. Each frame may be followed by a relatively long amount of time until the subsequent frame. In the continuous sparse sampling mode, no frames or bursts of chirps are output; rather, a chirp is output periodically. The interval between chirps in the continuous sparse sampling mode may be greater in duration than the interval between intra-frame chirps of the burst mode. In some embodiments, the radar subsystem 205 may operate in burst mode, but may combine (e.g., average) the output raw chirp waterfall data of each burst together to create simulated continuously sparsely sampled chirp waterfall data. In some embodiments, raw waterfall data collected in burst mode may be better suited for gesture detection, while raw waterfall data collected in continuous sparse sampling mode may be better suited for sleep tracking, vital sign detection, and health monitoring in general. Gesture detection may be performed by other hardware or software components (not shown) that use the output of the radar subsystem 205.
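A minimal sketch of the averaging-based simulation of continuous sparse sampling, assuming a simple array layout for the burst-mode waterfall data, is shown below.

```python
import numpy as np

def simulate_sparse_sampling(burst_waterfall):
    """Average the chirps within each burst to approximate continuous sparse sampling.

    burst_waterfall: complex array of shape (num_bursts, chirps_per_burst,
    samples_per_chirp) of raw chirp data captured in burst mode. Returns one
    averaged chirp per burst, shape (num_bursts, samples_per_chirp). The
    shapes are assumptions for illustration.
    """
    return burst_waterfall.mean(axis=1)
```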
The RF transmitter 206 may include one or more antennas and may transmit at 60GHz or about 60 GHz. The frequency of the emitted radio wave may be repeatedly swept from low to high frequencies (or vice versa). The power level for transmission may be very low such that the radar subsystem 205 has an effective range of a few meters or even shorter distances. Further details regarding the radio waves generated and emitted by the radar subsystem 205 are provided with respect to fig. 2C.
The RF receiver 207 includes one or more antennas, different from the transmit antennas, and can receive radio waves transmitted by the RF transmitter 206 that have been reflected by nearby objects. The reflected radio waves may be interpreted by the radar processing circuit 208 by mixing the transmitted radio waves with the reflected received radio waves, thereby producing a mixed signal that can be analyzed for distance. Based on the mixed signal, the radar processing circuit 208 may output raw waveform data, which may also be referred to as raw chirp waterfall data, for analysis by a separate processing entity. The radar subsystem 205 may be implemented as a single integrated circuit (IC), or the radar processing circuit 208 may be a separate component from the RF transmitter 206 and the RF receiver 207. In some embodiments, radar subsystem 205 is integrated as part of device 101 such that RF transmitter 206 and RF receiver 207 are directed in the same direction as display 140. In other embodiments, an external device including radar subsystem 205 may be connected with device 101 via wired or wireless communication. For example, the radar subsystem 205 may be part of an add-on device to a home assistant device.
For the radar subsystem 205, if FMCW is used, a non-ambiguous FMCW range may be defined. Within this range, the distance to an object can be accurately determined. Outside this range, however, a detected object may be erroneously interpreted as being closer than it actually is, as if it were an object within the non-ambiguous range. This incorrect interpretation can be due to the frequency of the mixed signal and the sampling rate of the ADC used by the radar subsystem to convert the received analog signal to a digital signal. If the frequency of the mixed signal is higher than the Nyquist rate of the ADC's sampling, the digital data representing the reflected radar signal output by the ADC can be represented incorrectly (e.g., as a lower frequency indicative of a closer object).
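For illustration only, the standard FMCW relationship between chirp slope, ADC sampling rate, and unambiguous range can be sketched as follows; the numeric values are assumptions, not parameters specified by this disclosure.

```python
def unambiguous_range_m(bandwidth_hz, chirp_duration_s, adc_sample_rate_hz):
    """Largest range whose beat frequency stays below the ADC Nyquist rate.

    Uses the standard FMCW relation f_beat = 2 * R * slope / c, where
    slope = bandwidth / chirp_duration. The parameter values below are
    illustrative assumptions.
    """
    c = 3.0e8                                   # speed of light, m/s
    slope = bandwidth_hz / chirp_duration_s     # Hz per second
    return (adc_sample_rate_hz / 2) * c / (2 * slope)

# Example: a 5.5 GHz sweep over 128 microseconds with a 1.2 MHz ADC gives ~2.1 m.
r_max = unambiguous_range_m(5.5e9, 128e-6, 1.2e6)
```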
When using the device to monitor sleep patterns and vital statistics, the user may be instructed that the user should be the person closest to the device. However, another person or animal may be present in the bed. The non-ambiguous FMCW range may have to be defined far enough, such as two meters, that two persons (or approximately the width of the bed) fall within the non-ambiguous FMCW range of the radar subsystem 205. Two meters may be a suitable distance because it is approximately the width of a commercially available large bed (e.g., a king size bed).
Raw waveform data may be passed from radar subsystem 205 to radar processing module 210. The raw waveform data passed to radar processing module 210 may include waveform data indicative of continuous sparse reflected chirps, resulting either from radar subsystem 205 operating in the continuous sparse sampling mode or from radar subsystem 205 operating in burst mode and performing a conversion process to simulate the raw waveform data produced in the continuous sparse sampling mode. Processing may be performed to convert the burst-sampled waveform data into continuous sparse samples using an averaging process, such that each group of reflected radio waves from a burst is represented by a single averaged sample. The radar processing module 210 may include one or more processors. The radar processing module 210 may include one or more special purpose or general purpose processors. A special purpose processor may comprise a processor specifically designed to perform the functions detailed herein. Such special purpose processors may be ASICs or FPGAs, which are general purpose components that are physically and electrically configured to perform the functions detailed herein. A general purpose processor may execute specialized software stored using one or more non-transitory processor-readable media, such as Random Access Memory (RAM), flash memory, a Hard Disk Drive (HDD), or a Solid State Drive (SSD). The radar processing module 210 may include: a movement filter 211; a frequency weighting engine 212; a range-vital sign transformation engine 213; a range gating filter 214; a spectral summation engine 215; and a neural network 216. Each component of radar processing module 210 may be implemented using software, firmware, or dedicated hardware.
The raw waveform data output by the radar subsystem 205 may be received by the radar processing module 210 and processed first using the movement filter 211. In some embodiments, it is important that the movement filter 211 is the initial component to perform filtering; that is, the order of the processing performed by radar processing module 210 is not interchangeable in some embodiments. Typically, vital sign determination and sleep monitoring may occur while the monitored user is sleeping or attempting to sleep in a bed. In such an environment, there may typically be little movement. Such movement as does occur may be due to movement of the user in the bed (e.g., turning over while attempting to fall asleep or while asleep) and the vital signs of the user, including movement caused by respiration and movement caused by the heartbeat of the monitored user. In such an environment, most of the radio waves emitted from the RF transmitter 206 may be reflected by static objects in the vicinity of the monitored user, such as mattresses, box springs, bed frames, walls, furniture, bedding, and the like. Thus, most of the raw waveform data received from the radar subsystem 205 may be unrelated to user movement and user vital measurements.
The movement filter 211 may include a waveform buffer that buffers "chirps," or segments, of the received raw waveform data. For example, sampling may occur at a rate of 10 Hz. In other embodiments, sampling may be slower or faster. In some embodiments, the movement filter 211 may buffer twenty seconds of received raw waveform chirps. In other embodiments, raw waveform data of a shorter or longer duration is buffered. Filtering may be performed on the buffered raw waveform data to remove raw waveform data indicative of stationary objects. That is, for an object that is moving, such as the chest of the monitored user, the heartbeat and respiration rate of the user will affect the distance and velocity measurements performed by the radar subsystem 205 and output to the movement filter 211. This movement by the user will result in "jitter" in the raw waveform data received during the buffer period. More specifically, jitter refers to the phase shift in the reflected radio waves caused by a moving object. Rather than using the reflected FMCW radio waves to determine the velocity of a moving object, the movement-induced phase shifts in the reflected radio waves may be used to measure vital statistics, including heart rate and respiration rate, as described in detail herein.
For a stationary object, such as furniture, zero phase shift (i.e., no jitter) will be present in the raw waveform data during the buffer period. The movement filter 211 may subtract the raw waveform data corresponding to stationary objects such that raw waveform data indicative of motion is passed to the frequency weighting engine 212 for further analysis. For the remaining processing of the radar processing module 210, the raw waveform data corresponding to stationary objects may be discarded or otherwise ignored.
In some embodiments, an infinite impulse response (IIR) filter is incorporated as part of the movement filter 211. In particular, a single-pole IIR filter may be implemented to filter out raw waveform data that is not indicative of movement. The single-pole IIR filter may be implemented as a high-pass filter that prevents raw waveform data indicative of movement below a particular frequency from passing through to the frequency weighting engine 212. The cut-off frequency may be set based on known limits on human vital signs. For example, the respiration rate may be expected to be between 10 and 60 breaths per minute. Movement data indicating a frequency of less than 10 breaths per minute may be excluded by the filter. In some embodiments, a band-pass filter may be implemented to also exclude raw waveform data indicative of high frequency movement that is not possible for human vital signs. For example, for a person at rest or near rest, the heart rate, which is higher than the respiration rate, may be expected to be unlikely to exceed 150 beats per minute. Raw waveform data indicative of higher frequencies may be filtered out by the band-pass filter.
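A minimal sketch of such a single-pole high-pass filter, applied along the slow-time (chirp) axis so that chirp-to-chirp changes pass through while static reflections are suppressed, is shown below; the coefficient and array layout are illustrative assumptions.

```python
import numpy as np

def highpass_movement_filter(chirp_waterfall, alpha=0.95):
    """Single-pole IIR high-pass filter applied along the slow-time (chirp) axis.

    chirp_waterfall: complex array of shape (num_chirps, samples_per_chirp).
    Reflections from static objects change little from chirp to chirp and are
    suppressed; phase jitter from breathing and heartbeat passes through.
    alpha is an illustrative coefficient, not a value from this disclosure.
    """
    filtered = np.zeros_like(chirp_waterfall)
    for n in range(1, chirp_waterfall.shape[0]):
        # First-order difference fed through a leaky integrator (high-pass response).
        filtered[n] = alpha * (filtered[n - 1]
                               + chirp_waterfall[n] - chirp_waterfall[n - 1])
    return filtered
```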
In some embodiments, it may be possible to further fine-tune the frequencies of the raw waveform data that the movement filter 211 passes to the frequency weighting engine 212. For example, during an initial configuration phase, the user may provide information about the monitored user (e.g., himself or herself, a child), such as age data. Table 1 indicates typical respiration rates for various ages. Similar data may exist for heart rate. The filter may be configured to exclude data outside of the expected respiration rate range, heart rate range, or both.
Age                 Respiration rate range (breaths per minute)
Birth to 6 weeks    30-60
6 months            25-40
3 years             20-30
6 years             18-25
10 years            17-23
Adult               12-18
65-80 years         12-28
Over 80 years       10-30

Table 1
The vital signs of the monitored user being measured are periodic pulse events: the heart rate of the user may vary over time, but the user's heart is expected to continue to beat periodically. The resulting jitter is not a sinusoidal function, but can be understood as a pulse event causing movement of the user's body that more closely resembles a square wave with a relatively low duty cycle. Similarly, the user's breathing rate may vary over time, but breathing is a periodic function performed by the user's body that is similar to a sinusoidal function, except that the user's exhalations are typically longer than the inhalations. Furthermore, at any given time, a particular window of waveform data is being analyzed. Because a particular time window of waveform data is being analyzed, even a perfect sine wave within that window can result in spectral leakage in the frequency domain. The frequency components due to this spectral leakage should be de-emphasized.
The frequency weighting engine 212 may work in conjunction with the range-vital sign transformation engine 213 to determine one (e.g., breathing) or two (e.g., breathing plus heartbeat) frequency components of the raw waveform data. The frequency weighting engine 212 may use frequency windowing, such as a 2D Hamming window (other forms of windowing are possible, such as a Hann window), to emphasize important frequency components of the raw waveform data and de-emphasize or remove waveform data attributable to spectral leakage outside of the defined frequency window. Such frequency windowing may reduce the amplitude of raw waveform data that may be due to processing artifacts. The use of frequency windowing may help reduce the effects of data-dependent processing artifacts while retaining the data needed to separately determine heart rate and respiration rate.
For a stationary bedside FMCW radar-based monitoring device, which may be positioned within 1 to 2 meters of the one or more users being monitored to detect respiration and heart rate (e.g., using radar transmitted as in fig. 2C), a 2D Hamming window emphasizing a respiration range of 10 to 60 bpm (0.16 Hz to 1 Hz) and a heartbeat range of 30 to 150 bpm (0.5 to 2.5 Hz) provides a signal good enough to make reliable measurements without prior knowledge of the subject's age or medical history.
Since heartbeats and breaths are periodic pulse events, heart rate and respiration rate may be represented in the frequency domain by different fundamental frequencies, but each may have many harmonic components at higher frequencies. One of the primary purposes of the frequency weighting engine 212 may be to prevent frequency fluctuations of harmonics of the monitored user's respiration rate from affecting the frequency measurement of the monitored user's heart rate (or vice versa). While the frequency weighting engine 212 may use a 2D Hamming window, it should be appreciated that other window functions or isolation functions may be used to help isolate frequency fluctuations in the monitored user's respiration rate from frequency fluctuations in the monitored user's heart rate.
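One plausible reading of the 2D windowing performed by the frequency weighting engine 212 is sketched below, applying separable Hamming windows along the slow-time and fast-time axes of the buffered data before the transform; this is an assumption-laden illustration rather than the exact weighting used.

```python
import numpy as np

def apply_2d_hamming(buffered_waterfall):
    """Apply a separable 2D Hamming window before the range-vital sign transform.

    buffered_waterfall: complex array of shape (num_chirps, samples_per_chirp)
    (the movement-filtered buffer). Windowing both axes reduces spectral
    leakage outside the window; a minimal sketch under assumed array shapes.
    """
    num_chirps, num_samples = buffered_waterfall.shape
    window_2d = np.outer(np.hamming(num_chirps), np.hamming(num_samples))
    return buffered_waterfall * window_2d
```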
The range-vital sign transformation engine 213 analyzes the received movement filtered waveform data to identify and quantify the movement amplitude at a particular frequency. More specifically, the range-vital sign transformation engine 213 analyzes the phase jitter over time to detect relatively small movements due to user vital signs (e.g., respiratory rate and heart rate) having a relatively low frequency. Analysis by the range-vital sign transformation engine 213 may assume that the frequency components of the motion waveform data are sinusoidal. Furthermore, the transformations used by the range-vital sign transformation engine 213 may also identify the distance over which the frequency is observed. Frequency, amplitude, and distance may all be determined, at least in part, because radar subsystem 205 uses an FMCW radar system.
Before applying the transformation of the range-vital sign transformation engine 213, a zero-padding process may be performed by the range-vital sign transformation engine 213 to append a number of zeros to the motion-filtered raw waveform data. By performing the zero-padding process, the resolution in the frequency domain can be effectively improved, allowing for more accurate low-rate measurements (e.g., low heart rate, low respiration rate). For example, zero padding may help numerically increase the resolution to detect differences of one half breath per minute, compared to a resolution of one breath per minute without zero padding. In some embodiments, three to four times as many zeros may be added as compared to the buffered sample size of the original waveform data. For example, if twenty seconds of buffered raw waveform data is analyzed, zero padding worth sixty to eighty seconds of samples may be added. In particular, zero padding of three to four times the sample count has been found to substantially increase resolution without unduly complicating the transformation process (and thus without being unduly processor intensive).
To determine the amount of zero padding to be performed, equations 1 through 3 may be used. In equation 1, RPM_resolution may be less than 1 in the ideal case.

RPM_resolution = (60 × chirp_rate) / n_fft_slow_time    Equation 1

n_fft_slow_time_min = nearest_power_of_2(60 × chirp_rate)    Equation 2
In some embodiments, a chirp rate (chirp_rate) of 30 Hz may be used. Such a frequency provides sufficient margin with respect to the Nyquist limit for the upper ends of the respiration rate and heart rate ranges. Thus, n_fft_slow_time_min may be 2048. Given a window of 20 seconds over which to estimate the respiratory statistics, equation 3 yields a value of 600.
n_chirps_for_resp = 20 × chirp_rate = 600    Equation 3
This value of 600 is smaller than the required vital-sign FFT size and causes the range-vital sign transformation engine 213 to perform roughly 3x to 4x zero padding. How much zero padding to perform may be balanced against the associated increase in frequency resolution and the amount of computation required to perform the FFT. Zero padding of 3x to 4x has been found to provide adequate resolution for heart rate and respiration rate while limiting the amount of computation that needs to be performed.
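A short sketch reproducing the arithmetic of equations 1 through 3 follows; only the 30 Hz chirp rate and 20-second window from the description are used, and the helper name is hypothetical.

```python
import numpy as np

def slow_time_fft_size(chirp_rate_hz=30, window_seconds=20):
    """Pick the slow-time FFT size so the frequency resolution is below 1 RPM.

    Follows Equations 1-3: at least 60 * chirp_rate bins are needed, rounded
    up to the nearest power of two (2048 for a 30 Hz chirp rate); a 20-second
    window provides only 600 chirps, so the FFT length is roughly 3x-4x the
    available sample count, the difference being filled with zeros.
    """
    n_min = 60 * chirp_rate_hz
    n_fft = 1 << int(np.ceil(np.log2(n_min)))       # nearest power of two >= n_min (Equation 2)
    n_chirps = int(window_seconds * chirp_rate_hz)  # chirps actually available (Equation 3)
    rpm_resolution = 60 * chirp_rate_hz / n_fft     # Equation 1, below 1 RPM
    zero_pad = n_fft - n_chirps
    return n_fft, n_chirps, zero_pad, rpm_resolution

# Example: returns (2048, 600, 1448, ~0.88 RPM); 2048 is about 3.4x the 600 chirps.
```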
The range-vital sign transformation engine 213 may perform a series of Fourier transforms (FTs) to determine the frequency components of the waveform data output by the frequency weighting engine 212. In particular, the frequencies present in the waveform data and the amplitudes of the waveform data at those frequencies may be determined by the range-vital sign transformation engine 213 performing a series of fast Fourier transforms (FFTs).
Waveform data obtained over a period of time may be represented in multiple dimensions. A first dimension (e.g., along the y-axis) may correspond to the samples of waveform data within a particular chirp, and a second dimension (e.g., along the x-axis) corresponds to a particular sample index of waveform data collected across the plurality of chirps. A third dimension of the data (e.g., along the z-axis) indicates the intensity of the waveform data.
A plurality of FFTs may be performed based on the first and second dimensions of the waveform data. FFTs may be performed along each of the first and second dimensions: an FFT may be performed for each chirp, and an FFT may be performed for each particular sample index across the plurality of chirps occurring during the period of time. The FFT performed on the waveform data of a particular reflected chirp may indicate one or more frequencies that, in FMCW radar, indicate the distances at which objects reflecting the transmitted radio waves are present. The FFT performed for a particular sample index across multiple chirps may measure the frequency of the phase jitter across those chirps. Thus, the first dimension of the FFT may provide the distance at which the vital statistics are present, and the second dimension of the FFT may provide the frequency of the vital statistics. The output of the FFTs performed across the two dimensions indicates: 1) the frequency of the vital statistics; 2) the range at which the vital statistics are measured; and 3) the amplitude measured at that frequency. In addition to values due to vital statistics present in the data, there may be noise, which may be filtered, for example, using the spectral summation engine 215. The noise may be due in part to heart rate and respiration not being perfect sine waves.
In particular, the transform performed by the range-vital sign transformation engine 213 is different from a range-Doppler transform. Rather than analyzing changes in velocity (as in a range-Doppler transform), periodic changes in phase shift over time are analyzed as part of the range-vital sign transform. The range-vital sign transform is tuned to identify small movements (e.g., respiration, heartbeat) that occur over a relatively long period of time by tracking phase changes, referred to as phase jitter. As detailed previously, zero padding is performed to allow sufficient resolution to accurately determine heart rate and respiration rate.
The range gating filter 214 is used to monitor a defined range of interest and exclude waveform data due to movement beyond the defined range of interest. For the arrangements detailed herein, the defined range of interest may be 0 to 1 meter. In some embodiments, the defined range of interest may be different or may be set by the user (e.g., via a training or setup process) or by a service provider. In some embodiments, the goal of the arrangement may be to monitor the one person closest to the device (and exclude or isolate data from any other person farther away, such as a person sleeping beside the monitored person). In other embodiments, if both persons are to be monitored, the data may be isolated, as described in detail with respect to FIG. 12. Thus, the range-vital sign transformation engine 213 and the range gating filter 214 are used to separate, exclude, or remove movement data due to objects outside the defined range of interest and to sum the energy of movement data due to objects within the defined range of interest. The output of the range gating filter 214 may include data whose range falls within the allowable range of the range gating filter 214. The data may further have frequency and amplitude dimensions; thus, the data may have three dimensions.
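A minimal sketch of the two-dimensional transform followed by range gating, under assumed array shapes and with the range-bin-to-meter conversion omitted, is shown below.

```python
import numpy as np

def range_vital_transform(windowed_waterfall, n_fft_slow, max_range_bin):
    """2D FFT over fast time (range) and slow time (vital-sign frequency), then range gating.

    windowed_waterfall: complex array of shape (num_chirps, samples_per_chirp)
    after movement filtering and windowing. n_fft_slow includes the zero
    padding discussed above; max_range_bin is the gate corresponding to the
    defined range of interest (e.g., roughly 1 meter). Parameter names and
    shapes are assumptions for illustration.
    """
    range_profile = np.fft.fft(windowed_waterfall, axis=1)            # per-chirp FFT -> range bins
    vital_spectrum = np.fft.fft(range_profile, n=n_fft_slow, axis=0)  # per-range-bin FFT over chirps
    gated = vital_spectrum[:, :max_range_bin]                         # keep only the range of interest
    return np.abs(gated)  # amplitude per (vital-sign frequency, range) pair
```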
The spectral summation engine 215 may receive the output from the range gating filter 214. The spectral summation engine 215 may be used to transfer the energy of the harmonic frequencies of the measured heart rate and respiration rate and add that harmonic frequency energy to the fundamental frequency energy. This function may be referred to as harmonic spectral summation (HSS). Heartbeats and breaths are not sinusoidal; thus, in the frequency domain, harmonics will appear at frequencies above the fundamental frequency of the user's respiration rate and the fundamental frequency of the user's heart rate. One of the main purposes of the spectral summation engine 215 is to prevent harmonics of the monitored user's respiration rate from affecting the frequency measurement of the monitored user's heart rate (or vice versa). HSS may be performed for the second order by adding the original spectrum to a downsampled (by a factor of two) instance of the spectrum. This procedure can also be applied to higher-order harmonics so that their respective spectra are added to the spectrum at the fundamental frequency.
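A rough sketch of this harmonic folding, assuming a simple downsample-and-add implementation, is shown below; the maximum harmonic order is an illustrative choice.

```python
import numpy as np

def harmonic_spectral_summation(spectrum, max_order=3):
    """Fold harmonic energy back onto the fundamental by adding downsampled copies.

    spectrum: 1D amplitude spectrum over the vital-sign frequency bins.
    Downsampling by k maps the energy at k times a frequency onto that
    frequency, so adding each downsampled copy sums harmonic energy into the
    fundamental. max_order is an assumption for illustration.
    """
    summed = spectrum.astype(float).copy()
    for k in range(2, max_order + 1):
        folded = spectrum[::k]                  # every k-th bin: harmonic k lands on the fundamental
        summed[:folded.shape[0]] += folded
    return summed
```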
At this stage, for a person lying still in bed (except for movements due to respiration and heartbeat), two dominant frequency peaks would be expected in the frequency data. However, if the monitored user's body is moving, such as turning over in bed, the energy will be distributed much more broadly across the spectrum. Such large physical movements may appear as a large number of small peaks in the frequency data. If the bed is empty rather than occupied, there may be no or few frequency components above the noise floor because the movement filter 211 has previously filtered out the raw waveform data corresponding to static objects. The distribution and magnitude of the frequency peaks across the spectrum can therefore be used to determine whether the user is likely to be awake or asleep.
The spectral summation engine 215 may output feature vectors indicative of heart rate (e.g., in beats per minute) and respiration rate (e.g., in breaths per minute). The feature vector may indicate frequency and amplitude. The neural network 216 may be used to determine whether the heart rate and/or respiration rate indicated in the output of the feature vector from the spectral summation engine 215 should be considered valid. Accordingly, the heart rate and respiration rate output by the spectral summation engine 215 may be stored, presented to the user, and/or deemed valid based on the output of the neural network 216. The neural network 216 may be trained (e.g., with supervised learning performed using training data sets) to output one of three states, such as those indicated in table 2, by performing spectral analysis. The vital statistics may be considered valid when it is determined that the user is present and that the detected movement is due to vital signs of the user.
Each state in Table 2 is associated with a different spectral energy and spectral sparsity profile. Spectral energy refers to the sum of the energy detected across the spectrum due to the presence of motion within the monitored region. Spectral sparsity indicates whether the movement tends to be distributed across a broad frequency range or clustered at a few specific frequencies. For example, if energy peaks occur at only a few frequencies, such as when the vital signs of the user (but no other movements) are detected, the spectral sparsity is high. However, if peaks exceeding a threshold (or satisfying some other form of determination based at least in part on amplitude threshold criteria) occur at many frequencies, the spectral sparsity is low.
As an example, motion due to vital signs such as heart beat may indicate significant movement (e.g., high spectral energy) at a particular frequency (e.g., high spectral sparsity); motion due to a user moving a limb may also indicate significant movement (high spectral energy), but may have low spectral sparsity. The neural network may be trained to distinguish each state based on the spectral energy profile output by the spectral summation engine 215. Thus, the neural network 216 may be provided with two features, a first value representing spectral energy and a second value representing spectral frequency sparsity.
The output of the spectral summation engine 215 may be characterized as a feature vector having frequency as a first dimension and amplitude as a second dimension. The first value, representing spectral energy, may be calculated by determining the maximum amplitude present in the feature vector output by the spectral summation engine 215. The maximum amplitude value may be normalized to a value between 0 and 1. The second value, representing spectral sparsity, may be calculated by subtracting the median amplitude of the feature vector from the maximum amplitude. Here too, the calculated sparsity may be normalized to a value between 0 and 1.
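These two feature computations might be sketched as follows; the full-scale normalization constant is an assumption, since the description states only that both features are normalized to values between 0 and 1.

```python
import numpy as np

def spectral_features(amplitudes, full_scale):
    """Compute the two classifier features from the summed amplitude spectrum.

    Spectral energy: the maximum amplitude, normalized to [0, 1] by an assumed
    full-scale value. Spectral sparsity: the maximum minus the median
    amplitude, normalized the same way. full_scale is a hypothetical
    parameter, not a value from this disclosure.
    """
    amplitudes = np.asarray(amplitudes, dtype=float)
    peak = amplitudes.max()
    energy = min(peak / full_scale, 1.0)
    sparsity = min((peak - np.median(amplitudes)) / full_scale, 1.0)
    return energy, sparsity
```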
Table 2 summarizes how the spectral energy and spectral sparsity features are used by the trained neural network to classify the state of the monitored area.

State of monitored area       Spectral energy    Spectral sparsity
No user present               Low                Low
User present and moving       High               Low
User present and static       High               High

Table 2
The status of the monitored area classified by the neural network 216 may be used to determine the sleep status of the monitored user, or more generally, whether the user is moving or still in bed. The status of the monitored area determined by the classification of the neural network 216 performed may further be used to determine whether the vital statistics output by the spectral summation engine 215 should be trusted or ignored. For accurate vital statistics determination, heart rate and respiration rate may be identified as potentially accurate when the neural network 216 determines that the user is present and stationary (i.e., not having large body movements; however, movement occurs due to respiration and/or heartbeat). In some embodiments, the vital statistics output by the spectral summation engine 215 may be stored exclusively locally (e.g., to mitigate privacy concerns); in other embodiments, the vital statistics output may be transmitted to the cloud-based server system 170 for remote storage (instead of or in addition to such data being stored locally).
The neural network 216 may be initially trained using a large set of training data of amplitude and frequency feature vectors that have been appropriately labeled, mapping spectral energy and spectral sparsity to the corresponding ground-truth states of the monitored region. Alternatively, the neural network 216 may be initially trained using a large set of training data in which each included spectral energy and spectral sparsity pair has been appropriately labeled with the corresponding ground-truth state of the monitored region. The neural network may be a time-independent, fully connected neural network. In some embodiments, machine learning arrangements, classifiers, or other forms of artificial intelligence other than neural networks may be used.
In other embodiments, rather than using spectral energy values and spectral sparsity values as the features input to the neural network, a neural network, which may have additional front-end convolutional layers, may be trained to use the output of range gating filter 214 directly. That is, embodiments of the convolutional network may analyze the frequency and amplitude data output by range gating filter 214 to classify the user's state. The convolutional neural network may be trained offline, prior to use of the system by the end user, based on a set of spectral measurements that are mapped to ground-truth states of the monitored region.
The sleep state determined by the neural network 216 may be stored to the sleep data store 118 along with time data. When the neural network 216 indicates that the monitored user is present and stationary, the vital statistics output by the spectral summation engine 215 may be stored to a vital statistics database. Other vital statistics may be discarded or may be flagged to indicate that they are unlikely to be correct. The data stored to sleep data store 118 and the vital statistics database may be stored locally at device 101. In some embodiments, storage occurs only at device 101. Such implementations may help mitigate concerns about the transmission and remote storage of health-related data. In some embodiments, the monitored user may choose to have sleep data and vital statistics transmitted, stored, and analyzed externally, such as by cloud-based server system 170, via a network interface (e.g., wireless network interface 150). Storage by the cloud-based server system 170 may have significant benefits, such as the ability for users to access such data remotely, to allow access by medical providers, or to participate in research studies. The user may retain the ability to delete or otherwise remove data from the cloud-based server system 170 at any time.
In some embodiments, radar processing module 210 may be located wholly or partially at a location remote from device 101. Although the radar subsystem 205 may need to be local to the monitored user, the processing of the radar processing module 210 may be moved to the cloud-based server system 170. In other embodiments, a smart home device in local communication with device 101 (e.g., via a LAN or WLAN) may perform some or all of the processing of radar processing module 210. In some embodiments, a local communication protocol, such as one involving a mesh network, may be used to transmit raw waveform data to the local device that is to perform the processing. Such communication protocols may include Wi-Fi, Bluetooth, Thread, or IEEE 802.11 and 802.15.4 series communication protocols. Similar to the processing, the storage of sleep data and vital statistics may occur at the cloud-based server system 170 or at another smart home device in the home where the device 101 is located. In still other embodiments, radar processing module 210 may be combined with radar subsystem 205 into a single component or system of components.
Sleep data and vital statistics stored by sleep data store 118 may be used by sleep data compilation engine 119 to provide users with short-term and long-term trends related to their sleep patterns, their vital statistics, or both. For example, each morning, charts, statistics, and trends may be determined by sleep data compilation engine 119 based on data stored to sleep data store 118 and output by sleep data compilation engine 119 for presentation via display 140. A graph of sleep data from the previous night and one or more graphs indicating respiration rate and heart rate during the previous night may be presented. Similar charts, trends, and statistics may be output by sleep data compilation engine 119 for significantly longer periods of time, such as weeks, months, or even years. Other uses of sleep data and vital statistics are possible. For example, a medical professional may be notified if certain triggers regarding heart rate, respiration rate, and/or sleep patterns are satisfied. Additionally or alternatively, a notification may be output to the user indicating that the collected data is potentially of concern or indicative of a health problem. In some instances, specific sleep problems may be identified, such as sleep apnea. Synthesized speech may be used to output sleep data via speaker 155 (e.g., in response to the user waking up, in response to a spoken user command, or in response to the user providing input via a touch screen such as display 140). Such sleep data may also be represented graphically and/or textually on display 140.
The system 200A may additionally include a beam steering module 230. The beam steering module 230 may include a channel weighting engine 231, which may be implemented using software, firmware, and/or hardware similar to the components of the radar processing module 210. The beam steering module 230 is shown separate from the radar processing module 210 because it processes data received from the radar subsystem 205 to emphasize data received from a particular direction and de-emphasize data received from other directions. The beam steering module 230 may be implemented using the same hardware as the radar processing module 210. For example, beam steering module 230 may be a software process that modifies radar data received from radar subsystem 205 prior to application of the movement filter 211. The device 101 may be a tabletop device intended to be placed in a specific location, connected to a continuous power supply (e.g., a household power outlet), and interacted with via voice and/or touch screen. Thus, the radar subsystem 205 may remain directed at a portion of the surrounding environment for a significant period of time (e.g., hours, days, weeks, months). In general, the beam steering module 230 may be used to map the environment (e.g., room) in which the device 101 is located and direct the sensing direction of the radar subsystem 205 to the area within the field of view of the radar subsystem 205 where the user is most likely to be present.
Aiming at areas within the field of view of the radar subsystem 205 may help reduce the number of false positives and false negatives caused by movement of objects other than the user. Furthermore, the aiming may help compensate for the angle and position of the device 101 relative to where the user sleeps. (For example, the device 101 may be located on a bedside table that differs in height from the user's bed. Additionally or alternatively, the radar subsystem 205 of device 101 may not be directed at the location on the bed where the user sleeps.)
When it is determined that no user is present, such as based on the low spectral energy and low spectral sparsity of Table 2, an optimal beam steering process may be performed by the channel weighting engine 231 and the beam steering system 232. While no user is present, an analysis may be performed to determine which directional alignment of the radar subsystem 205 provides the least clutter.
Fig. 2B illustrates an embodiment of a sleep tracking system 200B ("system 200B") that may perform beam aiming. The beam aiming performed using beam steering module 230 may focus on radar reflections from areas where the user is likely to be present and ignore, or at least de-emphasize, radar reflections from interfering objects such as nearby walls or large objects.
Radar subsystem 240 may include multiple antennas to receive reflected radar radio waves. In some embodiments, there may be three antennas. The antennas may be aligned in an "L" pattern such that two antennas are separated horizontally and two antennas are separated vertically, with one of the antennas shared by both the horizontal and vertical arrangements. By analyzing the phase differences in the received radar signals, weighting may be applied to aim the received radar beam vertically and/or horizontally. In other embodiments, the antennas may be aligned in different patterns and/or beam aiming may be performed using a single receive antenna and multiple transmit antennas, or using both multiple transmit antennas and multiple receive antennas.
Vertical aiming may be performed to compensate for vertical tilting of the device into which system 200B is incorporated. For example, as discussed below with respect to fig. 3A, the surface of the contactless sleep tracking device 300 may be inclined with respect to where the user would normally sleep.
Horizontal aiming may be performed to compensate for the emitted radar being directed at an interfering object. For example, if the user's headboard is against a wall, the headboard and/or wall may occupy a significant portion of the field of view of the radar subsystem 120. Radar reflections from the headboard and/or wall are not useful for determining data about the user; thus, it may be beneficial to de-emphasize the reflections from the wall and/or headboard and emphasize the reflections obtained from directions away from the wall and/or headboard. Thus, the receive beam may be steered horizontally away from the wall and headboard by weighting the received radar signals.
In system 200B, there is a beam steering module 230 to perform processing on the raw chirped waterfall received from radar subsystem 205 before processing is performed by radar processing module 210. Accordingly, the beam steering module 230 may serve as a preprocessing module prior to analysis by the radar processing module 210 and may be used to emphasize areas where one or more users are expected to be present. The beam steering module 230 may be implemented using hardware, software, or firmware; accordingly, the beam steering module 230 may be implemented using the same one or more processors as the radar processing module 210.
The beam steering module 230 may include a channel weighting engine 231 and a beam steering system 232. Channel weighting engine 231 may be used to perform a training process to determine a series of weights that will be applied to the radar signals received from each antenna before they are mixed together. When the monitored area is determined to be empty, the channel weighting engine 231 may perform the training process. During such times, the intensity of signals received from large static objects (e.g., walls, headboard) may be analyzed, and weights may be set to direct the receive beam horizontally (and possibly also vertically) away from such objects. Thus, for a particular range of distances from the device (e.g., up to one meter), the amount of reflection from the static environment may be minimized by the channel weighting engine 231 steering the received radar beam. Such training may also be performed in the presence of a user. That is, the receive beam of the radar subsystem 205 may be directed to where motion is detected, or specifically to where vital signs of the user are present.
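One way such a training process could be sketched is shown below, assuming a simple search over candidate weight vectors against an empty-room recording; the data layout, candidate-search strategy, and function name are assumptions rather than details from this disclosure.

```python
import numpy as np

def choose_channel_weights(empty_room_channels, candidate_weights, max_range_bin):
    """Pick per-antenna weights that minimize static clutter within the range of interest.

    empty_room_channels: complex array of shape (num_antennas, num_chirps,
    samples_per_chirp) captured while the monitored area is empty.
    candidate_weights: iterable of complex weight vectors (one entry per
    antenna), each corresponding to a candidate steering direction.
    """
    best_weights, best_energy = None, np.inf
    for weights in candidate_weights:
        # Weight and mix the channels, then measure reflected energy up to the range gate.
        combined = np.tensordot(weights, empty_room_channels, axes=(0, 0))
        range_profile = np.fft.fft(combined, axis=-1)[:, :max_range_bin]
        clutter_energy = np.sum(np.abs(range_profile) ** 2)
        if clutter_energy < best_energy:
            best_weights, best_energy = weights, clutter_energy
    return best_weights
```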
The weights determined by channel weighting engine 231 may be used by beam steering system 232, which applies a weight to the received reflected radar signal of each antenna individually. The received signals from each antenna may be weighted and then mixed together for processing by radar processing module 210. Further details regarding how various embodiments of beam steering module 230 may be implemented are described in conjunction with fig. 14-17. The beam steering module 230 may be used in conjunction with any of the other embodiments detailed herein.
Fig. 2C illustrates an embodiment of a chirp timing diagram 200C of the frequency-modulated continuous wave (FMCW) radar radio waves output by a radar subsystem. The chirp timing diagram 200C is not drawn to scale. The radar subsystem 205 may generally output radar in the pattern of chirp timing diagram 200C. Chirp 250 represents a pulse of radio waves that sweeps in frequency from a low frequency to a high frequency. In other embodiments, individual chirps may sweep continuously from high frequency to low frequency; from low frequency to high frequency and back to low frequency; or from high frequency to low frequency and back to high frequency. In some embodiments, the low frequency is 58 GHz and the high frequency is 63.5 GHz. (For such frequencies, the radio waves may be referred to as millimeter waves.) In some embodiments, the frequency is between 57 and 64 GHz. The low and high frequencies may vary by embodiment. For example, the low and high frequencies may be between 45 GHz and 80 GHz. The frequencies may be selected, at least in part, to comply with government regulations. In some embodiments, each chirp includes a linear sweep from the low frequency to the high frequency (or vice versa). In other embodiments, an exponential or some other pattern may be used to sweep frequencies from low to high or from high to low.
The chirp 250, which may be representative of all of the chirps in the chirp timing diagram 200C, may have a chirp duration 252 of 128 μs. In other embodiments, the chirp duration 252 may be longer or shorter, such as between 50 μs and 1 ms. In some embodiments, a period of time may elapse before a subsequent chirp is sent out. The inter-chirp pause 256 may be 205.33 μs. In other embodiments, the inter-chirp pause 256 may be longer or shorter, such as between 10 μs and 1 ms. In the illustrated embodiment, the chirp period 254, which includes the chirp 250 and the inter-chirp pause 256, may be 333.33 μs. This duration varies based on the selected chirp duration 252 and inter-chirp pause 256.
A group of chirps output together, separated by inter-chirp pauses, may be referred to as a frame 258. Frame 258 may include twenty chirps. In other embodiments, the number of chirps in frame 258 may be greater or smaller, such as between 1 and 100. The number of chirps present in frame 258 may be determined based on the maximum amount of power permitted to be output over a given period of time. The FCC or another regulatory body may set the maximum amount of power that may be radiated into the environment. For example, there may be a duty cycle requirement that limits the duty cycle within any 33 ms period to less than 10%. In one particular example with twenty chirps per frame, each chirp may have a duration of 128 μs and a frame may be transmitted every 33.33 ms. The corresponding duty cycle is (20 chirps) × (0.128 ms) / (33.33 ms), which is approximately 7.7%. By limiting the number of chirps within frame 258 prior to the inter-frame pause, the total amount of power output can be limited. In some embodiments, the peak EIRP (effective isotropic radiated power) may be 13 dBm (20 mW) or less, such as 12.86 dBm (19.05 mW). In other embodiments, the peak EIRP is 15 dBm or less and the duty cycle is 15% or less. In some embodiments, the peak EIRP is 20 dBm or less. That is, the amount of power radiated by the radar subsystem may never exceed these values at any given time. Furthermore, the total power radiated over a period of time may be limited.
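As a rough, non-normative check of these figures, the duty cycle implied by a given chirp configuration can be computed directly. The sketch below assumes the example values above (a 128 μs chirp, twenty chirps per frame, and one frame every 33.33 ms); the function name and constants are illustrative only.

```python
# Sketch: duty-cycle check for one FMCW frame configuration.
# Values mirror the example above; they are illustrative, not normative.

CHIRP_DURATION_S = 128e-6      # 128 microsecond chirp
CHIRPS_PER_FRAME = 20          # chirps transmitted per frame
FRAME_PERIOD_S = 33.33e-3      # one frame every 33.33 ms (30 Hz)

def duty_cycle(chirp_duration_s: float, chirps_per_frame: int, frame_period_s: float) -> float:
    """Fraction of each frame period during which the radar is actively chirping."""
    return (chirp_duration_s * chirps_per_frame) / frame_period_s

if __name__ == "__main__":
    dc = duty_cycle(CHIRP_DURATION_S, CHIRPS_PER_FRAME, FRAME_PERIOD_S)
    print(f"duty cycle: {dc:.1%}")   # ~7.7%, below the example 10% cap
```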
Frames may be transmitted at a frequency of 30 Hz (every 33.33 ms), as indicated by time period 260. In other embodiments, the frequency may be higher or lower. The frame frequency may depend on the number of chirps per frame and the duration of the inter-frame pause 262. For example, the frequency may be between 1 Hz and 50 Hz. In some embodiments, chirps may be transmitted continuously, such that the radar subsystem outputs a continuous stream of chirps interspersed with inter-chirp pauses. A trade-off may be made between how continuously chirps are transmitted and chirp reflections are processed and the average power consumed by the device. The inter-frame pause 262 represents a period of time during which no chirp is output. In some embodiments, the inter-frame pause 262 is significantly longer than the duration of the frame 258. For example, the duration of frame 258 may be 6.66 ms (with a chirp period 254 of 333.33 μs and 20 chirps per frame). If 33.33 ms elapses between the starts of successive frames, the inter-frame pause 262 may be 26.66 ms. In other embodiments, the duration of the inter-frame pause 262 may be longer or shorter, such as between 15 ms and 40 ms.
In the illustrated embodiment of fig. 2C, a single frame 258 and the start of a subsequent frame are illustrated. It should be appreciated that each subsequent frame may be constructed similarly to frame 258. Furthermore, the transmission pattern of the radar subsystem may be fixed. That is, regardless of the presence of the user, the time of day, or other factors, chirps may be transmitted according to the chirp timing diagram 200C. Thus, in some embodiments, the radar subsystem always operates in a single transmission mode, regardless of the environmental conditions or the activities being monitored. A continuous string of frames similar to frame 258 may be transmitted whenever device 101 is powered on.
Fig. 3A illustrates an embodiment of a contactless sleep tracking device 300 ("device 300"). The device 300 may have a front surface that includes a front transparent screen 340 so that the display is visible. Such a display may be a touch screen. Surrounding the front transparent screen 340 may be an optically opaque area, referred to as a bezel 330, through which the radar subsystem 205 may have a view of the environment in front of the device 300. The cross-sectional view 400 is detailed in connection with fig. 4.
For the purposes of the following description, the terms vertical and horizontal generally describe directions relative to a bedroom, with vertical referring to a direction perpendicular to the floor and horizontal referring to a direction parallel to the floor. Because the radar subsystem (e.g., a BGT60 radar chip) is generally planar and is mounted generally parallel to the bezel 330 to achieve spatial compactness of the device as a whole, and because the antennas within the radar chip lie in the plane of the chip, the receive beam of the radar subsystem 120 may be directed in a direction 350 generally orthogonal to the bezel 330 when no beam aiming is performed. Because the bezel 330 is inclined away from a purely vertical direction (in some embodiments at approximately 25 degrees, to facilitate user interaction with the touch screen functionality of the transparent screen 340), the direction 350 may be directed upward from the horizontal direction by an off angle 351. Because the device 300 will typically be placed on a bedside platform (e.g., a bedside table) at approximately the same height as the top of the mattress on which the user will sleep, it may be beneficial for the receive beam of the radar subsystem 120 to be aimed in a horizontal direction 352 or approximately horizontally (e.g., between -5° and 5° from horizontal). Thus, vertical beam aiming may be used to compensate for the off angle 351 of the portion of the device 300 where the radar subsystem 120 is present.
Fig. 3B illustrates an exploded view of an embodiment of a contactless sleep tracking device 300. The apparatus 300 may include: a display assembly 301; a display housing 302; a main circuit board 303; a neck assembly 304; a speaker assembly 305; a bottom plate 306; a mesh network communication interface 307; a top daughter board 308; a button assembly 309; a radar component 310; a microphone assembly 311; a rocker switch bracket 312; a rocker switch plate 313; a rocker switch button 314; a Wi-Fi component 315; a power supply board 316; and, a power bracket assembly 317. Device 300 may represent an embodiment of how device 101 may be implemented.
Display assembly 301, display housing 302, neck assembly 304, and bottom plate 306 may collectively form a housing that houses all of the remaining components of device 300. Display assembly 301 may include an electronic display, which may be a touch screen, that presents information to a user. Display assembly 301 may include a display screen and a metal plate that may serve as a ground plane. The display assembly 301 may include a transparent portion, distal from the metal plate, that allows the various sensors to have a field of view in the general direction in which the display assembly 301 is facing. Display assembly 301 may include an outer surface made of glass or transparent plastic that serves as part of the housing of device 300.
Display housing 302 may be made of plastic or another rigid or semi-rigid material and serves as a housing for display assembly 301. Various components may be mounted on the display housing 302, such as: the main circuit board 303; the mesh network communication interface 307; the top daughter board 308; the button assembly 309; the radar component 310; and the microphone assembly 311. The following components may be connected to the main circuit board 303 using flat wire assemblies: the mesh network communication interface 307; the top daughter board 308; the radar component 310; and the microphone assembly 311. The display housing 302 may be attached to the display assembly 301 using an adhesive.
The mesh network communication interface 307 may include one or more antennas and may enable communication with a mesh network, such as a Thread-based mesh network. The Wi-Fi component 315 may be located at a distance from the mesh network communication interface 307 to reduce the likelihood of interference. Wi-Fi component 315 can enable communication with Wi-Fi based networks.
The radar component 310, which may include the radar subsystem 120 or the radar subsystem 205, may be positioned such that its RF transmitters and RF receivers are away from the metal plate of the display assembly 301 and a significant distance from the mesh network communication interface 307 and Wi-Fi component 315. These three components may be arranged in an approximately triangular shape to increase the distance between the components and reduce interference. For example, in device 300, a distance of at least 74 mm between Wi-Fi component 315 and radar component 310 may be maintained, and a distance of at least 98 mm between mesh network communication interface 307 and radar component 310 may be maintained. Additionally, the distance between radar component 310 and speaker 318 may be selected to minimize the effect of vibrations generated by speaker 318 on radar component 310. For example, for device 300, a distance of at least 79 mm between radar component 310 and speaker 318 may be maintained. Similarly, the distance between the microphones and radar component 310 may be selected to minimize any possible interference of the microphones with the received radar signals. The top daughter board 308 may include a plurality of microphones. For example, a distance of at least 12 mm may be maintained between radar component 310 and the nearest microphone of the top daughter board 308.
Other components may also be present. There may be a third microphone, microphone assembly 311, which may face rearward. Microphone assembly 311 may work in conjunction with the microphones of top daughter board 308 to isolate spoken commands from background noise. The power supply board 316 may convert power received from an AC power source to DC to power the components of the device 300. The power supply board 316 may be mounted within the device 300 using the power bracket assembly 317. The rocker switch bracket 312, rocker switch plate 313, and rocker switch button 314 may be used together to receive user input, such as up/down input. Such input may be used, for example, to adjust the volume of sound output through speaker 318. As another user input, the button assembly 309 may include a toggle button that can be actuated by a user. Such user input may be used to activate and deactivate all microphones, such as when the user desires privacy and/or does not desire the device 300 to respond to voice commands.
Fig. 4 illustrates a cross-sectional view of the device 300. The screen 401, which may be glass or plastic, may be attached to the display housing 302, such as by using an adhesive 403. The screen 401 and the metal housing 404 may be part of the display assembly 301. An air gap 406 may be present between radar component 310 and screen 401. Radar component 310 may be mounted such that the refractive index differences encountered by electromagnetic waves propagating outward from radar component 310 and through the front of device 300 cause a minimal amount of unwanted reflection. Distance 402 may be between 2 and 2.3 mm, which corresponds to slightly less than half of the 5 mm free-space wavelength at 60 GHz, which may be approximately the frequency of the RF signal output by the RF transmitter of radar component 310. By making the distance slightly less than (or greater than) half a wavelength, an anti-cavity is created. If a distance of exactly half a wavelength were used, constructive interference could occur; this is avoided to prevent unwanted reflected signals from being received. Alternatively, significantly larger or smaller air gaps may be used to ensure that constructive interference does not occur. The adhesive 403 may be considered to have little effect on radar reflection.
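The relationship between the radar frequency and the chosen air gap can be illustrated with a short calculation. The sketch below simply computes the free-space wavelength at 60 GHz and a gap slightly under half of it, consistent with the 2-2.3 mm range mentioned above; the 0.88 scaling factor is an arbitrary illustrative choice, not a value taken from this description.

```python
# Sketch: choosing an air gap slightly less than half the free-space wavelength
# so that the screen/radar spacing does not form a constructively interfering
# cavity at the radar frequency. Illustrative values only.

C = 299_792_458.0            # speed of light, m/s
RADAR_FREQ_HZ = 60e9         # approximate radar carrier frequency

wavelength_mm = C / RADAR_FREQ_HZ * 1e3       # ~5.0 mm at 60 GHz
half_wavelength_mm = wavelength_mm / 2.0      # ~2.5 mm
air_gap_mm = 0.88 * half_wavelength_mm        # slightly under half-wavelength, ~2.2 mm

print(f"wavelength: {wavelength_mm:.2f} mm, half: {half_wavelength_mm:.2f} mm, gap: {air_gap_mm:.2f} mm")
```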
Distance 405 may be at least 1 mm, such as 1.2 mm. The farther the radar component 310 is separated from the metal housing 404, the less interference the metal housing 404 causes to the RF transmitted and received by the radar component 310. The ground of radar component 310 may be connected to metal housing 404 such that radar component 310 uses metal housing 404 as a ground plane.
Fig. 5 illustrates an embodiment of a state machine 500 for determining when a person is sleeping. Based on the data output by radar processing module 112, sleep state detection engine 114 may use state machine 500 to determine whether a person is sleeping. It should be appreciated that in some embodiments, sleep state detection engine 114 is incorporated as part of the functionality of radar processing module 112 and does not exist as a separate module. State machine 500 may include five possible sleep states: an out-of-bed state 501; a getting-into-bed state 502; an in-bed motion state 503; an in-bed no-motion state 504; and a getting-out-of-bed state 505.
If there is no motion-indicative waveform data, this may indicate that the user is not in bed. Due to the vital signs of a user in a bed, the user can be expected to always exhibit at least small movement. Thus, if zero movement is observed, it may be determined that the user is in state 501. From state 501, the next possible state that can be determined is state 502. In state 502, the monitored user is getting into bed. Significant user motion may be sensed, such as according to table 2. This may indicate that the user is getting into bed and may cause the state to transition from state 501 to state 502.
Beginning at state 502, movement in the bed, such as due to the user rolling over, repositioning, moving pillows, sheets, and/or blankets, reading a book, etc., may continue to be detected. While such movement continues to be detected, state 502 may transition to state 503. Alternatively, if motion is detected and then zero motion is detected, this may indicate that the monitored user has gotten out of bed and entered state 505. If this occurs, state 502 may transition to state 505 and then back to state 501. In general, state 504 may be interpreted as the user being asleep and state 503 may be interpreted as the user being awake. In some embodiments, more than a threshold amount of time (or some other form of determination using threshold criteria based at least in part on time) must be spent in state 504 to classify the user as asleep, and more than a threshold amount of time (or some other form of determination using threshold criteria based at least in part on time) must be spent in state 503 to classify the user as awake. For example, if the user was previously determined to be asleep, movement in the bed lasting less than five seconds may be interpreted as movement while the user is still asleep. Thus, if the user transitions from state 504 to state 503, experiences some movement event, and then returns to state 504 in less than that duration, the user may be identified as experiencing a "sleep arousal," in which the user's sleep is disturbed but the user has not awakened. Such sleep arousals may be tracked along with, or as data maintained separately from, instances in which the user is judged to have fully awakened.
From state 503, the monitored user may be determined to be getting out of bed (state 505) or may become stationary (state 504). "Stationary" in state 504 means that the monitored user is not performing large movements but continues to exhibit small movement due to vital signs. In some embodiments, vital sign measurements are considered accurate and/or are stored, recorded, or otherwise used only when the status of the monitored user is determined to be state 504. The data collected during states 503 and 504 may be used to determine the general sleep patterns of the monitored user (e.g., how long the user takes to roll from one side to the other, level of sleep quality, when deep sleep occurs, when REM sleep occurs, etc.). After the user has been in state 504 for a predefined period of time, the user may be assumed to be asleep until the user exits state 504. When the user initially transitions to state 504, the user may be required to stay in state 504 for a certain amount of time, such as two to five minutes, to be considered asleep. If the user is in state 503 for at least a defined period of time, the user may be identified as awake. However, if the user enters state 503 from state 504 for less than the defined period of time and returns to state 504, the user may be identified as simply having moved in their sleep and having remained asleep.
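One way to picture the transitions described above is as a small state machine driven by a per-interval motion classification. The sketch below is a simplified, hypothetical rendering: the state names map onto states 501-505, but the 'LARGE'/'VITAL'/'NONE' motion labels and the transition logic are illustrative assumptions rather than the exact classification used by sleep state detection engine 114.

```python
# Sketch: simplified five-state sleep state machine driven by motion labels.
# Motion labels, transitions, and thresholds are illustrative assumptions.
from enum import Enum, auto

class State(Enum):
    OUT_OF_BED = auto()      # state 501
    GETTING_IN_BED = auto()  # state 502
    IN_BED_MOTION = auto()   # state 503
    IN_BED_STILL = auto()    # state 504
    GETTING_OUT = auto()     # state 505

def step(state: State, motion: str) -> State:
    """motion is one of 'LARGE' (gross movement), 'VITAL' (vital signs only), 'NONE'."""
    if state == State.OUT_OF_BED:
        return State.GETTING_IN_BED if motion == "LARGE" else State.OUT_OF_BED
    if state in (State.GETTING_IN_BED, State.IN_BED_MOTION):
        if motion == "LARGE":
            return State.IN_BED_MOTION
        if motion == "VITAL":
            return State.IN_BED_STILL
        return State.GETTING_OUT           # no motion at all: user likely left the bed
    if state == State.IN_BED_STILL:
        if motion == "LARGE":
            return State.IN_BED_MOTION
        return State.IN_BED_STILL if motion == "VITAL" else State.GETTING_OUT
    return State.OUT_OF_BED                # GETTING_OUT settles back to OUT_OF_BED

# Classifying the user as asleep would additionally require a dwell time in
# IN_BED_STILL (threshold omitted here for brevity).
state = State.OUT_OF_BED
for motion in ["LARGE", "LARGE", "VITAL", "VITAL", "LARGE", "VITAL"]:
    state = step(state, motion)
print(state)   # State.IN_BED_STILL
```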
Fig. 6 illustrates a timeline of detected sleep states and environmental disturbances. Sleep timeline 600 illustrates when a user is determined to be awake or asleep, such as in accordance with state machine 500. Audio timeline 610 illustrates the times at which environmental audio events were detected by device 101. Light timeline 620 illustrates the times at which device 101 detected ambient light events. In the example of fig. 6, device 101 monitors both audio and light. In other embodiments, device 101 may monitor audio or light. In still other embodiments, one or more additional environmental conditions may be monitored, such as temperature or the movement of other living beings (e.g., using a PIR sensor). A sound event may be detected if: 1) the amount of sound detected in the environment exceeds a fixed sound level threshold (or some other form of determination using a threshold criterion based at least in part on sound); or 2) the ambient sound level increases beyond a defined threshold amount or percentage (or some other form of determination using a threshold criterion based at least in part on sound). A light event may be detected if: 1) the detected amount of light exceeds a fixed light threshold; or 2) the ambient lighting level increases beyond a defined threshold amount or percentage (or some other form of determination using a threshold criterion based at least in part on the lighting level). A similar analysis can be performed for temperature. For movement monitoring, an event may be recorded if another animate object is detected moving in the room.
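A minimal sketch of this kind of threshold logic is shown below; the absolute and relative threshold values are placeholders, and a real implementation might first smooth or debounce the sensor readings.

```python
# Sketch: flag an environmental event when a reading exceeds an absolute
# threshold or rises by more than a relative amount over its recent baseline.
# Threshold values are illustrative placeholders.

def is_environment_event(current: float, baseline: float,
                         abs_threshold: float, rel_increase: float = 0.5) -> bool:
    if current >= abs_threshold:                      # e.g., sound above a fixed level
        return True
    if baseline > 0 and (current - baseline) / baseline >= rel_increase:
        return True                                   # e.g., level up 50% over baseline
    return False

# Example: ambient sound jumps from a 30 dB baseline to 55 dB (fixed threshold 60 dB)
print(is_environment_event(current=55.0, baseline=30.0, abs_threshold=60.0))  # True (relative rise)
```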
Each time the user transitions from asleep to awake, the time may be recorded. The time of each detected audio event and each detected light event may also be recorded and mapped to the environmental event. To determine whether a sleep event (e.g., a transition from asleep to awake) may correspond to an environmental event, if the environmental event occurs within a time window around the sleep event, the sleep event may be interpreted as having been caused by the environmental event. In some embodiments, the time window precedes the sleep event by a fixed amount of time, such as five seconds. In some embodiments, the time window additionally lags the sleep event by a fixed amount of time, which may be shorter than the preceding portion, such as two seconds. For an environmental event to have caused a user to wake, the environmental event must logically have occurred before the user woke. However, due to delays in the detection, processing, and/or analysis of audio, light, temperature, or other factors, it may be accurate to include a trailing period after a sleep event during which an environmental event, if detected, is still held "responsible" for waking the user. As an example, if a significant change in temperature is detected shortly after the user wakes, it is likely that some time was required to detect the temperature shift and that the user was in fact awakened by the temperature change.
In the example of fig. 6, during time period 601, a transition of the user from asleep to awake is detected. During time period 601, an audio event is detected but no light event is detected. Because the audio event is within the defined period of time before and after the sleep event, the user's waking is attributed to the detected audio event. During time period 602, a light event is detected but no audio event is detected. Because the light event is within the defined period of time before and after the sleep event, the user's waking is attributed to the detected light event. During time period 603, a light event and an audio event are detected. Because the light event and the audio event are within the defined period of time before and after the sleep event, the user's waking is attributed to both the detected light and audio events. During time period 604, no environmental event is detected. Because there are no environmental events within the defined period of time before and after the sleep event, the user's waking is not attributed to any environmental event.
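The attribution step itself reduces to checking whether any logged environmental event falls within an asymmetric window around each wake time. The sketch below assumes the example values above (a five-second lead and a two-second lag); function and variable names are illustrative.

```python
# Sketch: attribute a wake event to environmental events occurring within an
# asymmetric time window around it (lead/lag values follow the example above).

def attribute_wake(wake_time: float, events: list,
                   lead_s: float = 5.0, lag_s: float = 2.0) -> list:
    """events is a list of (timestamp, kind) pairs, e.g. (t, 'audio') or (t, 'light')."""
    causes = [kind for t, kind in events
              if wake_time - lead_s <= t <= wake_time + lag_s]
    return causes or ["unattributed"]

events = [(100.0, "audio"), (250.0, "light")]
print(attribute_wake(103.0, events))   # ['audio'] -- event occurred 3 s before waking
print(attribute_wake(400.0, events))   # ['unattributed']
```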
The data indicating the timeline may be used to provide a nightly sleep report to the user. For example, when requested by the user or at a defined time in the morning, a graphical or text report may be presented indicating: 1) when the user woke during the night; 2) which environmental events were detected; and 3) which wake instances were attributed to environmental events.
If multiple users are present in the same bed, the movement of a first user may be an environmental factor that wakes a second user. Thus, if the first user is detected in state 503 and it is determined that the other user woke within the time window around when the first user entered state 503, the first user may be identified as the environmental factor that woke the second user.
Fig. 7 illustrates an embodiment of raw waveform data, or raw chirp waterfall data, in which movement due to the vital signs of a user may be observed. Embodiment 700 represents raw waveform data (which may also be referred to as raw chirp waterfall data) output by radar subsystem 205. Along the x-axis, the chirp index is indicated. The chirp index is an identifier of a particular chirp, with the corresponding samples arranged along the y-axis and the RF intensity indicated by shading. The shading scale represents normalized values that may be output by the ADC of the radar subsystem 205. Along the y-axis, the sample index is indicated. For each chirp indicated along the x-axis, a plurality of samples are measured at time intervals. For example, sixty-four samples may be measured for each chirp. The RF intensity of the reflected radio waves can be measured at each sample index.
Embodiment 700 shows data captured by device 101 for a monitored sleeping user who is generally stationary and located somewhat less than one meter away. In embodiment 700, a slight "wave" is visible in the raw waveform data over time due to the rise and fall of the user's chest and/or abdomen, which affects the reflection of the radio waves. The frequency of these relatively slow movements may be measured over time to determine the frequency of the user's vital signs. In the illustrated embodiment, the visible wave is caused by the user's breathing pattern of about 13.5 breaths per minute.
In addition to visible waves, a significant amount of RF intensity is due to reflections off static objects. For example, at sample index 64, the RF intensity is still high regardless of the chirp index, which may be due to reflections from large objects such as walls. Such static reflections may be filtered out by the moving filter 211 prior to other signal processing.
The various methods may be performed using the systems, devices, and arrangements detailed with respect to fig. 1-7. Fig. 8 illustrates an embodiment of a method 800 for performing contactless sleep detection and interference attribution. Method 800 may be performed using system 100, system 200A, device 300, or some other form of system that may transmit, receive, and analyze radar, such as FMCW radar.
At block 805, radio waves are transmitted. The emitted radio waves may be continuous wave radars, such as FMCW. The radio waves transmitted at block 805 may be transmitted according to the FMCW radar scheme of fig. 2C. The transmitted radio waves may be transmitted by RF transmitter 206 of radar subsystem 205. At block 810, reflections of the radio waves are received, such as by RF receiver 207 of radar subsystem 205. The reflection received at block 810 may be reflected by a moving object (e.g., a person with heartbeats and breathing) and a stationary object. There may be a phase shift in the radio waves reflected by the moving object. For each FMCW chirp transmitted at block 805, multiple samples of reflected RF intensity, such as 64 samples, may be measured. In other embodiments, a fewer or greater number of samples may be measured.
At block 815, raw waveform data, which may also be referred to as raw chirped waterfall data, may be generated based on the received reflected radio waves. The mixed signal generated by mixing the reflected radio wave with the transmitted radio wave may indicate the distance and the phase shift. For each of these samples, the intensity and phase shift may be measured. Over time, a window of raw waveform data may be created and stored in a buffer for analysis. Referring to fig. 2, block 815 may be performed by radar processing module 210.
At block 820, samples of the buffered waveform data may be compared. Waveform data indicative of static objects (i.e., zero phase shift), which may be defined as objects having movement below a particular frequency (or below a threshold phase shift, or some other form of determination using a threshold criterion based at least in part on phase shift), may be filtered out and discarded in order to preserve, for further analysis, waveform data indicative of movement above the particular frequency. Block 820 may be performed prior to block 825 to remove most of the waveform data attributable to static objects and make the data attributable to the user's movement more readily detectable.
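Conceptually, removing static reflections can be as simple as subtracting, for each range sample, its mean across the buffered chirps, so that only components that change over slow time remain. The sketch below uses that mean-subtraction approach purely as a stand-in for the filtering described here; the actual filter design (e.g., of the moving filter 211) is not specified by this step.

```python
# Sketch: crude static-clutter removal on a buffered chirp "waterfall".
# Rows = chirps (slow time), columns = range samples. Subtracting the per-column
# mean removes reflections that are identical across chirps (walls, furniture),
# leaving motion-induced variation such as chest movement. Mean-subtraction is
# only a stand-in for the moving filter described above.
import numpy as np

def remove_static(waterfall: np.ndarray) -> np.ndarray:
    return waterfall - waterfall.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
static = np.tile(rng.normal(size=64), (200, 1))           # reflections constant over 200 chirps
breathing = 0.2 * np.sin(np.linspace(0, 8 * np.pi, 200))[:, None] * (np.arange(64) == 20)
filtered = remove_static(static + breathing)
print(np.abs(filtered).max(axis=0).argmax())              # 20: only the moving bin survives
```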
At block 825, the motion-indicative waveform data may be analyzed. This analysis may be performed to identify and separate data attributable to user motion, heart rate, and respiration rate. Details regarding how the motion-indicative waveform data is analyzed are provided in connection with the components of radar processing module 210. That is, processing using the moving filter 211, frequency multiplier 212, range-vital sign transformation engine 213, range gating filter 214, spectral summation engine 215, and neural network 216 may be performed to generate data that is analyzed to determine the sleep state of the user and possibly the vital statistics of the user.
At block 830, the sleep state of the user may be determined, such as using the state machine of fig. 5. Based on the output from the radar processing module, the sleep state detection engine may determine whether the user is likely asleep (e.g., no significant motion in the bed, but vital statistics are detected) or awake (e.g., major motion is detected in the bed). At block 835, it may be determined that the user has entered a sleep state based on the analyzed radar data.
When the user is in a sleep state, one or more environmental sensors may be active and collect data provided to the processing system. Environmental conditions may be monitored to determine whether an environmental event has occurred. Such environmental conditions may include one or more of the following: light, sound, temperature, smell, movement, etc. These environmental conditions may be monitored continuously or periodically while the user is asleep (e.g., after block 830). In some embodiments, the device performing method 800 may continuously perform such monitoring regardless of whether the user is detected as present (and asleep) or absent.
At block 840, it may be determined that an environmental event has occurred based on environmental data obtained from one or more environmental sensors. At block 840, data from each environmental sensor device may be monitored to determine whether: 1) the environmental condition exceeds a fixed, defined threshold (or some other form of determination using a threshold criterion); and/or 2) the environmental condition has increased by at least a predetermined amount or percentage. If either of these occurs, an environmental event is identified as having occurred. An indication of the environmental event may be stored in association with a timestamp. At block 845, while the environmental conditions are monitored, the user is determined to transition from asleep to awake. Referring to state machine 500, this determination may involve the state machine being in state 503 for at least a predetermined amount of time.
At block 850, based on the environmental event occurring within the predefined period of time around when the user woke, it is determined that the user's transition from the sleep state to the awake state (determined at block 845) was due to the environmental event identified at block 840. The predefined period of time may be before the time the user wakes, or may span from before the user wakes until after the user is identified as awake. The amount of time before the user wakes may be longer than the trailing portion of the period. In some embodiments, the time period varies in duration based on the particular type of environmental event (e.g., a temperature event may involve a longer trailing time than a sound event). When an environmental event results in waking the user, data may be stored indicating the environmental event that caused the user to wake, the time at which the event occurred, and/or how many times that type of environmental event has caused the user to wake over a period of time (e.g., the last week, the last month).
At block 855, an indication may be output to the user that the user has awakened one or more times due to one or more environmental events. This output may take the form of a report presented to the user. The report may be for a particular night or some other period of time, such as the previous week. If the number of times a particular type of environmental event has awakened the user exceeds a defined threshold (or some other form of determination using a threshold criterion), a suggestion may be presented to the user to remedy that type of environmental event. In some embodiments, the user may receive a verbal report via synthesized speech once the user is awake. In some embodiments, reports about the user's sleep may be emailed to the user periodically (e.g., once a week).
While the previous embodiments detailed with respect to the figures focus primarily on monitoring sleep of a single user, these same concepts may be applied to multiple users that are sleeping in close proximity to each other (e.g., on the same bed or on two beds with little space therebetween). Fig. 9 illustrates an embodiment 900 of a contactless sleep tracking device that monitors multiple users. Contactless sleep tracking device 901 ("device 901") may represent an embodiment of contactless sleep tracking device 101 of fig. 1 and/or device 300 configured to monitor multiple users over the same period of time. In embodiment 900, two users are present in a bed and their sleep is monitored by device 901. There are different distances between user 910 and device 901 (distance 911) and between user 920 and device 901 (distance 921).
While separate sleep data may be created and stored by device 901 for each user, user 910 and user 920 may not be in the bed for exactly the same period of time. For example, user 920 may get into bed earlier or later than user 910; similarly, user 920 may get up earlier or later in the morning than user 910. As a further example, user 910 or user 920 may temporarily leave the bed during the night (e.g., to use the restroom) and then return to the bed. Thus, as one user leaves the bed, the device 901 may continue to monitor the sleep of the other user remaining in the bed. The device 901 may track which user has gotten out of bed to ensure that sleep data is still attributed to the correct user despite one or more instances of getting out of and back into bed.
Fig. 10 illustrates an embodiment of a sleep tracking system 1000 that may track multiple users. The system 1000 functions similarly to the system 200A of fig. 2A. More specifically, the radar subsystem 205 may be unchanged, and the beam steering module 230 may be unchanged. For radar processing module 1010, several components may be unchanged from radar processing module 210: the moving filter 211, the frequency multiplier 212, the range-vital sign transformation engine 213, and the range gating filter 214 may function as detailed with respect to system 200A.
The radar processing module 1010 may additionally include a multi-target splitter 1011. The multi-target splitter 1011 may be used to: identify the number of users present; and map data received from the radar subsystem (which may have been processed using the moving filter 211, the frequency multiplier 212, and/or the range-vital sign transformation engine 213) to the associated user.
As an initial step, the multi-target splitter 1011 may compress the multi-dimensional data to fewer dimensions. The data received by the multi-target splitter 1011 may have: a first dimension indicative of the frequency of movement; a second dimension indicative of the distance from device 901; and/or a third dimension indicative of the intensity (amplitude) of movement. One or more of these dimensions may be eliminated to aid the clustering process performed by the multi-target splitter 1011. For example, the movement-frequency dimension may be removed by using, for each distance, the frequency sample with the largest amplitude. After such compression, the data may have two dimensions: distance and amplitude. The data may then be clustered.
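A sketch of this dimension-collapsing step, assuming the data arrives as a (range bin × frequency bin) amplitude array, might look as follows; the array shape and names are illustrative assumptions.

```python
# Sketch: collapse the movement-frequency dimension by keeping, for each range
# bin, the amplitude of its strongest frequency component. The resulting
# (distance, amplitude) pairs are what the clustering step operates on.
import numpy as np

def collapse_frequency(range_freq_amplitude: np.ndarray) -> np.ndarray:
    """range_freq_amplitude: shape (num_range_bins, num_freq_bins) -> shape (num_range_bins,)"""
    return range_freq_amplitude.max(axis=1)

spectrum = np.abs(np.random.default_rng(1).normal(size=(64, 32)))
per_range_amplitude = collapse_frequency(spectrum)   # one amplitude per distance bin
print(per_range_amplitude.shape)                     # (64,)
```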
Note that distance may be represented by a single value in the multi-dimensional data. Thus, regardless of direction, only the distance from the system 1000 is tracked. As part of an installation procedure, the user may be instructed to place the system 1000 on one side of the bed, such as on a bedside table. For two (or more) users lying side by side, the system 1000 at the bedside will detect their respective movements at different distances. In other embodiments, rather than the data received by the multi-target splitter 1011 having three dimensions, there may be four or five dimensions to capture the direction or exact position of movement (e.g., 3D coordinates). By using such additional dimensions related to the location of movement, the user is free to place the system 1000 in any orientation relative to the user.
The multi-target splitter 1011 may then be tasked with performing an unsupervised clustering process. The total number of users present is unknown, and therefore the process may need to determine the number of clusters. As previously discussed, while the users may have previously provided data indicating that two users desire their sleep to be monitored, the users may get into and/or out of bed at different times. Thus, at any given time, it may be necessary to analyze the data to identify whether there are one, two, or more than two users. In some embodiments, clustering may be limited to outputting a maximum of two clusters (i.e., a limit may be imposed such that no more than two people are monitored at a time).
To perform unsupervised clustering, the multi-target splitter 1011 may apply a density-based clustering algorithm to the received data (which has been reduced by one or more dimensions). Density-based clustering may be performed using the density-based spatial clustering of applications with noise (DBSCAN) algorithm. Given a collection of points in space, DBSCAN groups points that are closely packed together (e.g., that have many nearby neighbors). It should be appreciated that other forms of clustering besides the DBSCAN algorithm may be used. The multi-target splitter 1011 may initially configure parameters of the DBSCAN algorithm, such as the minimum number of points required to form a dense region and the size of the point neighborhood (typically represented by epsilon).
The output of the clustering process may be an indication of the number of clusters and the center position of each cluster (or the boundaries of the clusters). To separate clusters, the midpoint location between two clusters may be located by the multi-target splitter 1011. The midpoint may be calculated as the exact midpoint between the locations output by the clustering algorithm. If there are three clusters (e.g., indicating three users), two midpoints may be output, each midpoint being a midpoint between two adjacent users.
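As a sketch of how the cluster count, cluster centers, and midpoint might be derived with an off-the-shelf DBSCAN implementation, consider the following; the eps/min_samples values, the amplitude gate, and the use of scikit-learn are illustrative assumptions rather than parameters from this description.

```python
# Sketch: unsupervised clustering of motion energy along the distance axis.
# Uses scikit-learn's DBSCAN; eps/min_samples and the amplitude gate are
# illustrative values only.
import numpy as np
from sklearn.cluster import DBSCAN

def find_users(distances_m: np.ndarray, amplitudes: np.ndarray,
               amp_gate: float = 0.1, eps_m: float = 0.15, min_samples: int = 3):
    active = distances_m[amplitudes > amp_gate].reshape(-1, 1)   # keep bins with real motion
    if active.size == 0:
        return 0, [], None
    labels = DBSCAN(eps=eps_m, min_samples=min_samples).fit_predict(active)
    centers = sorted(active[labels == k].mean() for k in set(labels) if k != -1)
    midpoint = (centers[0] + centers[1]) / 2 if len(centers) == 2 else None
    return len(centers), centers, midpoint

d = np.linspace(0.3, 2.0, 64)                   # distance of each range bin
a = np.exp(-((d - 0.7) ** 2) / 0.002) + np.exp(-((d - 1.3) ** 2) / 0.002)  # two motion peaks
print(find_users(d, a))                          # roughly (2, [0.70, 1.30], 1.00)
```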
Based on the location of the midpoint, a data set may be created for each user. Thus, if there are two clusters (indicating two users), the data received by the multi-target splitter 1011 may be split into two data sets based on the location of the midpoint. Each data set may then be analyzed separately. Depending on the number of users (equal to the number of clusters), different instances of the spectral summation engine 1015 (e.g., 1015-1, 1015-2) may be used to independently analyze the portion of the data output by the range gating filter 214 that is mapped to each user. Each of the spectral summation engines 1015 may function as detailed with respect to spectral summation engine 215 for the respective portion of the data it receives from the range gating filter 214.
For example, the spectral summing engine 1015-1 may analyze data mapped to a first user determined by the multi-target splitter 1011 and the spectral summing engine 1015-2 may analyze data mapped to a second user determined by the multi-target splitter 1011. In some embodiments, the data sent to the separate spectral summation engines 1015 do not overlap. That is, the data from the range gating filter 214 is split into two data sets (for two users) and each data set is sent to one of the spectral summation engines 1015. If there are more than two users, there may be a matching number of spectral summation engines 1015 to process each dataset and the number of datasets created matches the number of users detected.
Separate instances of the neural network 1016 (or some other form of analysis engine) may be implemented (e.g., neural networks 1016-1, 1016-2) for each user (and thus, for each cluster and instance of the spectral summation engine 1015). Each instance of the neural network 1016 may function similarly to the neural network 216 for each data set received from a corresponding spectral summation engine of the spectral summation engines 1015. The output of each neural network 1016 may be output to a corresponding sleep state detection engine of the sleep state detection engine 1018 (e.g., sleep state detection engine 1018-1, sleep state detection engine 1018-2). Each instance of sleep state detection engine 1018 may function similarly to sleep state detection engine 114. Thus, for each user, their sleep states are monitored independently based on data determined by the multi-target splitter 1011 to correspond to that user. When multiple users are identified, a corresponding instance of a spectral summation engine, neural network, and sleep state detection engine may be instantiated for each detected user.
The output of each sleep state detection engine 1018 may be stored and used, similar to what is described in detail with respect to fig. 1. That is, the sleep data of each user may be mapped to the appropriate user. Thus, the times that each user is asleep and awake may be stored, so that sleep reports (such as similar to sleep timeline 600) may be generated separately for each user.
Further, for each user, the association between the awake event and the environmental event may be performed separately. Referring to fig. 6, a similar analysis (based on data mapped to users by the multi-target splitter 1011) may be performed for each individual user. For example, an audio event may result in waking up a first user, but a second user may be asleep during the audio event (and thus the audio event will result in waking up the first user but not the second user).
As an additional environmental event that may be monitored (e.g., in addition to light, sound, temperature, etc.), the movement of another user may be an environmental factor that causes a user to wake. As an example, the radar sensor may be used to sense movement of the first user (e.g., rolling over while asleep, waking up, or getting out of bed). If the second user wakes within a sufficiently close period of time to the first user's movement, the first user may be assigned "blame" for waking the second user. In some embodiments, what is sufficiently close may be a defined period of time. In this example, the second user's sleep report may indicate that the first user caused the second user to wake at a particular time. It is even possible to link environmental events together. For example, an audio event may cause the first user to wake. The movement of the first user may then cause the second user to wake. If both events occur within the defined period of time (the second user being awakened by the movement of the first user following the audio event), the second user's awakening may be attributed to the audio event, or to the combination of the audio event and the movement of the first user.
Fig. 11A and 11B illustrate graphs of movement detected at various distances, such as movement detected by device 901. Fig. 11A is a graphical representation of data that may be received by the multi-target splitter 1011. A first dimension (e.g., the y-axis) indicates the distance from the device at which movement is detected. A second dimension (e.g., the x-axis) may indicate the frequency of the detected movement, and a third dimension (e.g., the z-axis) may indicate the magnitude of the movement. In the illustrated graph, the amplitude (along the z-axis) is illustrated using shading. For example, a heartbeat may have a relatively high frequency of movement but a small amplitude, while, at about the same distance, breathing may have a lower frequency but a greater amplitude (such as because portions of the user's chest move more due to breathing than due to the pumping of blood).
As shown in graph 1110A of fig. 11A, there is a single data cluster. This arrangement indicates that a single user is present within the detection range of the device 901. In graph 1110B of fig. 11B, however, two clusters exist at different distances. The frequency and amplitude of the data in each cluster are similar, indicating, for example, two different users breathing. Thus, in graph 1110A, a single user is present within the detection range of device 901, while in graph 1110B, two users are present within the detection range of device 901.
Fig. 12 illustrates a graph 1200 of detected movement divided into multiple targets. Once the multi-target splitter has identified that two clusters are present and determined the locations of the two clusters, a midpoint distance 1201 between the two locations may be determined. (As previously mentioned, the dimensionality of the radar data may have been reduced; thus, while graph 1200 indicates three dimensions, the multi-target splitter may have eliminated one or more dimensions, such as the movement-frequency dimension. The midpoint may therefore be represented as a point on the y-axis.)
In graph 1200, the motion represented in region 1210 may be attributed to a first user who is farther from the device, and the motion represented in region 1220 may be attributed to a second user who is closer to the device. By monitoring these regions separately, two users can be monitored. If one of the clusters disappears, this may indicate that the corresponding user has gotten out of bed. The remaining cluster may be attributed to one of the users based on which side of the midpoint distance 1201 the cluster is on. As long as a single cluster remains present, it may be attributed to the user to whom the cluster was originally attributed based on the side of the midpoint distance 1201 on which the cluster was detected. This may remain true even if the cluster migrates across the midpoint distance 1201. For example, consider a situation in which the first user gets out of bed. After the first user has left, the second user may remain asleep and may roll over to the center of the bed or even to the other side of the bed. The device will continue tracking the second user's sleep regardless of where on the bed the second user moves after the first user leaves.
Various methods may be performed to independently track the sleep of multiple users on one bed or adjacent beds. Fig. 13 illustrates an embodiment of a method 1300 for performing sleep tracking for multiple users. In general, method 1300 may be used to monitor two users separately. However, the principles of method 1300 may be useful for monitoring more than two users. The method 1300 may be performed using the radar processing module 1010, and the radar processing module 1010 may be incorporated as part of the system 200A or 200B. Further, such a system may be integrated as part of a single device, such as device 300 of fig. 3A and 3B.
At block 1305, waveform data may be received and analyzed after a certain amount of processing has been performed on the raw chirp waterfall data received from the radar subsystem. For example, referring to system 200A, radar subsystem 205 may output raw chirp waterfall data that is processed using the moving filter 211, the frequency multiplier 212, the range-vital sign transformation engine 213, and the range gating filter 214. The processed waveform or waterfall data output by range gating filter 214 may be represented graphically, similar to the data of graphs 1110A and 1110B. That is, the data output by range gating filter 214 may have a first dimension representing distance, a second dimension representing frequency, and a third dimension representing amplitude. To separate data for multiple users, one or more dimensions may initially be removed from the data. At block 1305, one or more dimensions of the data may be removed, such as by using, for each distance (e.g., each distance range), the maximum or average amplitude across movement frequencies. By performing such a conversion, the frequency dimension may be eliminated, and the data may then have only distance and amplitude components for analysis at block 1310.
At block 1310, a clustering process may be performed. Clustering may be performed on the data whose dimensionality was reduced at block 1305. The clustering of block 1310 may be understood as an unsupervised clustering problem; a key aspect is that the number of users present is unknown. For example, while two users may typically sleep in the bed, only a single user may be present on any given night, or the users may get into and out of bed at different times. A density-based clustering approach, such as the DBSCAN algorithm, may be employed; alternatively, some other algorithm, such as k-means clustering performed with progressively increasing counts of hypothesized clusters, may be used. Either approach may determine the number of clusters present and the location of each cluster (e.g., the center point of each cluster or the location of the cluster along the distance axis).
At block 1315, based on the number of clusters determined at block 1310, the number of users present may be determined. The number of users may correspond to the number of clusters identified at block 1310. Thus, if two clusters are identified, two users may be identified as present. Although method 1300 focuses on one or two users being present, more than two users may be identified.
If two users are identified as present, the method 1300 may proceed to block 1320. At block 1320, a midpoint between the two clusters may be determined. The midpoint may be the average of the two cluster locations identified at block 1310. In other embodiments, some other method for determining a position between the two clusters may be used. The midpoint determined at block 1320 may be used to determine which user the data is attributed to.
The processed multi-dimensional radar waterfall data or waveform data from block 1305 may then be assigned or mapped to each user, respectively, at block 1325. In some embodiments, each portion of the processed multi-dimensional radar waterfall data is mapped to one user but not both. In other embodiments, there may be at least some overlap in the data mapped to each user. Thus, although processed data with reduced dimensionality may be used to determine the midpoint, further processing may be performed using the multi-dimensional radar waterfall data from block 1305. At block 1325, a first portion of the multi-dimensional data is assigned to the first user and a second portion of the multi-dimensional data is assigned to the second user based on the midpoint. For example, processed multi-dimensional radar waterfall data corresponding to distances greater than the midpoint may be assigned to the second user, and processed multi-dimensional radar waterfall data corresponding to distances less than the midpoint may be assigned to the first user.
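Mapping the full-dimensional waterfall data onto the two users then amounts to splitting its range bins at the midpoint. A sketch, assuming range is the leading array axis and the distance of each range bin is known, is shown below; names and shapes are illustrative.

```python
# Sketch: split processed waterfall data into per-user datasets at the midpoint.
# Assumes axis 0 of `waterfall` is the range (distance) axis and `bin_distances_m`
# gives each range bin's distance from the device; the nearer user is "user 1".
import numpy as np

def split_by_midpoint(waterfall: np.ndarray, bin_distances_m: np.ndarray, midpoint_m: float):
    near_mask = bin_distances_m <= midpoint_m
    user1_data = waterfall[near_mask]        # bins nearer than the midpoint
    user2_data = waterfall[~near_mask]       # bins farther than the midpoint
    return user1_data, user2_data

waterfall = np.random.default_rng(2).normal(size=(64, 32))   # (range bins, frequency bins)
distances = np.linspace(0.3, 2.0, 64)
u1, u2 = split_by_midpoint(waterfall, distances, midpoint_m=1.0)
print(u1.shape[0] + u2.shape[0])   # 64: every bin is assigned to exactly one user
```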
At block 1330, an independent analysis of the multidimensional data mapped to each user may be performed. An independent analysis may be performed to independently determine the sleep state of each user and to determine any environmental factors that have awakened the user (e.g., as detailed with respect to fig. 6). Referring to fig. 10, separate instances of components, such as a spectral summation engine 1015 and a neural network 1016, may be used to analyze the data mapped to each user. Sleep state detection may then be performed independently for each user based on the mapped data.
If, after determining that two users are present, it is determined at block 1315 during a subsequent iteration of method 1300 that a single user is present, method 1300 may proceed to block 1335. This may occur during the day, or temporarily at night when two users are sleeping in the same bed and one user gets up. It may be desirable to determine which user is still in bed so that future sleep events are attributed to the user who is still in bed.
At block 1335, it may be determined where the single cluster is located relative to the previously determined midpoint from block 1320. For example, the most recently determined midpoint from block 1320, or an average of some number of recent midpoints, may be used. The user remaining in the bed may be identified based on whether the single cluster is closer to the device than the midpoint or farther from the device than the midpoint; the cluster is attributed to the user who was previously identified, while method 1300 detected two users, on that same side of the midpoint. After this determination, the sleep data will be attributed to that same individual user regardless of where the user is located relative to the midpoint, even if the user rolls over or otherwise moves in the bed. That is, when one user exits, the midpoint is used to determine which user remains; subsequent movement of the remaining user relative to the midpoint is of reduced importance because it is already known which user remains in the bed and which has left.
If the user who left the bed returns, it may be assumed that the user will return to the same side of the bed as previously determined at block 1325. For example, if the user on the far side of the bed leaves, it may be assumed that when that user returns, they will again be the user on the far side of the bed.
At block 1340, an analysis may be performed to determine the sleep state of the individual user. The spectral summation engine and neural network may be used to analyze data mapped to a single user. Sleep state detection continues to be performed for a single user even if another user has been out of bed (or otherwise no longer detected).
In fig. 14 to 17, beam steering modules are detailed. The beam steering modules of fig. 14 and 16 perform receive-side beam steering. The particular radar subsystem used may reside on a single integrated circuit, such as a BGT60 radar chip. As detailed with respect to fig. 15, there may be a single transmit antenna and multiple (e.g., 3) receive antennas. In other embodiments, multiple transmit antennas may be present to perform transmit-side beam steering in addition to, or in lieu of, receive-side beam steering.
The beam steering module may process the data provided by the radar subsystem 205 from each antenna before the radar processing module 210 processes the data. The beam steering module may be used to perform beam steering so as to emphasize reflected radio waves received from the direction in which the user sleeps and to attenuate radio waves reflected from other directions, such as from static objects (e.g., walls, a headboard, etc.). It should be understood that the term beam forming may be used interchangeably with the term beam steering. The beam steering module as detailed herein may be used with any detailed embodiment of a contactless sleep tracking device and associated methods. The beam steering module may function in the analog domain or the digital domain. If the radar subsystem outputs digital data, the beam steering module may use digital components and function entirely in the digital domain.
In a digital embodiment, the functions of the beam steering module may be implemented as software executed by the same processor or processors as radar processing module 210. Alternatively, the functions of the beam steering module may be implemented by dedicated hardware or incorporated as part of the radar subsystem 205.
Fig. 14 illustrates an embodiment 1400 of a beam steering module 1410 for aiming a direction in which sleep tracking is performed. The beam steering module 1410 may represent one embodiment of the beam steering module 230. In general, beam steering module 1410 may apply weights to each antenna data stream received from radar subsystem 205, sum the weighted inputs, and output a combined weighted antenna data stream to radar processing module 210. The applied weights may introduce delays to the input of a particular antenna, which may be achieved by the weights being complex values. By introducing delays to one or more antenna data streams received from the antennas, the antenna receive beam can be effectively controlled.
In embodiment 1400, three digital antenna data streams 1420 (1420-1, 1420-2, 1420-3) are received from radar subsystem 205, each digital antenna data stream corresponding to a separate antenna. Thus, in this embodiment, there are three antennas as part of the radar subsystem 205. In other embodiments, the radar subsystem 205 may have fewer (e.g., 2) or more (e.g., 4, 5, 6, 7, or more) antennas, each with a corresponding raw antenna data stream digitally output to the beam steering module 1410.
Mixer 1430 and combiner 1440 may represent beam steering system 232. Each of the antenna data streams 1420 may be input to a separate mixer of the mixer 1430. Mixer 1430 may be implemented digitally and thus may represent a software process. Mixer 1430-1 mixes antenna data stream 1420-1 with weights represented by complex values output by channel weighting engine 231. Mixer 1430-2 mixes the antenna data stream 1420-2 with weights (which may be the same as or different from the weights applied at mixer 1430-1) output by channel weighting engine 231. Mixer 1430-3 mixes the antenna data stream 1420-3 with weights (which may be the same as or different from each weight applied at mixers 1430-1 and 1430-2) that are output by channel weighting engine 231.
The channel weighting engine 231, which may represent a software process, may perform a training process to determine a value (e.g., complex value) representing the weight that should be output to each mixer 1430. In other embodiments, the channel weighting engine 231 may be performed by separate dedicated hardware or hardware incorporated as part of the radar subsystem 205. The digital signal representing the weights output by the channel weighting engine 231 may effectively apply greater or lesser delays to each antenna data stream 1420. The weights applied via mixer 1430 may be normalized to 1. Thus, the sum of the three weights applied in embodiment 1400 may sum to 1.
The beam steering system 232 and beam steering module 1410 may be used to implement weighted delay-and-sum (WDAS) beam steering via mixers 1430. Equation 4 details how WDAS may be implemented:
y = Σ_i w_i x_i (summed over the N receive antenna data streams)    Equation 4
In equation 4, w_i represents the channel weight for the i-th antenna, which may be a complex value a_i that introduces a phase delay and an amplitude scaling to that receive antenna's signal; x_i represents the input digital radar data (e.g., FMCW radar chirp data) received from radar subsystem 205 for the i-th antenna; and y represents the combined output stream. The weights output by the channel weighting engine 231 may be determined by performing a least squares optimization procedure. The least squares optimization procedure may be performed according to equation 5.
minimize ||y - Xw||²    Equation 5
In equation 5, y represents vectorized data generated using the target beam. X represents the antenna data stream data received from the radar subsystem 205; w represents the weight to be learned by the channel weighting engine 231. As part of the training process for determining the most efficient weights for the user, various weights may be tested (e.g., randomly in a pattern) in an attempt to obtain the minimized output of equation 5. For example, if a sufficient number of random weights are tested, it may be expected that a minimized output value is obtained within a certain amount of error. By minimizing the output values according to a least squares optimization process, the weights corresponding to the beam directions closest to the location within the bed of the targeted user can be obtained. These weights may then be used for future monitoring by the user. Periodically or occasionally, retraining may be performed to compensate for the user moving in the bed and/or the orientation and/or position of the sleep detection apparatus being changed.
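For illustration only, the following Python sketch shows one way the weighted delay-and-sum combination of Equation 4 and the least-squares weight search of Equation 5 could be expressed in the digital domain; the array shapes, the direct least-squares solve (used here in place of testing random or patterned candidate weights), and all function names are assumptions rather than details taken from this description.

```python
import numpy as np

def wdas_combine(antenna_streams, weights):
    """Equation 4 (sketch): multiply each antenna's complex chirp data by its
    complex channel weight (phase delay plus amplitude scaling) and sum across
    antennas. antenna_streams: (num_antennas, num_chirps, samples_per_chirp)."""
    return np.tensordot(np.asarray(weights, dtype=complex), antenna_streams, axes=([0], [0]))

def learn_channel_weights(X, y):
    """Equation 5 (sketch): solve minimize ||y - X w||^2 for complex weights w.
    X: (num_samples, num_antennas) antenna data; y: vectorized target-beam data."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Toy example with 3 receive antennas (assumed sizes throughout).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3)) + 1j * rng.standard_normal((500, 3))
true_w = np.array([0.5, 0.3 * np.exp(1j * 0.4), 0.2 * np.exp(-1j * 0.9)])
y = X @ true_w
w_hat = learn_channel_weights(X, y)            # recovers weights close to true_w

streams = rng.standard_normal((3, 10, 64)) + 1j * rng.standard_normal((3, 10, 64))
combined = wdas_combine(streams, w_hat)        # (10, 64) stream for the radar processing module
```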
Before use, weights may be determined offline to compensate for the known tilt of the radar subsystem, such as indicated in fig. 3A by directions 350 and 352. When a user is present, an optimal direction is determined for the user, such as by sweeping or randomly selecting weights. When the user is absent, one or more directions to stationary objects that produce significant reflections may be determined so that these one or more directions may be avoided when aiming at the user.
It should be appreciated that learning processes other than the least squares optimization process may be performed by the channel weighting engine 231. For example, in some embodiments, the user may assist in the training process by providing an input indicating a direction from the contactless sleep tracking device to where the user is sleeping. In other embodiments, a different form of automatic learning process may be performed to aim the beam at the user.
The channel weighting engine 231 may be triggered to determine weights when the system 200B is booted or turned on. If system 200B detects that it has been moved, such as via an on-board accelerometer, the channel weights may be recalculated.
The weighted antenna data streams 1435 (e.g., 1435-1, 1435-2, and 1435-3) output by mixers 1430 may be received by combiner 1440. Combiner 1440 may output a single summed output 1445 to radar processing module 210. By at least one of the weights applied by mixers 1430 (which causes a delay) being different from the other weights applied by mixers 1430, the beam is effectively steered in a direction that may have vertical and/or horizontal components. The processing of the radar processing module 210 may be performed as described in detail with respect to fig. 2A and 2B.
Fig. 15 illustrates an embodiment of a possible antenna layout of a radar subsystem 1500. Radar subsystem 1500 may represent an embodiment of an integrated circuit that functions as radar subsystem 205. The entire IC may have dimensions of 6.5mm (length 1505) by 5mm (width 1504). In other embodiments, the entire IC has a length 1505 times a width 1504 of between 7mm by 7mm and 4mm by 4 mm. The illustrated embodiment of the radar subsystem 205 has three receive antennas and one transmit antenna, but other embodiments may have a greater or lesser number of antennas. The radar subsystem 1500 may have receive antennas 1510-1, 1510-2, and 1510-3 distributed in an "L" pattern. That is, antennas 1510-1 and 1510-2 may be aligned on axis 1501 and antennas 1510-2 and 1510-3 may be aligned on axis 1502 perpendicular to axis 1501, as shown in FIG. 15. The center of antenna 1510-2 may be located 2.5mm or less from the center of antenna 1510-1. The center of antenna 1510-2 may be located 2.5mm or less from the center of antenna 1510-3.
The transmit antenna 1510-4 may be arranged separately from the L-shaped pattern of the receive antennas 1510-1, 1510-2, and 1510-3. That is, in some embodiments, the center of transmit antenna 1510-4 is not located on an axis of antenna 1510-3 that is parallel to axis 1501. In some embodiments, transmit antenna 1510-4 is on axis 1503 centered at antenna 1510-1, axis 1503 being parallel to axis 1502.
Each antenna 1510 may be a hollow rectangular Dielectric Resonant Antenna (DRA). Each antenna 1510 may have the same set of dimensions. Alternatively, each of the receive antennas 1510-1, 1510-2, and 1510-3 may have the same size and the transmit antenna 1510-4 may be different in size from the receive antenna. In some embodiments, the transmit antenna 1510-4 has a greater width, such as 0.2mm greater, but the same length, than the receive antennas 1510-1, 1510-2, and 1510-3.
In such an arrangement, the phase delay introduced by the weight applied between the antenna data stream of antenna 1510-1 and the data stream of antenna 1510-2 may affect the vertical direction of the receive beam, and the phase delay introduced by the weight between the antenna data stream of antenna 1510-2 and the data stream of antenna 1510-3 may affect the horizontal direction of the receive beam (assuming the radar subsystem integrated circuit is mounted in approximately the orientation shown within the contactless sleep tracking device).
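As a generic illustration of why a phase delay between adjacent antennas steers the receive beam along the corresponding axis of the "L", the sketch below uses the standard two-element phased-array relationship; the 2.5 mm spacing comes from the layout described above, while the 60 GHz operating frequency and the formula itself are general phased-array assumptions rather than values stated in this passage.

```python
import numpy as np

C = 3.0e8                 # speed of light, m/s
FREQ_HZ = 60e9            # assumed operating frequency (60 GHz-class radar); not stated in this passage
WAVELENGTH = C / FREQ_HZ  # ~5 mm
D_SPACING = 2.5e-3        # antenna center-to-center spacing from the layout above, meters

def phase_delay_for_angle(theta_deg):
    """Phase delay (radians) between two adjacent antennas needed to steer the
    receive beam theta_deg away from broadside, using the generic two-element
    relationship delta_phi = 2*pi*d*sin(theta)/lambda (an assumption, not Equation 4)."""
    theta = np.deg2rad(theta_deg)
    return 2.0 * np.pi * D_SPACING * np.sin(theta) / WAVELENGTH

# Steering ~10 degrees off broadside along one axis of the "L":
delta_phi = phase_delay_for_angle(10.0)
weight = np.exp(-1j * delta_phi)   # complex weight applied to one antenna's data stream
print(delta_phi, weight)
```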
In some embodiments, separate antennas are used for transmission and reception. For example, antennas 1510-4 may be dedicated to transmitting and antennas 1510-1, 1510-2, and 1510-3 dedicated to receiving.
As described, the use of a radar subsystem in which all antennas are located on a single relatively compact integrated circuit chip has been found to achieve a good balance of cost savings, reasonable ability to perform receive-side beam steering, and antenna patterns that are wide enough in the horizontal plane to cover common bed sizes (e.g., large, extra large, full, double). At the same time, the compactness of a device incorporating such a radar subsystem allows it to be placed close enough to the bed (e.g., within 1 meter) while also serving as a personal assistant, including an alarm clock function (which can replace an alarm clock), a home control center, and/or an entertainment touch screen device.
Although the beam steering module of embodiment 1400 does not consider the arrangement of antennas 1510 relative to each other, embodiment 1600 does consider the topology of the antenna arrangement of radar subsystem 1500. In other embodiments, antenna 1510 may be arranged in a pattern other than "L".
Fig. 16 illustrates an embodiment 1600 of a beam steering module for aiming the direction in which sleep tracking is performed. In embodiment 1600, the antenna arrangement (i.e., antenna topology) of radar subsystem 205 is considered. By taking into account the antenna topology, more accurate beam steering may be performed, which may result in more accurate tracking of the user while sleeping in the user's bed. Antenna 1510-1 corresponds to antenna data stream 1420-1, antenna 1510-2 corresponds to antenna data stream 1420-2, and antenna 1510-3 corresponds to antenna data stream 1420-3. That is, the phase delay added between the data streams of antenna 1510-2 and antenna 1510-3 of radar subsystem 205 is used for horizontal beam aiming and the phase delay added between the data streams of antenna 1510-2 and antenna 1510-1 is used for vertical beam aiming. Depending on whether the data stream of antenna 1510-2 is being used for vertical or horizontal beam aiming, different weights may be applied using separate digitally implemented mixers.
As in embodiment 1400, in embodiment 1600, separate digital antenna data streams 1420 are received from each antenna of radar subsystem 205. Mixer 1630 and combiner 1640 may represent beam steering system 232 of fig. 2B. In embodiment 1600, beam steering module 1610 has four mixers 1630 (1630-1, 1630-2, 1630-3, and 1630-4). Similar to embodiment 1400, the values (e.g., complex values) output by channel weighting engine 231 may be mixed with each antenna data stream of antenna data streams 1420. However, different weights may be mixed with the antenna data stream 1420-2 and two weighted outputs created for horizontal and vertical beam pointing, respectively. Antenna data stream 1420-1 may have weights applied via mixer 1630-1 and may be combined with antenna data stream 1420-2 (which has weights applied via mixer 1630-2) via combiner 1640-1. The weights applied at mixers 1630-1 and 1630-2 may sum to a normalized value of 1. Antenna data stream 1420-3 may have weights applied via mixer 1630-4 and may be combined via combiner 1640-2 with antenna data stream 1420-2 having weights applied via mixer 1630-3. The weights applied at mixers 1630-3 and 1630-4 may sum to a normalized value of 1.
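A minimal sketch of the four-mixer, two-combiner structure just described, in which the stream of antenna 1510-2 is reused with different weights for the vertical and horizontal pairings; the weight values, normalization, and function names are illustrative assumptions.

```python
import numpy as np

def combine_for_axes(stream_1, stream_2, stream_3, w):
    """Sketch of the embodiment-1600-style combination.

    stream_1/2/3: complex arrays for antennas 1510-1, 1510-2, 1510-3.
    w: four complex weights, one per mixer 1630-1..1630-4; each pair is assumed
       to be normalized so that its magnitudes sum to 1.
    Returns (vertical_output, horizontal_output) for separate processing chains."""
    vertical = w["m1"] * stream_1 + w["m2"] * stream_2    # antennas 1510-1 and 1510-2
    horizontal = w["m3"] * stream_2 + w["m4"] * stream_3  # antennas 1510-2 and 1510-3
    return vertical, horizontal

rng = np.random.default_rng(2)
s1, s2, s3 = (rng.standard_normal((10, 64)) + 1j * rng.standard_normal((10, 64)) for _ in range(3))
weights = {"m1": 0.5, "m2": 0.5 * np.exp(1j * 0.2),
           "m3": 0.5, "m4": 0.5 * np.exp(-1j * 0.4)}
out_vertical, out_horizontal = combine_for_axes(s1, s2, s3, weights)
```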
The channel weighting engine 231 may be implemented similarly to that implemented in embodiment 1400. The channel weighting engine 231 may perform a least squares optimization process or some other optimization process to determine the optimal or near optimal direction of the receive beam. The channel weighting engine 231 may generate four outputs for weighting in embodiment 1600 instead of three as in embodiment 1400. Thus, if a pattern or set of random values that are output for the weights is used as part of the least squares optimization process, then more sets of output values in embodiment 1600 can be tested to obtain an optimized set of output values for setting the weights than in embodiment 1400.
Two outputs 1645, 1645-1 and 1645-2, may be output to radar processing module 1650. Separate processing may then be performed on output 1645-1 and output 1645-2 by radar processing module 1650. At a high level, processing by radar processing module 1650 may be performed separately on each of outputs 1645 until direction is no longer relevant during processing. In embodiment 1600, separate instances of the moving filter, frequency multiplier, and range-vital sign transformation engine may be applied to each output 1645, and the results may then be averaged or summed together. More specifically, the output 1645-1 may be output to the moving filter 1651-1, followed by the frequency multiplier 1652-1, followed by the range-vital sign transformation engine 1653-1. The output 1645-2 may be output to the moving filter 1651-2, followed by the frequency multiplier 1652-2, followed by the range-vital sign transformation engine 1653-2. The moving filters 1651, frequency multipliers 1652, and range-vital sign transformation engines 1653 may function as described in detail with respect to the moving filter 211, frequency multiplier 212, and range-vital sign transformation engine 213 of fig. 2A and 2B. The outputs of the range-vital sign transformation engine 1653-1 and the range-vital sign transformation engine 1653-2 may be summed or combined using a combiner 1654. The output of the combiner 1654, which may represent an average of the outputs of the range-vital sign transformation engines 1653, may be processed by the range gating filter 214 and subsequent components, as described in detail with respect to fig. 2A and 2B.
Because the radar processing module 1650 and the beam steering module 1610 may be executed as software processes by a general purpose processor (or processors), implementing the more complex mixing and weighting, along with multiple instances of the moving filter 1651, frequency multiplier 1652, and range-vital sign transformation engine 1653, may only require that sufficient processing power be available. Thus, assuming such processing power is available, no hardware changes to the contactless sleep tracking device may be needed to implement embodiment 1600 instead of embodiment 1400. In some embodiments, embodiment 1400 may be implemented and, if the sleep tracking results are inaccurate, embodiment 1600 may be implemented (or vice versa). Advantageously, in some embodiments where the contactless sleep tracking device 300 comprises a smart home management device (e.g., a Nest home center) with integrated radar functionality, the smart home management device being a network-connected combination of a smart speaker, home assistant, and touch screen-based control and/or entertainment center, improvements to the parameter calculation methods and even the overall radar processing algorithm may be delivered by a central cloud server via software updates pushed over the internet as needed.
While the receive side beam steering aspects of the embodiments 1400 and 1600 of fig. 14 and 16, respectively, are implemented in the digital domain, the functions of the beam steering modules 1410 and 1610 may be implemented in the analog domain using analog components. If such beam control is performed in the analog domain, conversion to the digital domain may be performed after such analog beam control is performed, such that digital data is provided to the radar processing module 210 or 1650.
Various methods may be performed using embodiments of beam steering modules, such as beam steering modules 1410 and 1610 of fig. 14 and 16, respectively. Fig. 17 illustrates an embodiment of a method 1700 for directionally targeting sleep tracking (or, possibly, for some other form of tracking, such as for coughing as detailed with respect to fig. 18-21). Method 1700 may be performed using a system such as that found in embodiments 1400 and 1600. In some such embodiments, there may be the antenna topology of fig. 15 or some similar L-shaped topology. Method 1700 may be performed by such a system incorporated as part of device 300 of fig. 3A and 3B.
Method 1700 may be performed in combination with any of the detailed methods described previously. Thus, there is overlap in the various blocks that are performed as part of the various methods described in detail below.
At block 1705, radio waves are transmitted. The emitted radio waves may be continuous-wave radar, such as FMCW radar. The raw waveform data passed to the radar processing module may include waveform data indicative of continuous sparse reflection chirps, resulting either from the radar subsystem operating in a continuous sparse sampling mode or from the radar subsystem operating in a burst mode with a conversion process being performed to simulate raw waveform data generated by a radar subsystem operating in a continuous sparse sampling mode. The radio waves transmitted at block 1705 may be transmitted according to the FMCW radar scheme of fig. 2C. The transmitted radio waves may be transmitted by RF transmitter 206 of radar subsystem 205. At block 1710, reflections of the radio waves may be received, such as by multiple antennas of the RF receiver 207 of radar subsystem 205. The reflections received at block 1710 may include reflections from moving objects (e.g., a person with a heartbeat and breathing) and stationary objects. For each FMCW chirp transmitted at block 1705, multiple samples of reflected RF intensity, such as 64 samples, may be measured at block 1710. In other embodiments, a fewer or greater number of samples may be measured. There may be a phase shift in the radio waves reflected by a moving object. Blocks 1705 and 1710 may correspond to blocks executed as part of one or more other methods detailed herein, such as blocks 805 and 810 of method 800.
At block 1715, raw waveform data, which may also be referred to as raw chirp waterfall data, may be created based on the reflected radio waves received by each antenna. The reflected radio waves may indicate a distance and a phase shift. At a given frequency, such as 10Hz, a plurality of samples, such as 64 samples, may be taken. For each of these samples there may be intensity and phase shift data, which may be output as a digital antenna data stream, with a separate antenna data stream for each antenna used to receive reflected radio waves. Further processing may be performed in the digital domain. In other embodiments, the antenna data streams may be output as analog data by the radar subsystem and the weighting process may be performed in the analog domain. Over time, a window of raw waveform data may be created and stored in a buffer for analysis. Referring to fig. 2, block 1715 may be performed by radar processing module 210.
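One way to picture the per-antenna buffering of raw chirp waterfall data described here is a fixed-length rolling buffer that appends one row of 64 complex samples per chirp; the chirp rate, window length, and class name below are assumptions used only for illustration.

```python
from collections import deque
import numpy as np

SAMPLES_PER_CHIRP = 64     # samples measured per FMCW chirp, per the text
CHIRP_RATE_HZ = 10         # assumed chirp rate used for sizing the window
WINDOW_SECONDS = 30        # assumed analysis window length (not stated in this passage)

class ChirpWaterfallBuffer:
    """Rolling buffer of raw chirp data for a single antenna data stream."""

    def __init__(self):
        self.buffer = deque(maxlen=CHIRP_RATE_HZ * WINDOW_SECONDS)

    def append_chirp(self, samples: np.ndarray) -> None:
        """samples: complex array of shape (SAMPLES_PER_CHIRP,) holding
        intensity and phase information for one reflected chirp."""
        assert samples.shape == (SAMPLES_PER_CHIRP,)
        self.buffer.append(samples)

    def window(self) -> np.ndarray:
        """Return the buffered waterfall as (num_chirps, SAMPLES_PER_CHIRP)."""
        return np.array(self.buffer)

buf = ChirpWaterfallBuffer()
rng = np.random.default_rng(3)
for _ in range(100):
    buf.append_chirp(rng.standard_normal(64) + 1j * rng.standard_normal(64))
waterfall = buf.window()   # handed off to downstream radar processing
```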
At block 1720, a learning process may be performed to determine weights to be applied to each received antenna data stream. As detailed with respect to channel weighting engine 231, various values used as weights may be tested and the most efficient set of weights may be determined, which results in beam steering that best aims at the user's location. The applied value may be a complex value and thus may act to introduce a phase delay into one or more received antenna data streams. Such introduced delays may effectively aim the receive antenna beam in a particular direction, which may have vertical and/or horizontal components.
The learning process performed as part of block 1720 may involve a least squares optimization process being performed, or some other form of optimization. In some embodiments, a particular direction may be locked or restricted for beam steering purposes. For example, in the horizontal direction, it may be desirable for the beam to be at 90° to the surface of the contactless sleep tracking device, such as shown in fig. 3A. Alternatively, the beam may be limited to varying within a limited range (e.g., 10°) of orthogonal to the face of the contactless sleep tracking device. Additionally or alternatively, the values used for weighting may compensate for a vertical tilt angle of a display of the contactless sleep tracking device, such as indicated in fig. 3A with reference to directions 350 and 352. Thus, the values used to determine the optimal angle may be limited to a particular range such that the vertical and/or horizontal direction of the beam remains within a particular range (e.g., horizontal +/-10°, vertical +2° to -25°).
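If the weight search is parameterized by explicit horizontal and vertical angles, the range limits mentioned above could be enforced by clamping candidate directions before converting them into weights, as in the short sketch below; the parameterization and function name are assumptions, while the numeric limits reuse the examples from the text.

```python
def clamp_steering_angles(horizontal_deg: float, vertical_deg: float) -> tuple[float, float]:
    """Constrain a candidate beam direction to the example ranges from the text:
    horizontal within +/-10 degrees of broadside, vertical within +2 to -25 degrees."""
    h = max(-10.0, min(10.0, horizontal_deg))
    v = max(-25.0, min(2.0, vertical_deg))
    return h, v

# A candidate direction of (15 deg, -30 deg) is pulled back to the allowed (10 deg, -25 deg).
print(clamp_steering_angles(15.0, -30.0))
```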
After the learning process of block 1720 is complete, blocks 1705, 1710, and 1715 continue to be performed such that the antenna data stream continues to be output by the radar subsystem. At block 1725, the value determined at block 1720 to be used as a weight may be applied to the antenna data stream to perform beam steering while performing a sleep tracking procedure for one or more users. At block 1730, the weighted antenna data streams may be combined, such as by summing the data streams together. Block 1730 may involve summing all weighted antenna data streams together to create a single output stream, such as in embodiment 1400. Block 1730 may also include creating multiple output streams by summing together different sets of weighted antenna streams, such as in embodiment 1600. As in embodiment 1600, a particular antenna data stream may be used twice, applying different weights for horizontal and vertical aiming of the receive antenna beam.
At block 1735, sleep tracking may be performed using the one or more combined and weighted antenna data streams. In some embodiments, if a single output exists from block 1730, such as in embodiment 1400, the processing may be performed by radar processing module 210 as detailed with respect to moving filter 211, frequency multiplier 212, range-vital sign transformation engine 213, range gating filter 214, spectral summation engine 215, and neural network 216. In other embodiments, if more than one output is present from block 1730, such as in embodiment 1600, at least some of the processing of radar processing module 210 may be performed separately for each weighted and combined antenna data stream. For example, the processing of the moving filter, frequency multiplier, and range-vital sign transform may be applied to each weighted and combined antenna data stream separately. After such separate processing, the processed data streams may be averaged together and further processing, such as by range gating filter 214, spectral summation engine 215, and neural network 216, may be performed as part of the sleep tracking process, as detailed with respect to fig. 2A and 2B. While block 1735 focuses on sleep tracking, block 1735 may additionally or alternatively focus on cough attribution based on user movement, as detailed with respect to fig. 18-21.
Embodiments of the sleep tracking devices detailed herein may also be used as cough attribution devices. Alternatively, in some embodiments, the devices detailed herein do not perform sleep tracking functions but instead perform cough detection and attribution functions. When presenting sleep data to a user, cough data may be incorporated therein, such as the time and number of coughs of a particular user. Furthermore, cough trends over time may be monitored for a particular user. The user may be informed of how their cough level has increased or decreased over time (e.g., days, weeks, months, or even years).
Fig. 18 illustrates an embodiment of a cough detection and attribution system 1800 ("cough attribution system 1800"). In some embodiments of the contactless sleep tracking device 101, the functionality of cough attribution system 1800 is incorporated. Alternatively, cough attribution system 1800 may be implemented in a device that does not perform sleep tracking functions. Cough attribution system 1800 may include: radar subsystem 205 (which may represent an embodiment of radar subsystem 120); and radar processing module 210 (which may represent an embodiment of radar processing module 112) or radar processing module 1010 (which may also represent an embodiment of radar processing module 112). In some embodiments, a beam steering module, such as beam steering module 230, may be incorporated as part of cough attribution system 1800. In other embodiments, no beam steering module is present. Advantageously, by virtue of using radar and audio, system 1800 is able to perform cough detection and attribution without any physical contact with the monitored user or the bed of the monitored user.
Cough attribution system 1800 may include: a microphone 134; radar subsystem 120 (which may be radar subsystem 205); cough detector 1810; radar processing module 210 (or 1010); cough data store 1825; a cough decision engine 1820; cough data compilation engine 1830; a display 140; a wireless network interface 150; and a speaker 155. Any components of cough detector 1810, radar processing module 210 (or 1010), cough data store 1825, cough decision engine 1820, cough data compilation engine 1830, which may represent software processes executed using one or more processors, may be executed locally or may be executed remotely using a cloud-based server system.
Microphone 134 may continuously receive audio and output data to cough detector 1810 based on the received audio. In some embodiments, microphone 134 is used to monitor various forms of audio of the surrounding environment, such as coughs, disturbances, or spoken commands, which may be triggered by a particular keyword or key phrase. In some embodiments, multiple microphones are present as part of the cough attribution device. The audio streams from such separate microphones may be combined or analyzed separately. In some embodiments, audio is monitored for coughs and/or disturbances only when radar processing module 210 or radar processing module 1010 has detected that the user is in bed, which is an advantageous feature for bedside monitoring devices, as many users emit a significant amount of noise before getting in bed but generally tend to be quieter once in bed. Such an automatic bed-entry detection mode may avoid the need for specific voice commands or button presses to initiate the cough monitoring process. Alternatively, or as an optional gating overlay to such a feature, audio monitoring may require explicit authorization from the user each time it is activated. Preferably, cough attribution system 1800 is configured such that all audio monitoring can be easily and verifiably disabled by the user at any time. For example, a hardware-based mechanical switch may be provided that disables all of the onboard microphones of cough attribution system 1800.
Cough detector 1810 may be a software-based process, performed by a processing system including one or more processors, that determines whether a cough is present in an audio stream received from microphone 134. Cough detector 1810 may be executed by the same processing system that executes radar processing module 210, or may be executed by a separate processing system. Cough detector 1810 may include a trained machine learning model that analyzes a received audio stream and outputs an indication of whether a cough is present. When a cough is identified, a time stamp may be output along with an indication of the presence of the cough. Additionally or alternatively, different forms of detectors may be implemented for detecting sounds other than coughs. For example, a snore detector may be implemented in addition to or in place of cough detector 1810. Additionally or alternatively, a speech detector may be implemented to detect the user talking in their sleep. Similar components may be implemented for user scratching, sneezing, flatulence, hiccups, and/or some other action or bodily function that may be recognized using audio.
The trained machine learning model may include a neural network trained using truth-labeled training data including various identified coughs and audio samples that do not include coughs. In other embodiments, the trained machine learning model may analyze received audio using an arrangement other than a neural network. While cough detector 1810 may determine whether a cough is present based on the audio stream received from microphone 134, radar may be used to determine whether the monitored user is the source of the cough.
After the audio stream is analyzed by the cough detector 1810, the audio stream received from the microphone 134 may be deleted or otherwise discarded, such that no audio captured by microphone 134 for cough analysis is retained. Thus, even though the audio captured via microphone 134 is used for cough detection, the user does not need to worry about privacy concerns because the audio is discarded after cough detection has been performed on the audio stream.
The radar subsystem 120 may function as detailed in relation to fig. 1, 2A and 2B. Raw radar data based on the detected reflected radio waves of the FMCW radar system may be output to the radar processing module 210. In some embodiments, the one or more data streams output by radar subsystem 120 may first be beam steered by a beam steering module, such as beam steering module 230, which is described in detail with respect to fig. 14-17.
The radar processing module 210 may function as described in detail with respect to fig. 2A and 2B. As detailed previously, the output of the neural network 216 may be used to determine a state within the state machine 500, such as whether the user is present in the bed and moving or is present in the bed and not moving (other than vital signs). Thus, the output of the neural network 216 (or some other form of classification engine), indicating: 1) whether the user is in bed; and 2) whether the user is moving (beyond just vital signs), may be output to the cough decision engine 1820. In some embodiments, the state output by radar processing module 210 may include a timestamp.
Cough decision engine 1820 may be a software process executed by a processing system that also performs the functions of radar processing module 210 and/or cough detector 1810. (It should be appreciated that a snore decision engine may be used in addition to or in lieu of the cough decision engine 1820 for other detected sounds, such as snoring.) The processing system may have one or more processors. The cough decision engine 1820 may analyze the indication of the presence of a cough received from the cough detector 1810 in combination with the indication of the user's movement in bed received from the radar processing module 210. The time stamp of the cough detected by cough detector 1810 and the time stamp of the detected movement in the bed may need to be within a sufficiently small period of time of each other for cough decision engine 1820 to determine that the cough is responsible for the user's movement. For example, if user movement is detected within a predefined period of time, such as within a range extending from one second before to three seconds after a cough is detected, the cough and the movement may be determined to be related (i.e., the cough caused the movement). Because analyzing the radar data may take more processing time than analyzing the audio data, a sufficiently large time range may be needed to identify the events as related. In some embodiments, the time window is +/-1 second. In other embodiments, the time window is larger (e.g., +/-2 seconds) or smaller (e.g., +/-0.7 seconds).
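The timestamp comparison performed by the cough decision engine can be pictured as a simple window test, sketched below; the asymmetric window of one second before to three seconds after the cough reuses the example from the text, and the event representation and function names are assumptions.

```python
from dataclasses import dataclass

WINDOW_BEFORE_S = 1.0   # movement up to 1 s before the detected cough (example from the text)
WINDOW_AFTER_S = 3.0    # movement up to 3 s after the detected cough (example from the text)

@dataclass
class Event:
    timestamp: float    # seconds since some common epoch
    kind: str           # "cough" (from the audio path) or "movement" (from the radar path)

def attribute_cough(cough: Event, movements: list[Event]) -> bool:
    """Return True if any radar-detected movement of the monitored user falls
    inside the predefined window around the audio-detected cough."""
    lo = cough.timestamp - WINDOW_BEFORE_S
    hi = cough.timestamp + WINDOW_AFTER_S
    return any(lo <= m.timestamp <= hi for m in movements if m.kind == "movement")

# Example: a cough at t=100.0 with user movement at t=101.2 is attributed to the user.
cough = Event(timestamp=100.0, kind="cough")
moves = [Event(timestamp=101.2, kind="movement")]
assert attribute_cough(cough, moves)
```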
If a cough is detected by cough detector 1810 but cough decision engine 1820 does not attribute the cough to the monitored user based on data received from radar processing module 210, cough decision engine 1820 may discard the information about the cough because it is not relevant to the monitored user. Alternatively, even if the cough is not identified as originating from the monitored user, the cough may be treated as an audio event, such as described in detail with respect to fig. 6. If a cough (produced by some other person or thing) is identified as an audio event that causes the monitored user to wake up, the cough may be treated like any other audio event or may be specifically stored as a cough event that woke the monitored user.
When a cough is detected and attributed to the monitored user, the cough decision engine 1820 can store an indication of the cough and a time stamp of the cough to a cough data store 1825 of the cough attribution system 1800. (additionally or alternatively, if other forms of interference are detected, such as snoring, the cough data store 1825 may be used to store such data, or separate data warehousing may be used.) in some embodiments, the indication of the cough and the timestamp of the cough may be output to a cloud-based server system for storage. In some embodiments, an indication of the severity of the cough may be stored based on the magnitude of the audio analyzed by the cough detector 1810 (e.g., whether the cough is a small, medium, or large cough based on a threshold-based volume analysis or based on some other form of determination that uses a threshold criteria based at least in part on sound volume). In some embodiments, a continuous series of coughs may be considered a single cough event and an indication of the duration of the cough event may be stored.
Cough data store 1825 may be incorporated as part of sleep data store 118 or may be separately stored data. For example, cough data may be stored in combination with sleep data. Cough data store 1825 may represent a non-transitory processor-readable medium, such as memory.
The cough data compilation engine 1830 may continuously or periodically analyze data from the cough data store 1825, such as once a day, possibly when the user wakes up in the morning. Cough data compilation engine 1830 may generate a night report that presents data about the user's coughs during the night. The night report may include information such as: 1) the number of times the user coughed during the night; 2) the duration of such coughs; 3) the time of such coughs; 4) whether the coughs woke the user; and/or 5) the severity of such coughs. Such a night report may be presented using synthesized speech output via speaker 155 and/or may be presented on display 140 using text and/or graphical indicators. Data from the night report may be output via wireless network interface 150 to a cloud-based server system for storage and/or further analysis. In other embodiments, raw cough data from the cough decision engine 1820 is output to a cloud-based storage system for analysis. For example, the functionality of cough data compilation engine 1830 may be performed by a cloud-based server system. The cough data compilation engine 1830 may alternatively or additionally be used to output data regarding the attribution of other sounds, such as snoring, talking, and the like.
The cough data compilation engine 1830 may further generate long-term trend data that is incorporated as part of the night report or is part of a separate long-term trend report. The long-term trend data may be based on cough data analyzed over a period longer than one day or one night. For example, the long-term trend data may analyze the data over a period of time such as: a week, weeks, a month, months, a year, years, or some custom period of time, such as a period of time during which the user identifies that they are suffering from an illness. The long-term trend data may be output to the user as part of a night report or at less frequent intervals, such as once a week, and/or upon user request. Long-term trend data may be used to indicate information to a user, such as: 1) whether the frequency of the user's nocturnal coughs increased, decreased, or remained the same (e.g., within a threshold number of coughs of the user's average number of coughs or some other form of threshold criteria based on the user's average number of coughs); 2) whether the user's cough intensity increased, decreased, or remained unchanged (e.g., within a threshold range of the average intensity or some other form of determination using a threshold criterion based at least in part on intensity); 3) whether the user's cough duration increased, decreased, or remained unchanged (e.g., within a threshold range of average durations or using some other form of determination based at least in part on a threshold criterion of cough duration); and/or 4) whether a cough became more likely, less likely, or remained about equally likely to wake the user from sleep. In some embodiments, long-term trend data is output when one of the trends is notable, such as when the cough frequency of the user has increased significantly.
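As a rough illustration of the kind of threshold-based trend comparison described above, the sketch below averages recent nightly cough counts against a longer baseline; the threshold value, function name, and report wording are assumptions rather than details from this description.

```python
from statistics import mean

def cough_frequency_trend(recent_counts, baseline_counts, threshold=1.0):
    """Classify the nightly cough-count trend as 'increased', 'decreased',
    or 'unchanged' relative to the user's historical average.

    recent_counts / baseline_counts: nightly cough counts (e.g. last week vs. prior weeks)
    threshold: allowed deviation, in coughs per night, before a change is reported
               (an assumed threshold criterion, not a value from the text)
    """
    recent_avg = mean(recent_counts)
    baseline_avg = mean(baseline_counts)
    if recent_avg > baseline_avg + threshold:
        return "increased"
    if recent_avg < baseline_avg - threshold:
        return "decreased"
    return "unchanged"

# Example: the last week of nights vs. the preceding month of nights.
print(cough_frequency_trend([6, 7, 5, 8, 6, 7, 9], [3, 4, 2, 5, 3, 4, 3, 2]))  # "increased"
```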
Such long-term trend data may be output via speaker 155 using synthesized speech and/or may be represented on display 140 using text and/or graphical indicators. Data from the long-term trend data may be output to a cloud-based server system via wireless network interface 150 for storage and/or further analysis. In some embodiments, long-term trend data for a cough is output along with long-term trend data for a user's sleep. In such embodiments, the functionality of the cough data compilation engine 1830 may be combined with a sleep data compilation engine, such as the sleep data compilation engine 119 of fig. 1. Thus, cough data may be output along with sleep data.
In some embodiments, a single user may be monitored. This may mean that a single user is in bed, or that the person closest to the cough attribution device is being monitored. However, even if a single person is being monitored, there may be other sources of coughs and cough-like sounds nearby, such as other people, animals (e.g., pets), wind or weather, passing vehicles, or speakers. Fig. 19 illustrates an example of a timeline of detected coughs and movement states of a user for a single monitored user. Timeline 1901 illustrates the determined movement state of the user based on radar data generated by radar subsystem 120 and processed by radar processing module 210. Timeline 1902 illustrates when cough detector 1810 detects a cough based on the audio stream from microphone 134. Notably, the presence of a cough in timeline 1902 does not necessarily correspond to the user of timeline 1901, as the cough may originate from a source other than the monitored user.
During time period 1910, a cough is detected based on the audio stream and motion is detected by radar processing module 210. In this case, the cough decision engine 1820 considers the monitored user to have coughed. Similarly, during time period 1940, a cough is detected based on the audio stream and motion is detected by radar processing module 210. Here again, the cough decision engine 1820 considers the monitored user to have coughed. Data indicating the cough, the cough duration, the cough time stamp, and the cough severity may be stored to cough data store 1825.
During time period 1920, two coughs are detected. However, no movement of the user is detected. Thus, while coughs may have been present in the audio, the coughs are not attributed to the user and no cough data is stored for the user for those particular cough instances. In addition to indicating that the cough sound originated from another source, such audio data may also indicate a false positive in which a cough was incorrectly detected based on the audio. Whether the cough detection is a false positive or the cough originated from a source other than the user, the data corresponding to that particular "cough" is not stored in association with the user.
During time period 1930, movement of the user in the bed is detected by radar subsystem 120. This motion represents a significant motion that is greater than the user's movement due to breathing or the user's heartbeat. However, based on the captured audio stream, no cough is detected. Thus, no cough data for time period 1930 is stored for the user.
In some embodiments, multiple users may be monitored during the same time period. For example, two users sleeping in the same bed, such as in fig. 9, may each track their sleep. Additionally or alternatively, the cough of each user may be tracked. If multiple users are tracking sleep and/or coughing, radar processing module 1010 may be used in place of radar processing module 210. In such an embodiment, the cough decision engine 1820 may receive two inputs from the radar processing module 1010, thereby receiving separate inputs for each user. Additional embodiments for three or more users are also possible by adding additional instances of the spectral summation engine and neural network. Thus, a separate output may be provided for each monitored user, indicating whether the user is present in the bed and moving or stationary. Cough detector 1810 may continue to function as detailed with respect to cough attribution system 1800.
FIG. 20 illustrates an example of a timeline of cough and in-bed motion detected for a plurality of monitored users. Here again, timeline 1901 illustrates what movement state the user is in based on radar data generated by radar subsystem 120 and processed by radar processing module 1010. Timeline 1902 illustrates when a cough is detected by cough detector 1810 based on the audio stream from microphone 134.
During time period 1910, a cough is detected based on the audio stream and movement of the first user is detected by radar processing module 1010. In this case, the cough decision engine 1820 considers the monitored first user to have coughed. Similarly, during time period 1940, a cough is detected based on the audio stream and movement of the first user is detected by radar processing module 1010. Here again, the cough decision engine 1820 considers the monitored first user to have coughed. Data indicating the cough, the cough duration, the cough time stamp, and the cough severity may be stored to cough data store 1825, mapped to the first user.
During time period 2010, a cough is detected based on the audio stream and movement of the second monitored user is detected by radar processing module 1010, as indicated by timeline 2001. In this case, the cough decision engine 1820 considers the monitored second user to have coughed. Data indicating the cough, the cough duration, the cough time stamp, and the cough severity may be stored to cough data store 1825, mapped to the second monitored user.
During time period 2020, a cough is detected based on the audio stream, but no motion sufficient to classify any monitored user as moving in the bed is detected. Thus, the cough of time period 2020 is not mapped to any monitored user. During time period 1930, although the first user is moving in the bed, no indication of a cough is stored for either user because no cough is detected based on the audio stream. In some cases, one user may cough vigorously, resulting in both users moving (the coughing user shakes the bed, which causes the other user to move). In such a case, the cough may be attributed to the user exhibiting the greater amount of movement in the bed.
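When both monitored users move within the cough's time window, the attribution rule suggested above can be sketched as picking the user with the larger radar-derived movement; the per-user movement magnitude interface and threshold below are assumptions about how such data might be exposed, not details stated here.

```python
def attribute_cough_multi_user(movement_by_user: dict[str, float],
                               motion_threshold: float = 0.0) -> str | None:
    """Return the user to whom a detected cough should be attributed.

    movement_by_user: radar-derived movement magnitude per monitored user within
                      the cough's time window (assumed to be precomputed).
    motion_threshold: minimum magnitude counted as in-bed movement (assumed).
    Returns the user with the largest movement, or None if nobody moved.
    """
    moving = {u: m for u, m in movement_by_user.items() if m > motion_threshold}
    if not moving:
        return None                       # cough not mapped to any monitored user
    return max(moving, key=moving.get)    # e.g. the user who shook the bed by coughing

# Example: both users moved, but user_1 moved far more, so the cough is attributed to user_1.
print(attribute_cough_multi_user({"user_1": 0.8, "user_2": 0.1}))
```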
It should be appreciated that timelines 1901, 1902, and 2001 are merely examples. The number of detected coughs, whether one or more users are monitored, and the timing of the cough can vary on a case-by-case basis.
Various methods for cough detection and attribution may be performed using the cough attribution system 1800. Fig. 21 illustrates an embodiment of a method 2100 for cough detection and attribution. Method 2100 may be performed using cough attribution system 1800, or some other similar system. Further, cough attribution system 1800 may be incorporated as part of the apparatus 300. For example, when a single user's cough is being monitored, cough attribution system 1800 may be used with radar processing module 210. If two users' coughs are being monitored, system 1800 can be used with radar processing module 1010. Cough attribution system 1800 may additionally or alternatively be used in combination with a beam control module, such as beam control module 230, beam control module 1410, or beam control module 1610, to perform beam control in the direction of one or more users in the bed. Further, it should be appreciated that cough detection and attribution may be performed in conjunction with or separate from sleep tracking. For example, cough detection and attribution may be performed with various embodiments of method 800, method 1300, and/or method 1700. Alternatively, method 2100 may be performed as a stand-alone method separate from methods 800, 1300, and 1700. If method 2100 is performed by device 300 as a stand-alone method, device 300 may be referred to as a contactless cough detection and attribution device. Notably, due to the use of radar and audio, method 2100 can perform cough detection and attribution without any device making physical contact with the monitored user or the bed of the monitored user.
In method 2100, two separate processes may be performed in parallel: an audio monitoring process may be performed in blocks 2105 through 2115 and a radar-based movement monitoring process may be performed in blocks 2120 through 2140. Both of these processes may be repeated and continuously performed as part of method 2100. At block 2105, audio is detected using one or more microphones. In some embodiments, such one or more microphones are located onboard the device performing method 2100, or a remote device with one or more onboard microphones may be used and the audio stream may be transmitted to the cough detection and attribution device for analysis. For example, the remote device may be a separate home assistant or smart speaker device. At block 2110, the audio stream output by the microphone is analyzed to determine whether a cough has occurred. The detection of a cough may be performed using a pre-trained machine learning model, which may be a trained neural network. In some embodiments, cough detection is performed exclusively based on audio. If a cough is detected, an output indicating the presence of the cough and a timestamp of the cough may be created.
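The audio branch of blocks 2105 and 2110 can be pictured as framing the microphone stream and scoring each frame with a pretrained classifier, as in the sketch below; the frame length, sample rate, classifier interface, and the stand-in loudness model are all assumptions, since the actual trained model is not described here.

```python
import numpy as np

SAMPLE_RATE_HZ = 16_000     # assumed microphone sample rate
FRAME_SECONDS = 1.0         # assumed analysis frame length

def detect_coughs(audio: np.ndarray, model, threshold: float = 0.5):
    """Scan an audio stream frame by frame and return timestamps (seconds)
    of frames the pretrained model scores as containing a cough.

    model: any object with a predict_proba(frame) -> float method (assumed interface).
    The audio itself would be discarded after this analysis, per the text.
    """
    frame_len = int(SAMPLE_RATE_HZ * FRAME_SECONDS)
    cough_times = []
    for start in range(0, len(audio) - frame_len + 1, frame_len):
        frame = audio[start:start + frame_len]
        if model.predict_proba(frame) >= threshold:
            cough_times.append(start / SAMPLE_RATE_HZ)
    return cough_times

class DummyCoughModel:
    """Placeholder standing in for a trained neural network."""
    def predict_proba(self, frame: np.ndarray) -> float:
        return float(np.sqrt(np.mean(frame ** 2)) > 0.3)  # loudness stand-in, not a real detector

audio = np.zeros(SAMPLE_RATE_HZ * 5)
audio[SAMPLE_RATE_HZ * 2: SAMPLE_RATE_HZ * 3] = 0.5     # a loud burst during second 2
print(detect_coughs(audio, DummyCoughModel()))          # -> [2.0]
```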
At block 2115, the audio stream created by the microphone may be deleted or otherwise discarded. In some embodiments, no portion of the audio stream is saved (except for an indication of whether a cough is present in the audio stream). If the device performing method 2100 can be used as a home assistant, the audio stream can be temporarily stored if the user speaks a keyword or key phrase that is intended to trigger the user's voice to be interpreted as a command or question.
At block 2120, radio waves are transmitted. The emitted radio waves may be continuous wave radars, such as FMCW. FMCW radar may emit radio waves as detailed with respect to fig. 2C. The radio waves may be emitted by an RF transmitter 206 of the radar subsystem 205. At block 2125, reflections of the radio wave may be received, such as by multiple antennas of RF receiver 207 of radar subsystem 205. The reflection received at block 2125 may be a reflection by a moving object (e.g., a person sleeping in a bed, a person moving in a bed) and a fixed object. Blocks 2120 and 2125 may correspond to blocks performed as part of one or more other methods detailed herein, such as blocks 805 and 810 and/or blocks 1705 and 1710 of method 800.
At block 2130, raw waveform data, which may also be referred to as raw chirped waterfall data, is created based on the received reflected radio waves and output by the radar subsystem. Over time, a window of raw waveform data may be created and stored in a buffer for analysis. The waveform data may be processed using a beam steering module to perform a beam steering process, such as WDAS beam steering, before being provided to a radar processing module for processing. For example, the blocks of method 1700 may be performed at block 2130.
At block 2135, raw waveform data that may have been weighted based on beam steering is analyzed at block 2135. The analysis at block 2135 may be performed in accordance with detailed processing related to the mobile filter 211, the frequency multiplier 212, the range-vital sign transformation engine 213, the range gate filter 214, the spectral summation engine 215, and the neural network 216. As detailed with respect to radar processing module 1010, if multiple users are being monitored, each user may be mapped to an instance of a spectral summation engine and neural network.
At block 2140, based on the output from the neural network 216, a status may be determined for the or each user. The state may be determined according to state machine 500. Thus, the output of block 2140 may be an indication of whether the user is in bed and is moving or stationary (excluding vital signs) in bed. If multiple users are being monitored, the output of block 2140 may be a similar indication for each user. A timestamp may be mapped to each determined state of the output. After block 2140, the radar process may continue to repeat and monitor movement.
At block 2145, it may be determined whether the cough occurred within a predefined time frame of the user's movement. The time stamps mapped to the cough indication and the user movement indication may be used to determine whether the cough and the user movement occur close enough in time that the cough may result in movement. In some embodiments, to determine that block 2145 is affirmative, a cough is detected based on audio, and the user is determined to be moving in bed within a predefined time frame (e.g., state 503). If no cough is detected or the monitored user is not moving in bed, block 2145 may be evaluated negative. If multiple users are being monitored, block 2145 may be evaluated positive for one user and negative for other users. It is also possible that block 2145 is evaluated negative for all users.
At block 2150, in some embodiments, no cough is attributed or recorded to indicate that the monitored user has coughed. In some embodiments, if the user is detected moving in bed (but without a cough) and sleep tracking is being performed, an indication of the user moving in bed may be stored for sleep tracking purposes. After block 2150, the audio monitoring process and the radar-based movement monitoring process may continue to be performed to detect and attribute possible coughs in the future, and block 2145 may continue to be evaluated in the future.
At block 2155, if block 2145 is positively determined, an indication that a cough has occurred may be mapped to the monitored user of the cough and stored. If multiple users are being monitored, the indication may be mapped to a particular monitored user that is determined to have cough. In some embodiments, additional information about the cough may also be stored, such as: duration of cough; the number of coughs in a group of rapid coughs (e.g., cough episodes); and the intensity of the cough. If there are multiple monitored users and a cough can be mapped to a particular user, then as part of method 800, the cough can be an audio event that has caused other users to wake up. At the end of the night, there may be no, one, several or many stored cough indications for a particular monitored user. Blocks 2105 through 2155 may be repeated throughout the night that the user is present in the bed.
At block 2160, the indication of the cough stored at block 2155 may be output. Block 2160 may include outputting a report, such as a night report, that includes the previous night's cough data. Thus, block 2160 may be performed after cough detection and attribution has ended for the night, such as in the morning when one or more users are no longer detected to be present in the bed. The indication of the cough may be included in a generated report indicating: the number of coughs of a particular user during the night; when the user coughed; how strong the user's coughs were; whether the coughs woke the user; etc. Such a report may be output in response to the user providing input to the cough detection and attribution device. The user may provide input via a touch screen or the user may speak a command requesting output of the night report (possibly along with a trigger word or phrase). In other embodiments, the night report may be automatically output at a particular time or when it is determined that the user is awake or getting up after a certain time of day (e.g., after 7 a.m.). Synthesized speech and/or text and/or graphics on a display of the cough detection and attribution device may be used to output the night report. If multiple users are being monitored, separate reports or separate data may be output for each user. A combined report of multiple users' data may also be output.
The night report data may be transmitted to, stored by, and/or analyzed by a cloud-based remote server system. In some embodiments, each cough indication of block 2155 may be transmitted to a cloud-based server system for storage and analysis. Alternatively, in some embodiments, data from the generated report may be transmitted to and stored by a cloud-based server system. The user may have the option of preventing any cough related data from being transmitted to the cloud-based server system. In some embodiments, the night report may be generated by a cloud-based server system and stored as mapped to the user account such that the report may be accessed via one or more other devices (e.g., smartphones) of the user having access to the user account.
The cloud-based server system or the cough detection and attribution device may also use the stored cough indications to generate long-term trend data. Such long-term trend data may indicate the monitored user's cough trends over a period of time, such as: a night, a week, weeks, a month, months, a year, years, etc. The long-term data may indicate: whether the monitored user's cough frequency increased, decreased, or remained substantially unchanged over the period of time; whether the monitored user's cough intensity increased, decreased, or remained substantially unchanged over the period of time; and whether the monitored user's cough duration increased, decreased, or remained substantially unchanged over the period of time. Long-term trend data may be maintained separately for each monitored user.
Similar to the night report data, the user may provide input via a touch screen, or the user may speak a command (along with a trigger word or phrase) requesting output of the long-term trend data. In other embodiments, the long-term trend data may be output at a particular time or when the user is determined to be awake after a defined time (e.g., after 7 a.m.). In some embodiments, the long-term trend data is output as part of a night report. In some embodiments, long-term trend data is output in response to a change in the long-term trend data being identified as present, such as an increase over time in the frequency at which the user coughs. The long-term trend data may be output using synthesized speech and/or text and/or graphics on a display of the cough detection and attribution device. If multiple users are being monitored, separate long-term trend data may be output for each user, or a combined long-term report may be generated.
Long-term trend data for night reports and/or coughs may be output along with sleep reports for one or more users. A single report may be output indicating the user's sleep data and cough data. For example, in the morning, the user may view a single report that includes the user's previous night sleep data and data about the user's cough. Long-term sleep and/or cough data may be incorporated as part of the report. Such reports may be stored using a cloud-based server system mapped to user accounts to allow users to access data from separate devices.
In some embodiments, one or more recommendations may be output if the cough frequency of the user is relatively high or increases. For example, if the cough attribution device (or another smart device in the vicinity) measures humidity using a humidity sensor, a recommendation to increase the humidity level of the user's sleeping room may be output if the measured humidity is below a threshold (or some other form of determination using a threshold criteria based at least in part on humidity) at night when the user is prone to cough. Another suggestion may be that the user seek medical assistance, such as in response to a prolonged cough exacerbation.
To perform sleep tracking, cough detection and attribution, and/or other forms of health monitoring or tracking, a setup procedure may be performed to help ensure that the user has properly located the device and that the surrounding environment is configured so as to allow the device to operate properly. Without performing the setup procedure, the sleep tracking device may be less likely to be correctly aimed relative to the direction in which the user sleeps, to be located at an acceptable distance from the user, and/or to be free of moving objects in the vicinity of the user. Fig. 22 illustrates an embodiment of a sleep tracking system 2200 that performs a sleep setup process. It should be appreciated that similar setup procedures may be performed for a cough attribution device or other form of health monitoring or health tracking device. Sleep tracking system 2200 may represent an embodiment of system 200A of fig. 2A. The sleep tracking system 2200 may be incorporated as part of the contactless sleep tracking device 300 or some other stand-alone, contactless health tracking or monitoring device. Sleep tracking system 2200 may also be used to perform a setup procedure prior to performing cough detection and attribution. Some components of radar processing module 210 may be active before sleep tracking or cough detection and attribution is set up. Radar processing module 2210 represents a subset of the components of radar processing module 210 that may be used to perform the setup process. The moving filter 211, frequency multiplier 212, and range-vital sign transformation engine 213 may function as detailed with respect to system 200A. The training module 2220 may use the output from the range-vital sign transformation engine 213 of the radar processing module 2210.
Similar to radar processing module 2210, training module 2220 may be implemented as software executed using one or more general purpose processors. In other embodiments, dedicated hardware may be used to perform the functions of the components of training module 2220. In some embodiments, the training module 2220 may be active before the sleep tracking setup process is successfully completed. In such embodiments, once setup is completed, training module 2220 is disabled and system 2200 may be used as system 200A, system 200B, embodiment 1400, embodiment 1600, or system 1800. Alternatively, the system or user may reinitiate the setup process at some time after a successful setup process, such as if the sleep tracking device has difficulty detecting the sleeping user, if the device is relocated, periodically, or at some other time in the future.
The training module 2220 may include a classifier 2221, a consistency monitor 2222, and a communication output engine 2223. The classifier may receive the output of the range-vital sign transformation engine 213. The radar subsystem 205 and radar processing module 2210 may operate continuously, regardless of whether a sleep tracking setup procedure has been performed. The training module 2220 may be activated when the user provides an input indicating that a sleep tracking setup procedure is to be performed. When the training module 2220 is activated, the classifier 2221 may begin outputting a classification based on data received from the radar processing module 2210, such as output from the range-vital sign transformation engine 213.
As detailed previously, the range-vital sign transformation engine 213 analyzes the received motion-filtered waveform data to identify and quantify the frequency, range, and amplitude of movement over time. The classifier 2221 receives as its input processed waveform data indicative of the magnitudes of the different frequencies observed at the respective distances.
Before performing classification, classifier 2221 may discard waveform data indicating movement too close and/or too far from system 2200. In some embodiments, frequencies detected at distances less than 0.25m or at distances greater than 1m are discarded. In other embodiments, the minimum and maximum range distances may vary. For example, the minimum distance may be between 0.1 and 0.5m and/or the maximum distance may be between 0.7 and 1.5 m.
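For illustration only, the following sketch shows how waveform data could be gated by range before classification; the array layout (range bins by frequency bins) and the default bounds are assumptions based on the example distances above.

```python
import numpy as np

# Illustrative sketch: keep only range bins inside the allowed monitoring window.
def gate_by_range(range_freq_magnitudes: np.ndarray,
                  bin_ranges_m: np.ndarray,
                  min_range_m: float = 0.25,
                  max_range_m: float = 1.0) -> np.ndarray:
    """range_freq_magnitudes: (num_range_bins, num_freq_bins); bin_ranges_m: (num_range_bins,)."""
    keep = (bin_ranges_m >= min_range_m) & (bin_ranges_m <= max_range_m)
    return range_freq_magnitudes[keep, :]
```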
Classifier 2221 may analyze the waveform data in chunks over time. That is, after discarding waveform data corresponding to movement that is too close or too far, the data from the range-vital sign transformation engine 213 may be aggregated or summed over a period of time, such as two seconds. In other embodiments, shorter or longer durations are used to create chunks of data, such as chunks having a duration of 0.5s to 5 s. Classifier 2221 may analyze the chunks in 1 s steps (a step being the time difference from the beginning of one chunk to the beginning of the next), so that there may be some amount of overlap, such as 50%, between chunks. In other embodiments, the stride may be larger or smaller, such as between 0.5s and 5s, which varies the amount of overlap.
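For illustration only, a minimal sketch of such overlapping chunking is shown below; the frame-based layout is an assumption, while the two-second chunk and one-second stride mirror the example values above.

```python
import numpy as np

# Illustrative sketch: aggregate per-frame waveform features into overlapping chunks.
def chunk_frames(frames: np.ndarray, frame_rate_hz: float,
                 chunk_s: float = 2.0, stride_s: float = 1.0):
    """frames: (num_frames, ...) array; yields each chunk summed over its frames."""
    chunk_len = int(round(chunk_s * frame_rate_hz))
    stride = int(round(stride_s * frame_rate_hz))
    for start in range(0, len(frames) - chunk_len + 1, stride):
        yield frames[start:start + chunk_len].sum(axis=0)
```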
Classifier 2221 may include a machine learning model, such as a trained neural network. The machine learning model receives each summed data chunk (which includes frequency, amplitude, and range data) and outputs a classification selected from a plurality of possible classifications. In some embodiments, classifier 2221 outputs one of three possible classifications. The classification states may indicate: 1) no user present; 2) excessive movement; and 3) static user presence. The classification of "no user present" corresponds to no user being detected. This classification may indicate that the user is outside of the allowed range, that the user is not present in the environment, or that the device comprising system 2200 has its radar subsystem aimed away from the user. The classification of "excessive movement" may indicate that the user is not lying calmly (e.g., the user is rolling or otherwise moving in bed) and/or that one or more other objects are present and moving in the monitored area. Such objects may be fans, clocks (e.g., clocks that include a pendulum), moving water, moving fabric (e.g., curtains moving due to airflow), plants (e.g., leaves rustling due to airflow), or some other type of moving object. The classification of "static user presence" may indicate that a user is detected who is not moving. While not moving, the user may be stationary but still exhibit vital signs, such as slight movements due to the user's breathing and the user's heartbeat. Slight muscle movements (e.g., twitching of fingers or arms, a deep sigh) can be tolerated by the machine learning model, which may still return a classification of "static user presence".
Classifier 2221 may include a pre-trained neural network model that analyzes two or three features received from range-vital sign transformation engine 213. Features may be selected from the group of frequency, amplitude, and distance. It should be appreciated that, in other embodiments, a lesser or greater number of features may be used to perform classification. In other embodiments, a fewer or greater number of classification states may be determined by classifier 2221. Furthermore, in other embodiments, different classification arrangements are used, including classification arrangements that use other forms of machine learning and non-machine-learning arrangements. The machine learning model may be trained using a set of ground-truth labeled features (e.g., frequency, amplitude, and/or range) that have been accurately mapped to the desired states for those features. For example, in a controlled environment, a subject may be monitored and have features that are appropriately classified by a sleep specialist based on whether the subject is stationary, moving, or absent.
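For illustration only, the sketch below trains a small neural-network classifier on ground-truth labeled chunks; the use of scikit-learn, the flattened feature layout, and the network size are assumptions and do not reflect the actual model described above.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

LABELS = ("no_user_present", "excessive_movement", "static_user_presence")

# Illustrative sketch: fit a classifier to labeled feature vectors derived from chunks.
def train_presence_classifier(chunk_features: np.ndarray,
                              chunk_labels: np.ndarray) -> MLPClassifier:
    """chunk_features: (num_chunks, num_features); chunk_labels: strings from LABELS."""
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    model.fit(chunk_features, chunk_labels)
    return model
```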
In some embodiments, classifier 2221 may use a process other than a machine learning model. For example, classifier 2221 may determine whether respiration is detected while little, if any, other movement is detected. Thus, if it is determined, based on frequency and amplitude data received from the range-vital sign transformation engine 213, that there is a frequency corresponding to between 10 and 60 breaths per minute (or some other range for a particular age indicated in Table 1) and that no other significant amount of movement is observed (other than, possibly, movement due to heartbeat), a classification of "static user presence" may be determined and output. If amplitudes above a defined threshold (or some other form of determination using a threshold criterion based at least in part on amplitude) are observed at multiple frequencies, a determination of "excessive movement" may be output. If no amplitude exceeding the defined threshold is detected, a determination of "no user present" may be output. In other embodiments, rather than using respiration, another vital sign, such as the user's heartbeat, is detected and used to determine the classification. Breathing may be preferable because the user's chest moves more due to breathing than due to the heartbeat.
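For illustration only, the following sketch implements a heuristic of this kind; the respiration band in breaths per minute, the amplitude threshold, and the rule for how many active frequency bins count as "excessive" are all assumptions.

```python
import numpy as np

# Illustrative sketch: rule-based fallback classification for one chunk of data.
def classify_chunk(freqs_bpm: np.ndarray, amplitudes: np.ndarray,
                   resp_band=(10.0, 60.0), amp_threshold: float = 1.0,
                   max_active_bins: int = 3) -> str:
    """freqs_bpm: movement frequencies in cycles per minute; amplitudes: same shape."""
    active = amplitudes > amp_threshold
    if not active.any():
        return "no_user_present"
    in_band = (freqs_bpm >= resp_band[0]) & (freqs_bpm <= resp_band[1])
    if (active & in_band).any() and active.sum() <= max_active_bins:
        return "static_user_presence"
    return "excessive_movement"
```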
The classifier 2221 may output a single classification at any given time when the classifier 2221 is active. In some embodiments, if the classification of "static user presence" has not been output by classifier 2221 within a defined time limit (or according to some other form of time-based criteria), such as between five and 20 seconds, after the sleep tracking setup process has been started, the setup process is not successfully completed. In this case, the communication output engine 2223 may provide feedback to the user that the setup of sleep tracking has failed, and possibly provide advice on how to improve the chance of a future setup attempt succeeding. If classifier 2221 has identified a classification of "excessive movement," it may be recommended that the user attempt to remove extraneous movement from the environment, such as by removing moving objects or by the user avoiding moving themselves. If a "no user present" classification was output by classifier 2221 during the failed setup process, the user may be reminded of the distance from radar subsystem 205 at which the user should be located and/or how radar subsystem 205 should be aimed with respect to where the user sleeps.
If classifier 2221 does output a classification of "static user presence" before expiration of the time period, this may be used as an indication that the user is properly lying in bed, is being detected, and that the user's environment is sufficiently still for proper sleep, cough, or health monitoring and tracking. This initial classification of "static user presence" may act as a trigger to initiate a consistency check performed by the consistency monitor 2222. The purpose of the consistency monitor 2222 may be to ensure that the user is properly detected as "static user presence" while lying in bed for a sufficient portion of the time, so that future monitoring of the user while sleeping is likely to result in usable vital statistics and/or health monitoring data. For example, while the classifier 2221 may have initially observed "static user presence," the classification may be only transient, such as due to a moving window covering that has temporarily and substantially stopped moving. In this case, despite the temporary classification of "static user presence," excessive movement may be detected once airflow resumes, which would negatively affect accurate monitoring of the user.
Over a period of time, such as five two-second chunks, the consistency monitor 2222 may determine whether classifier 2221 has output "static user presence" for a sufficient portion of that period. For example, if the time period is 10 seconds, then within the 10-second window classifier 2221 may need to output "static user presence" for seven seconds, for some number of chunks, or for some other threshold portion of the time period (or some other form of determination may be performed that uses a threshold criterion based at least in part on the amount of time that classifier 2221 outputs a particular state classification).
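For illustration only, a minimal sketch of such a windowed consistency check follows; the ten-second window and seven-second requirement are the example values given above, and the per-second bookkeeping is an assumption.

```python
from collections import deque
from typing import Deque

# Illustrative sketch: keep the most recent per-second classifications and check
# whether "static_user_presence" covers enough of the window.
recent_classifications: Deque[str] = deque(maxlen=10)  # e.g., the last 10 one-second outputs

def window_is_consistent(classifications: Deque[str], required_seconds: int = 7) -> bool:
    return sum(1 for c in classifications if c == "static_user_presence") >= required_seconds
```

Appending each new per-second classification to the deque and calling the check once the deque is full approximates the window-based determination described above.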
If the consistency monitor 2222 determines that the classifier 2221 output "static user presence" for at least a threshold amount of time (or, again, using some other form of threshold criterion based at least in part on the amount of time in a given state), the communication output engine 2223 may indicate that sleep tracking setup has been successfully completed and that sleep tracking has now been properly set up and activated. A graphic (e.g., via display 140) and/or an audible output (e.g., via speaker 155) may be provided to the user indicating that setup has been successfully completed. Going forward, the user's sleep may be tracked each time the user is in bed. Such tracking may be automatically initiated based on detecting that the user is in bed. Upon successful completion of setup, training module 2220 may be deactivated and system 2200 may transition to function as systems 200A, 200B, 1000, and/or 1800.
If the consistency monitor 2222 determines that the classifier 2221 did not output "static user presence" for at least a threshold portion of the time period (or did not satisfy some other form of threshold criterion based on the amount of time in a given state), the communication output engine 2223 may indicate that sleep tracking setup was not successfully completed and that sleep tracking has not been activated. Since the user was previously identified by the classifier 2221 as being in bed and stationary, the failure at this point is likely due to the classifier 2221 outputting "excessive movement" for a significant period of time, such as due to the user turning over, moving, or other objects moving in the vicinity. Graphics (e.g., via display 140) and/or audible output (e.g., via speaker 155) indicating the setup failure may be provided to the user by communication output engine 2223. A recommendation may be output for the user to retry the sleep tracking setup, remain stationary in bed, and remove any moving objects from the environment.
In some embodiments, the consistency monitor 2222 may additionally monitor for changes in the detected distance to the user (e.g., based on the detected breathing). If the distance to the user is observed to change by more than a distance threshold (or some other form of determination using a threshold criterion based at least in part on distance), the consistency monitor 2222 may continue to monitor the user to see if the variance decreases over time. If the variance does not decrease before a defined time limit (e.g., 30 seconds) is reached, or before some other time-based criterion is satisfied, the sleep tracking setup process can fail. If the observed change in the user's distance is an acceptable amount, the setup process qualifies for successful completion.
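For illustration only, the sketch below checks whether the spread of detected user distances settles within a time limit; the five-second evaluation window, the variance threshold, and the sampling arrangement are assumptions, while the 30-second limit mirrors the example above.

```python
import statistics
from typing import Sequence

# Illustrative sketch: succeed once the variance of recent distance samples is small
# enough; fail if it never settles before the time limit.
def distance_settles(distance_samples_m: Sequence[float], sample_rate_hz: float,
                     variance_threshold: float = 0.01, time_limit_s: float = 30.0) -> bool:
    window = int(5 * sample_rate_hz)            # rolling 5 s evaluation window
    limit = int(time_limit_s * sample_rate_hz)  # stop evaluating after the time limit
    for end in range(window, min(len(distance_samples_m), limit) + 1):
        if statistics.pvariance(distance_samples_m[end - window:end]) <= variance_threshold:
            return True
    return False
```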
Fig. 23 illustrates an embodiment of an instructional user interface 2300 presented during a sleep setup process. The instructional user interface 2300 can be presented using a display screen of the device 300 (in which the system 2200 can be incorporated). The instructional user interface 2300 may be presented in response to a user providing input indicating that the user desires to perform a sleep or health monitoring setup process (e.g., making a selection on a touch screen, speaking a command). In some embodiments, "setup" may instead be referred to as "calibration" because the user is potentially moving the device, other objects, and/or their own sleep location in order to successfully complete the setup process. Graphic 2301 may be presented as part of instructional user interface 2300 to indicate the general positioning and orientation in which device 300 and the user's bed should be arranged. The user may be allowed to skip the additional instructions and go directly to setup via touch element 2302, or to proceed to the next instructional user interface via touch element 2303. The page indicator 2304 may indicate the number of instruction interfaces and the current instructional user interface being presented (in this example, the first of three) by the number of elements and which element is emphasized. The written instructions 2305 may indicate how the user should arrange the device with respect to the user's bed. Written instructions 2305 may also be output via synthesized speech when instructional user interface 2300 is presented.
Fig. 24 illustrates an embodiment of an instructional user interface 2400 presented during a sleep setup process. The instructional user interface 2400 can be presented using a display screen of the device 300 (in which the system 2200 can be incorporated). The instructional user interface 2400 can be presented after the user provides input (e.g., via the touch element 2303, via a voice command) to advance from the instructional user interface 2300 to the next instructional user interface. Graphic 2401 may be presented as part of instructional user interface 2400 to indicate, such as in greater detail than graphic 2301, the position and orientation in which the device 300, the user's bed, and the user's sleep position should be arranged relative to one another. The user may be allowed to skip the additional instructions and go directly to setup via touch element 2402, or the user may proceed to the next instructional user interface via touch element 2403. The page indicator 2404 may indicate the number of instruction interfaces and the current instructional user interface being presented (in this case, the second of three) by the number of elements and which element is emphasized. The written instructions 2405 may indicate how the user should position himself or herself in bed relative to the device and/or ensure that no object blocks the direct path from the user's chest to the device. Written instructions 2405 may also be output via synthesized speech when instructional user interface 2400 is presented.
Fig. 25 illustrates an embodiment of an instructional user interface 2500 presented during a sleep setup process. The user interface 2500 may be presented using a display screen of the device 300 (into which the system 2200 may be incorporated). The user interface 2500 may be presented after the user provides input (e.g., via the touch element 2403, via a voice command) to continue from the instructional user interface 2400 to the next instructional user interface. The user may be allowed to skip the additional instructions (and/or skip the setup process entirely) via touch element 2502 or continue to the setup measurements via touch element 2503. Notably, the user may be encouraged or required to initiate setup using a verbal command rather than touching the touch element 2503. The use of a verbal command such as "setup" may help keep the user stationary, except for breathing, during setup. That is, when such a verbal command is provided, the user does not need to move their arm and hand to provide touch input to trigger the start of the setup measurement. Page indicator 2504 can indicate the number of instruction interfaces and the instructional user interface currently presented (in this case, the third of three interfaces) by the number of elements and which element is emphasized. Written instructions 2505 may indicate that the user should be in bed, alone, and ready to begin setup. When the user interface 2500 is presented, written instructions 2505 can also be output via synthesized speech.
Fig. 26 illustrates an embodiment of a user interface 2600 presented during execution of a sleep setup process. The user interface 2600 may be presented using a display screen of the device 300 (in which the system 2200 may be incorporated). The user interface 2600 can be presented in response to a user triggering a setup process from the instructional user interface 2500, such as using a voice or touch command via the touch element 2503.
A written indication 2601 may be presented indicating that a contactless setup measurement is being performed. The user interface 2600 may include an indicator value 2602, the indicator value 2602 indicating how much (e.g., what percentage) of the setup process has been performed. In the example of user interface 2600, 25% is complete. The indicator value 2602 may be updated for each percentage point or at various rounded values, such as every 5%. Visually, an animation 2603 or some other animation may be presented to indicate to the user that the device is running and to provide a visually pleasing effect. The animation 2603 may change color over time. The animation 2603 may have a plurality of circular shapes, each of which has a circumference that fluctuates in a sinusoidal pattern over time. There may be a gradient of decreasing intensity from the perimeters of the plurality of circles toward the center of the animation 2603. Furthermore, there may be a second gradient of decreasing intensity away from the centers of the circles.
In some embodiments of user interface 2600, audio may be output when user interface 2600 is presented. The audio may be used to indicate to the user that the setup process is being performed, since it may be difficult for the user to see user interface 2600 while lying in bed. The audio may include music, such as relaxing instrumental music. When the music ends, an additional sound, such as a short musical sting, can be output, from which the user can infer that setup has been completed. Additionally or alternatively, synthesized speech indicating that setup is being performed may be output. When setup is complete, synthesized speech may be output indicating that setup is complete. The next user interface presented may depend on whether the setup process was completed successfully.
Fig. 27 illustrates an embodiment of a user interface 2700 presented after a successful setup process. The user interface 2700 may be presented using a display screen of the device 300. The user interface 2700 may indicate that the sleep tracking setup process (or some other health monitoring setup process) has completed successfully. If setup is completed successfully, user interface 2700 may be presented after user interface 2600. Specifically, when the consistency monitor 2222 has successfully completed the consistency check, the user interface 2700 may be output by the communication output engine 2223. Graphic 2701 may graphically indicate that the device is ready. Touch element 2702 may allow the user to advance to the next item to set up or to return to the home screen of the device. Notification 2703 may indicate that the device is now ready for sleep (and/or cough, and/or, more generally, health) tracking and/or provide one or more tips for obtaining good results. While the user interface 2700 is being presented, synthesized speech stating the content of notification 2703 can be output.
Fig. 28 illustrates an embodiment of a user interface 2800 presented after an unsuccessful sleep setup process. Thus, user interface 2800 may be presented after user interface 2600. User interface 2800 may be presented using a display screen of device 300. User interface 2800 may indicate that the sleep tracking setup (or other health tracking setup) process has not been completed successfully. User interface 2800 may indicate that classifier 2221 has detected a "no user present" state. Since a possible reason is that the user is too close to or too far from device 300, the user may receive a distance suggestion in instruction 2804, such as that the user should be less than "one arm's length" from the device. While user interface 2800 is being presented, synthesized speech stating the contents of instruction 2804 may be output. The graphical status indicator 2801 may indicate that the device requires additional input from the user. Touch element 2802 may allow the user to retry the setup process. Touch element 2803 may allow the user to view the instructions presented in instructional user interfaces 2300 through 2500.
Fig. 29 illustrates another embodiment of a user interface 2900 presented after an unsuccessful sleep setup process. Thus, user interface 2900 may be presented after user interface 2600. The user interface 2900 may be presented using a display screen of the device 300. The user interface 2900 may indicate that the sleep tracking setup process has not been completed successfully. The user interface 2900 may be presented when classifier 2221 has detected an "excessive movement" classification (and no "static user presence" classification has been output) or the consistency monitor 2222 has detected the "excessive movement" classification for more than a certain period of time. Since a possible reason is that the user is moving too much or another object nearby is moving, the user may receive advice in instruction 2904 on how to correct this situation, such as by lying still and removing moving objects from the general area. When the user interface 2900 is presented, synthesized speech stating the contents of instruction 2904 may be output. The graphical status indicator 2901 may indicate that the device requires additional input from the user. Touch element 2902 may allow the user to retry the setup process. Touch element 2903 may allow the user to view the instructions presented in instructional user interfaces 2300 through 2500.
For any of interfaces 2300 through 2900, synthesized speech corresponding to the presented text may be output. Thus, a user who is lying in bed can be made aware of the status of the sleep tracking setup process without having to physically move their head to view the display screen. The synthesized speech output may match, or may differ slightly from, the text presented on the display screen.
It should be appreciated that a fewer or greater number of elements may be presented for any of interfaces 2300 to 2900. Furthermore, elements may be rearranged or include different instructions based on how the device should be set.
The various methods may be performed using the system 2200 and the graphical user interfaces of figs. 23-29. Fig. 30 illustrates an embodiment of a method 3000 for performing an initial setup process for a sleep tracking device. However, method 3000 may also be used to perform setup for some other form of health monitoring or tracking device, such as a device for cough detection and attribution. Method 3000 may be performed using system 2200, and system 2200 may be implemented on system 100 and/or device 300. Method 3000 may be performed prior to the blocks of other methods detailed herein in order to facilitate the setup process before the user performs sleep, cough, or some other form of health monitoring or tracking.
At block 3005, the user may provide a request, such as via voice or touch input, indicating that the user desires to perform a sleep tracking setup process. In some embodiments, the device may graphically present an interface requesting that the user perform such a process and requesting that the user agree to proceed. The user may be required to provide input confirming that the user does desire to set up sleep tracking and that the user is willing to participate in the setup process. The user may be provided with the option to skip the setup process (but still enable sleep, cough, and health tracking). Such an option can be desirable when an expert, such as a user who has previously used the device or an installation professional, is using the device and does not need help calibrating the relative positions of the user, the device, and the user's sleeping position. The user may also be provided with the option to disable sleep, cough, and/or health tracking and forgo the setup process. If selected by the user, method 3000 ends after block 3005 and such features will be disabled.
After the user requests to perform sleep tracking setup, block 3010 may be performed. At block 3010, instructions may be output via a display screen and/or via synthesized speech indicating how the device should be positioned relative to where the user sleeps, how far the user should be from the device, and that the user should remove moving objects from the user's immediate environment.
At block 3015, radio waves are emitted by a radar subsystem of the system or device executing method 3000. Thus, nothing is in physical contact with the user in order to perform sleep tracking (or another form of health monitoring). In some embodiments, the transmission of radio waves may begin at block 3015; in other embodiments, the radio waves may already be output by the device regardless of whether the sleep tracking setup process has been initiated. The emitted radio waves may be continuous-wave radar, such as FMCW radar. The radio waves transmitted at block 3015 may be transmitted according to the FMCW radar scheme of fig. 2C. The radio waves are emitted by the RF transmitter 206 of the radar subsystem 205. At block 3020, reflections of the radio waves are received, such as by multiple antennas of RF receiver 207 of radar subsystem 205. The reflections received at block 3020 are reflected by both moving objects (e.g., a person with a heartbeat and breathing) and stationary objects. The output of the radar subsystem based on the received reflected radio waves may be processed as detailed with respect to the mobile filter 211, the frequency weighting device 212, and the range-vital sign transformation engine 213. The output of the range-vital sign transformation engine 213 may be indicative of the measured frequencies, the frequency magnitudes, and the distances at which those frequency magnitudes were measured.
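For illustration only, the following sketch shows generic FMCW range processing, in which an FFT of the dechirped (beat) signal maps beat frequency to range via the chirp slope; it is a textbook formulation and not this disclosure's specific processing pipeline.

```python
import numpy as np

# Illustrative sketch: convert one chirp's dechirped samples into (range, magnitude) pairs.
def beat_spectrum_to_ranges(beat_samples: np.ndarray, sample_rate_hz: float,
                            chirp_slope_hz_per_s: float):
    c = 3.0e8  # speed of light in m/s
    spectrum = np.fft.rfft(beat_samples * np.hanning(len(beat_samples)))
    beat_freqs_hz = np.fft.rfftfreq(len(beat_samples), d=1.0 / sample_rate_hz)
    ranges_m = beat_freqs_hz * c / (2.0 * chirp_slope_hz_per_s)  # R = f_beat * c / (2 * slope)
    return ranges_m, np.abs(spectrum)
```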
At block 3025, classification may be performed using a trained classifier based on the frequency, frequency-amplitude, and distance waveform data. The classification may be performed as detailed with respect to classifier 2221, such as using a machine learning model or by determining whether respiration is detected in the absence of other significant amounts of movement. The classifier may output one of three (or some other number of) possible classifications. The classification desired at this point in order to continue the setup process is that the user is present and static, which indicates that the user is properly lying in bed and is stationary except for movements due to the user's vital signs. The classification determined at block 3025 may be evaluated at block 3030.
At block 3030, if it is determined that the user is present and static based on the classification of block 3025, method 3000 may proceed to block 3035. If the user is not assessed as present and static at block 3030, the classification of block 3025 may continue to be performed for a period of time, such as until a time limit is reached or some other time-based criterion is satisfied. If at any point during the time period the user is classified as present and static, method 3000 may proceed to block 3035. If the classification of "user present and static" is not made at any point before the time limit is reached (or the time-based criterion is satisfied), block 3030 may be evaluated in the negative. If block 3030 is evaluated in the negative, method 3000 may proceed to block 3055.
At block 3055, an indication that the sleep tracking setup has failed may be output, possibly along with one or more suggestions that the user should follow when attempting setup again. If the primary classification output at block 3025 was that no user was detected ("no user present"), an indication may be output that the user should re-aim the device toward where the user sleeps and stay within the range of distances allowable for the device. If the primary classification output at block 3025 was that excessive movement was detected, an indication may be output that the user should attempt to reduce movement and/or remove objects moving in the vicinity of the user. As part of block 3055, the user may be invited to retry the sleep tracking setup process. If the user retries, method 3000 may return to block 3010.
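For illustration only, a minimal sketch mapping the dominant failure classification to a suggestion follows; the message wording is illustrative and merely paraphrases the guidance above.

```python
# Illustrative sketch: choose a retry suggestion based on the primary classification
# observed during the failed setup attempt.
def setup_failure_suggestion(primary_classification: str) -> str:
    if primary_classification == "no_user_present":
        return ("No user was detected. Aim the device toward where you sleep and stay "
                "within the supported distance range, then retry setup.")
    if primary_classification == "excessive_movement":
        return ("Too much movement was detected. Lie still and remove moving objects "
                "near the bed, then retry setup.")
    return "Setup was not completed. Please retry."
```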
If block 3030 is evaluated in the affirmative, method 3000 may proceed to block 3035. At block 3035, block 3025 may continue to be performed such that the current classification is determined and stored. Over a time window, it may be determined at block 3040 whether the classifications stored at block 3035 indicate that the user was present and static for at least a defined threshold amount of the time window (or satisfy some other time-based threshold criterion indicating that the user is sufficiently present and static). If not, method 3000 proceeds to block 3055. If the determination of block 3040 is affirmative, sleep tracking may be activated at block 3045. Sleep tracking and other health monitoring may then be performed automatically when the user is identified as being in bed (assuming the user has properly consented to such monitoring). Sleep tracking is more likely to capture useful data because the user has performed the sleep tracking setup process and ensured that the user sleeps at a correct distance from the device, that the device is aimed correctly, and that moving objects near where the user sleeps have been removed.
At block 3050, an indication may be output to the user indicating that sleep tracking has been successfully set up. This may include an audible message (e.g., synthesized speech) being output indicating success and/or a graphical user interface being presented indicating success of the setup.
The methods, systems, and devices discussed above are examples. Various configurations may omit, replace, or add various procedures or components as appropriate. For example, in alternative configurations, the methods may be performed in a different order than described, and/or stages may be added, omitted, and/or combined. Furthermore, features described with respect to certain configurations may be combined in various other configurations. The different aspects and elements of the configuration may be combined in a similar manner. Furthermore, technology is evolving and, as such, many elements are examples and do not limit the scope of the disclosure or claims.
Specific details are given in the description to provide a thorough understanding of example configurations (including implementations). However, the configuration may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configurations. The description provides example configurations only, and does not limit the scope, applicability, or configuration of the claims. Rather, the foregoing description of the configuration will provide those skilled in the art with an enabling description for implementing the described techniques. Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure.
Further, the configuration may be described as a process which is depicted as a flowchart or a block diagram. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. Further, the order of the operations may be rearranged. The process may have additional steps not included in the figures. Furthermore, examples of methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a non-transitory computer readable medium such as a storage medium. The processor may perform the described tasks.
Several example configurations have been described, and various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above elements may be components of a larger system, wherein other rules may take precedence over or otherwise modify the application of the present invention. Furthermore, steps may be taken before, during, or after consideration of the above elements.

Claims (102)

1. A contactless sleep tracking device, comprising:
a housing;
a first environmental sensor housed by the housing;
A non-contact sensor housed by the housing, the non-contact sensor remotely monitoring movement of a user;
a processing system housed by the housing, comprising one or more processors, the processing system receiving data from the first environmental sensor and the non-contact sensor, wherein the processing system is configured to:
determining that the user has entered a sleep state based on data received from the non-contact sensor;
determining a transition time for the user to transition from the sleep state to an awake state;
identifying an environmental event occurring within a period of the transition time based on data received from the first environmental sensor;
attributing the user waking up to the environmental event based on the environmental event occurring within the period of time of the transition time; and
an indication of the attributed environmental event is output as a cause of the user waking up.
2. The contactless sleep tracking device according to claim 1, wherein the contactless sensor uses a low power Continuous Wave (CW) radar.
3. The contactless sleep tracking device of claim 2, wherein:
The first environmental sensor is an ambient light sensor; and
wherein the processing system attributes the user waking up at least in part to an ambient light level based on processing the ambient light level detected by the ambient light sensor.
4. A contactless sleep tracking device according to claim 3, wherein the processing includes determining that the ambient light level has increased by at least a threshold amount.
5. The contactless sleep tracking device of claim 2, wherein:
the first environmental sensor is a microphone, wherein the processing system attributes the user waking up at least in part to sound captured by the microphone.
6. The contactless sleep tracking device of claim 2, further comprising:
a second environmental sensor; and
the processing system being configured to identify the environmental event includes the processing system being configured to:
comparing data received from the first environmental sensor to a first threshold; and
comparing data received from the second environmental sensor to a second threshold, wherein the processing system identifying the environmental event is further based on: data received from the second environmental sensor, comparing the data received from the first environmental sensor to the first threshold, and comparing the data received from the second environmental sensor to the second threshold.
7. The contactless sleep tracking device according to claim 2, wherein the first environmental sensor is a temperature sensor and the environmental event is a temperature change.
8. The contactless sleep tracking device of claim 2, wherein the contactless sensor outputs low power CW radar within 57-64 GHz with a peak EIRP of less than 20 dBm.
9. The contactless sleep tracking device of claim 2, further comprising:
a wireless network interface housed by the housing;
an electronic display housed by the housing;
a microphone housed by the housing;
a speaker housed by the housing; and
a bracket incorporated as part of the housing, wherein:
the processing system communicates with the wireless network interface, the electronic display screen, the microphone, and the speaker.
10. The contactless sleep tracking device of claim 9, wherein the processing system is further configured to:
receiving a voice-based query via the microphone;
outputting information of the voice-based query via the wireless network interface;
receiving data from a cloud-based server system via the wireless network interface; and
A response to the voice-based query is output via the speaker, wherein the response indicates the attributed environmental event as a cause of the user waking up.
11. The contactless sleep tracking device according to claim 9, wherein the processing system is further configured to output the indication of the attributed environmental event via the electronic display screen, the speaker using synthesized speech, or both.
12. A method for performing contactless sleep monitoring, the method comprising:
determining that the user has entered a sleep state based on data received from the non-contact sensor;
determining a transition time for the user to transition from the sleep state to an awake state;
identifying an environmental event occurring within a period of the transition time based on data received from a first environmental sensor;
attributing the user waking up to the environmental event based on the environmental event occurring within the period of time of the transition time; and
an indication of the attributed environmental event is output as a cause of the user waking up.
13. The method for performing contactless sleep monitoring of claim 12, wherein the data received from the contactless sensor is based on a low power Frequency Modulated Continuous Wave (FMCW) radar.
14. The method for performing contactless sleep monitoring of claim 12, wherein:
the first environmental sensor is an ambient light sensor; and
identifying the environmental event includes determining that the ambient light level has increased.
15. The method for performing contactless sleep monitoring of claim 12, wherein:
the first environmental sensor is an ambient light sensor; and
identifying the environmental event includes determining that the ambient light level has increased.
16. The method for performing contactless sleep monitoring of claim 12, wherein:
the first environmental sensor is a microphone; and
identifying the environmental event includes determining that the microphone has captured a sound sufficiently loud to wake the user.
17. The method for performing contactless sleep monitoring of claim 12, wherein:
the first environmental sensor is a temperature sensor; and
identifying the environmental event includes determining that a temperature change has been detected.
18. The method for performing contactless sleep monitoring of claim 12, further comprising:
receiving a voice-based query via a microphone; and
A response to the voice-based query is output via a speaker, wherein the response indicates the attributed environmental event as a cause of the user waking up.
19. The method for performing contactless sleep monitoring of claim 18, the method further comprising:
outputting information based on the voice-based query via a wireless network interface; and
data is received from a cloud-based server system via the wireless network interface.
20. The method for performing contactless sleep monitoring of claim 12, wherein determining the transition time, identifying the environmental event, attributing the user waking up to the environmental event, and outputting the indication of the attributed environmental event as the cause of the user waking are performed by a processing system of a contactless sleep tracking device, and wherein the first environmental sensor is part of the contactless sleep tracking device.
21. A non-contact sleep analysis apparatus for monitoring a plurality of users, the non-contact sleep analysis apparatus comprising:
a housing;
a radar sensor housed by the housing, the radar sensor using radio waves to monitor movement of an area;
A processing system housed by the housing, comprising one or more processors, the processing system receiving data from the radar sensor, wherein the processing system is configured to:
receiving data from the radar sensor;
performing clustering on the data received from the radar sensor, wherein the clustered data is indicative of a first cluster and a second cluster;
determining that there are two users in the area based on the clustering performed on the data received from the radar sensor;
responsive to determining that there are two users, calculating a midpoint location between the first cluster and the second cluster;
mapping a first portion of the data from the radar sensor to a first user based on the calculated mid-point;
mapping a second portion of the data from the radar sensor to a second user based on the calculated mid-point;
performing separate sleep analysis of the first portion of the data for the first user and the second portion of the data for the second user over a period of time; and
outputting sleep information of the first user during the time period based on the first portion of data and outputting sleep information of the second user during the time period based on the second portion of data.
22. The contactless sleep analysis device for monitoring a plurality of users according to claim 21, wherein the processing system is further configured to:
receiving additional data from the radar sensor;
after determining that there are two users and calculating the midpoint location, performing clustering on the additional data received from the radar sensor, wherein the clustered data is indicative of a single cluster; and
based on the clustering performed on the additional data received from the radar sensor, it is determined that only a single user is present.
23. The contactless sleep analysis device for monitoring a plurality of users according to claim 22, wherein the processing system is further configured to:
determining which of the first user and the second user is the single user based on the location of the single cluster relative to the calculated midpoint.
24. The contactless sleep analysis device for monitoring a plurality of users according to claim 21, wherein the processing system is further configured to:
converting the data received from the radar sensor into fewer dimensions, wherein the data received from the radar sensor is multi-dimensional, wherein clustering is performed on the converted data.
25. The contactless sleep analysis device for monitoring a plurality of users of claim 21, wherein the processing system being configured to perform separate sleep analysis on the first portion of the data for the first user and the second portion of the data for the second user over the period of time comprises the processing system being configured to:
determining that the first user has entered a sleep state at a first time; and
determining that the second user has entered the sleep state at a second time.
26. The non-contact sleep analysis apparatus for monitoring a plurality of users according to claim 21, wherein the radar sensor uses a low power Frequency Modulated Continuous Wave (FMCW) radar.
27. The non-contact sleep analysis apparatus for monitoring a plurality of users according to claim 21, further comprising a first environmental sensor housed by the housing.
28. The contactless sleep analysis device for monitoring a plurality of users according to claim 27, wherein the processing system is further configured to:
determining a transition time for the first user to transition from a sleep state to an awake state;
Identifying an environmental event occurring within a period of the transition time based on data received from the first environmental sensor; and
attributing the first user waking up to the environmental event based on the environmental event occurring within the period of time of the transition time.
29. The contactless sleep analysis device for monitoring a plurality of users according to claim 28, wherein the processing system is further configured to output an indication of the attributed environmental event mapped to the first user.
30. The contactless sleep analysis apparatus for monitoring a plurality of users according to claim 29, wherein:
the first environmental sensor is an ambient light sensor; and
the processing system being configured to identify the environmental event includes the processing system being configured to determine that the ambient light level has increased by at least a threshold amount.
31. The contactless sleep analysis apparatus for monitoring a plurality of users according to claim 29, wherein:
the first environmental sensor is a microphone; and
the processing system being configured to identify the environmental event includes the processing system being configured to determine that sound has been detected.
32. The contactless sleep analysis apparatus for monitoring a plurality of users according to claim 21, further comprising:
a wireless network interface housed by the housing;
a display screen housed by the housing;
a microphone housed by the housing;
a speaker housed by the housing; and
a bracket incorporated as part of the housing, wherein:
the processing system communicates with the wireless network interface, the display screen, the microphone, and the speaker.
33. The contactless sleep analysis device for monitoring a plurality of users according to claim 32, wherein the processing system is further configured to:
receiving a voice-based query via the microphone;
outputting information based on the voice-based query via the wireless network interface;
receiving data from a cloud-based server system via the wireless network interface; and
a response to the voice-based query is output via the speaker.
34. A method for contactless sleep monitoring of a plurality of users, the method comprising:
receiving a radar data stream based on radio waves transmitted into an area;
Performing clustering on the radar data streams, wherein the clustered data is indicative of a first cluster and a second cluster;
determining that there are two users within the area based on the clustering performed on the radar data streams;
responsive to determining that there are two users, calculating a midpoint location between the first cluster and the second cluster;
mapping a first portion of the radar data stream to a first user based on the calculated mid-point;
mapping a second portion of the radar data stream to a second user based on the calculated mid-point;
performing separate sleep analysis of the first portion of the data for the first user and the second portion of the data for the second user over a period of time; and
outputting sleep information of the first user during the time period based on the first portion of data and outputting sleep information of the second user during the time period based on the second portion of data.
35. The method for contactless sleep monitoring of a plurality of users according to claim 34, the method further comprising:
receiving additional data as part of the radar data stream;
After determining that there are two users and calculating the midpoint location, performing clustering on the received additional data of the radar data stream, wherein the clustered data is indicative of a single cluster; and
based on the clustering performed on the additional data received as part of the radar data stream, it is determined that only a single user is present.
36. The method for contactless sleep monitoring of a plurality of users of claim 35, wherein determining which of the first user and the second user is the single user is based on a location of the single cluster relative to the calculated midpoint.
37. The method for contactless sleep monitoring of a plurality of users of claim 34, wherein the method further comprises converting the radar data stream into a single dimension, wherein the radar data stream is multi-dimensional, wherein the clustering is performed on the converted data.
38. The method for contactless sleep monitoring of a plurality of users of claim 34, wherein the radar data stream is output by a radar Integrated Circuit (IC) and the radar data stream is based on a low power Frequency Modulated Continuous Wave (FMCW) radar output by the radar IC.
39. The method for contactless sleep monitoring of a plurality of users of claim 34, wherein performing separate sleep analysis on the first portion of the data for the first user and the second portion of the data for the second user for the period of time comprises:
determining that the first user has entered a sleep state at a first time; and
determining that the second user has entered the sleep state at a second time.
40. The method for contactless sleep monitoring of a plurality of users according to claim 34, further comprising:
determining a transition time for the first user to transition from a sleep state to an awake state;
identifying an environmental event occurring within a time period of the transition time; and
attributing the first user waking up to the environmental event based on the environmental event occurring within the period of time of the transition time.
41. A smart home device comprising:
a housing;
an electronic display housed by the housing;
a radar system housed by the housing, the radar system monitoring movement within a target area using millimeter-wave radio waves within a 57GHz-64GHz frequency spectrum, the target area being large enough to encompass an area of a plurality of user beds, wherein instantaneous Effective Isotropic Radiated Power (EIRP) emitted by the radar system never exceeds 20dBm;
a processing system housed by the housing, comprising one or more processors, the processing system receiving radar data from the radar system and outputting information to the electronic display for presentation, wherein the processing system is configured to:
processing the radar data to determine that there are two users based solely on the radar data without requiring information derived from other non-radar sensors or user inputs;
processing the radar data to determine a heart rate and a respiration rate of each of the two users based solely on the radar data and without the need for information derived from other non-radar sensors; and
sleep information for each of the two users is displayed on the electronic display based on the determined heart rate and the determined respiration rate.
42. A non-contact cough detection device, comprising:
a housing;
a microphone housed by the housing;
a radar sensor housed by the housing; and
a processing system housed by the housing, comprising one or more processors, the processing system receiving data from the microphone and the radar sensor, wherein the processing system is configured to:
Receiving audio data from the microphone;
detecting that a cough has occurred based on the received audio data;
receiving radar data indicating reflected radio waves from the radar sensor;
performing a state analysis process using the received radar data; and
the detected cough is attributed to a particular user based at least in part on the state analysis process performed using the received radar data.
43. The contactless cough detection device of claim 42, wherein the processing system detects that the cough has occurred by analyzing the received audio data from the microphone using a pre-trained cough detection machine learning model.
44. The contactless cough detection device of claim 42, wherein the processing system is further configured to delete the audio data received from the microphone after detecting that the cough has occurred.
45. The contactless cough detection device of claim 42, wherein the processing system being configured to attribute the detected cough to the particular user includes the processing system being configured to determine that only the user being monitored caused the detected cough.
46. The non-contact cough detection device of claim 45, wherein the processing system being configured to perform the state analysis process includes the processing system being configured to determine that the particular user has moved in bed within a period of time of the detected cough.
47. The contactless cough detection device of claim 42, wherein the processing system attributing the detected cough to the particular user includes the processing system determining that the particular user of the monitored plurality of users resulted in the detected cough.
48. The contactless cough detection device of claim 47, wherein the processing system being configured to perform the state analysis process includes the processing system being configured to determine that the particular user has moved more in bed than other users of the plurality of users within a period of time of the detected cough.
49. The contactless cough detection device of claim 42, wherein the processing system is further configured to cause sleep data to be stored, the sleep data indicating that the cough is due to the particular user.
50. The non-contact cough detection device of claim 49, further comprising:
a wireless network interface housed by the housing and in communication with the processing system; and
a speaker housed by the housing and in communication with the processing system, wherein the processing system is further configured to:
receiving a spoken command via the microphone;
outputting data based on the spoken command to a cloud-based server system via the wireless network interface;
receiving instructions from the cloud-based server system via the wireless network interface in response to the output data; and
outputting stored cough data in response to the instructions.
51. The contactless cough detection device of claim 42, wherein the processing system is further configured to output a report indicating the number of times the particular user coughed during sleep.
52. The non-contact cough detection device of claim 42, further comprising an electronic display in communication with the processing system, the electronic display outputting the report for presentation.
53. The contactless cough detection device of claim 42, wherein the processing system is further configured to create a trend report over a plurality of days indicating whether the particular user's cough level is increasing, decreasing, or remaining unchanged.
54. The non-contact cough detection device of claim 42, wherein:
the radar sensor is a different Integrated Circuit (IC) than the processing system; and
the radar sensor outputs a Frequency Modulated Continuous Wave (FMCW) radar into an environment of the non-contact cough detection device.
55. The contactless cough detection device of claim 54, wherein the FMCW radar has a frequency between 57 and 64GHz and has a peak Effective Isotropic Radiated Power (EIRP) of 20dBm or less.
56. A method for performing contactless cough detection, the method comprising:
receiving an audio data stream;
detecting that a cough has occurred based on the received audio data stream;
receiving a radar data stream;
performing a state analysis process using the received radar data; and
attributing the detected cough to a particular user based on the state analysis process performed using the received radar data.
57. The method for performing contactless cough detection of claim 56, wherein detecting that the cough has occurred is performed by analyzing a received audio data stream using a pre-trained cough detection machine learning model.
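For illustration only (not part of the claims): claim 57 recites a pre-trained cough detection machine learning model without specifying it; a minimal sketch of the shape such an audio pipeline often takes (frame the stream, compute a spectral feature, score with a binary classifier) is shown below. All names, the feature choice, and the `DummyModel` placeholder are assumptions, not the claimed model.

```python
# Hypothetical audio cough-detection pipeline shape.
import numpy as np

def frame_audio(samples, frame_len=16000, hop=8000):
    """Split a 1-D audio array into overlapping frames."""
    return np.array([samples[s:s + frame_len]
                     for s in range(0, len(samples) - frame_len + 1, hop)])

def log_spectrum(frame):
    """Very small stand-in for a log-mel feature extractor."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return np.log1p(spectrum)

class DummyModel:
    """Placeholder for the pre-trained cough classifier (assumed API)."""
    def predict(self, feats):
        # A real model would score each frame; here we return zeros.
        return np.zeros(len(feats))

def detect_cough(samples, model):
    """Return True if any frame's classifier score exceeds a threshold."""
    feats = np.array([log_spectrum(f) for f in frame_audio(samples)])
    scores = model.predict(feats)
    return bool(np.any(scores > 0.5))

print(detect_cough(np.zeros(48000), DummyModel()))   # -> False
```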
58. The method for performing contactless cough detection of claim 56, the method further comprising: deleting the received audio data stream after detecting that the cough has occurred.
59. The method for performing contactless cough detection of claim 56, wherein performing the state analysis process includes determining that the particular user has moved in bed within a period of time of the detected cough.
60. The method for performing contactless cough detection of claim 56, wherein:
performing the state analysis process includes determining that the particular user has moved more in the bed than one or more other users within a period of time of the detected cough; and
attributing the detected cough to the particular user includes determining that the particular user caused the detected cough.
61. The method for performing contactless cough detection according to claim 56, wherein a processing system of a contactless cough detection device receives the audio data stream from a microphone and the radar data stream from a radar Integrated Circuit (IC) of the contactless cough detection device.
62. The method for performing contactless cough detection of claim 56, the method further comprising:
receiving a spoken command via a microphone;
outputting data based on the spoken command to a cloud-based server system via a wireless network interface;
receiving instructions from the cloud-based server system via the wireless network interface in response to the output data; and
outputting the stored cough data via the electronic display in response to the received instructions.
63. A contactless sleep analysis device, the contactless sleep analysis device comprising:
a housing;
a radar sensor housed by the housing, the radar sensor comprising a plurality of antennas and monitoring movement using radio waves;
a processing system housed by the housing, comprising one or more processors, the processing system receiving data from the radar sensor, wherein the processing system is configured to:
receiving a plurality of digital radar data streams, wherein each digital stream of the plurality of digital radar data streams is based on radio waves received by one of the plurality of antennas of the radar sensor;
performing a direction optimization process to determine a first weight and a second weight, wherein the direction optimization process targets an area of a bed in which a user sleeps;
applying the first weight to a first digital radar data stream of the plurality of digital radar data streams;
applying the second weight to a second digital radar data stream of the plurality of digital radar data streams;
combining the weighted first digital radar data stream and the weighted second digital radar data stream to create a first directionally targeted radar data stream;
performing a sleep analysis based on the first directionally targeted radar data stream; and
outputting sleep data of the user based on the performed sleep analysis.
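For illustration only (not part of the claims): the weight-apply-and-combine elements of claim 63 correspond to conventional receive beamforming. A minimal sketch of that step is below; the function name, data shapes, and example weights are assumptions.

```python
# Hypothetical receive-beamforming step: apply a complex weight to each
# antenna's digital radar stream and sum them to form one directionally
# targeted stream.
import numpy as np

def steer(streams, weights):
    """streams: (num_antennas, num_samples) complex array.
    weights: (num_antennas,) complex weights; a phase in a weight acts as
    a per-antenna delay, steering the combined beam."""
    streams = np.asarray(streams)
    weights = np.asarray(weights).reshape(-1, 1)
    return np.sum(weights * streams, axis=0)

# Two-antenna example: weight the second stream with a phase offset.
rng = np.random.default_rng(1)
streams = rng.standard_normal((2, 256)) + 1j * rng.standard_normal((2, 256))
weights = np.array([1.0, np.exp(-1j * np.pi / 4)])
beam = steer(streams, weights)
print(beam.shape)   # (256,)
```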
64. The contactless sleep analysis device of claim 63, wherein the processing system is further configured to:
performing the direction optimization process to determine the first weight and the second weight by determining a direction in which the amount of detected movement is greatest.
65. The contactless sleep analysis device according to claim 64, wherein the processing system being configured to perform the direction optimization process includes the processing system being configured to perform a least squares optimization process based on various values selected for the first weight and the second weight.
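For illustration only (not part of the claims): claim 64 picks the direction with the greatest detected movement and claim 65 frames weight selection as a least-squares style optimization over candidate weights. The sketch below searches candidate steering weights for the direction with the most motion energy using a simple grid sweep; the candidate set, motion metric, and names are assumptions, not the claimed procedure.

```python
# Hypothetical direction-optimization sweep: try candidate inter-antenna
# phase weights and keep the set that yields the most motion energy in the
# combined stream (motion approximated here as frame-to-frame change).
import numpy as np

def motion_energy(beam_frames):
    """beam_frames: (num_frames, num_samples) complex beamformed frames."""
    diffs = np.diff(beam_frames, axis=0)
    return float(np.sum(np.abs(diffs) ** 2))

def optimize_direction(antenna_frames, num_candidates=32):
    """antenna_frames: (num_antennas, num_frames, num_samples) complex.
    Returns the complex weights giving the largest motion energy."""
    num_antennas = antenna_frames.shape[0]
    best_weights, best_score = None, -np.inf
    for phase in np.linspace(-np.pi, np.pi, num_candidates, endpoint=False):
        # A linear phase progression across the array steers the beam.
        weights = np.exp(-1j * phase * np.arange(num_antennas))
        beam = np.tensordot(weights, antenna_frames, axes=(0, 0))
        score = motion_energy(beam)
        if score > best_score:
            best_weights, best_score = weights, score
    return best_weights
```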
66. The contactless sleep analysis device of claim 64, wherein the direction optimization process determines only an optimized vertical direction.
67. The contactless sleep analysis device of claim 64, wherein the first weight, the second weight, or both are complex values that introduce a delay to the first digital data stream, the second digital data stream, or both.
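For illustration only (not part of the claims): claim 67's statement that a complex weight can introduce a delay follows from the narrowband identity that multiplying a carrier at frequency f by exp(-j*phi) shifts it in time by phi / (2*pi*f). A tiny numeric check with illustrative values:

```python
# Numeric check: a complex weight exp(-j*phi) delays a narrowband signal
# by phi / (2*pi*f). Values below are illustrative, not from the claims.
import numpy as np

f = 60e9                       # carrier frequency, Hz
phi = np.pi / 4                # phase of the complex weight, rad
delay = phi / (2 * np.pi * f)  # equivalent time delay, s
print(f"{delay * 1e12:.2f} ps")  # ~2.08 ps
```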
68. The contactless sleep analysis device according to claim 63, wherein:
the plurality of antennas includes at least three antennas;
the processing system is further configured to:
applying a third weight to the second one of the plurality of digital radar data streams;
applying a fourth weight to a third digital data stream of the plurality of digital radar data streams; and
combining the weighted third digital data stream and the weighted fourth digital data stream to create a second directionally targeted radar data stream, wherein the sleep analysis is further performed based on the second directionally targeted radar data stream.
69. The contactless sleep analysis device of claim 68, further comprising a display screen attached to the housing, wherein:
the contactless sleep analysis device is a bedside device;
the plurality of antennas are substantially parallel to the display screen;
the display screen is attached to the housing such that the display screen is arranged at a face-up angle for easy reading; and
the direction optimization process compensates for the face-up angle.
70. The contactless sleep analysis device according to claim 63, wherein:
the radar sensor outputs Frequency Modulated Continuous Wave (FMCW) radar into an environment of the contactless sleep analysis device; and
the FMCW radar has a frequency between 57 and 64 GHz and has a peak Effective Isotropic Radiated Power (EIRP) of 20 dBm or less.
71. The contactless sleep analysis device of claim 68, wherein the processing system is further configured to:
during the sleep analysis, initially processing the first directionally targeted radar data stream and the second directionally targeted radar data stream separately;
combining data obtained from the first directionally targeted radar data stream and the second directionally targeted radar data stream after the initial processing; and
the sleep analysis is completed using the combined data obtained from the first directionally targeted radar data stream and the second directionally targeted radar data stream.
72. The contactless sleep analysis device according to claim 71, wherein the first weight, second weight, third weight, and fourth weight compensate for the at least three antennas being arranged in an L-shape.
73. The contactless sleep analysis device according to claim 63, wherein the radar sensor outputs Frequency Modulated Continuous Wave (FMCW) radio waves.
74. The contactless sleep analysis device according to claim 63, further comprising:
a microphone housed by the housing;
a speaker housed by the housing; and
an electronic display housed by the housing, wherein the microphone, speaker, and electronic display are in communication with the processing system, and the processing system is further configured to:
outputting the sleep data via the electronic display in response to a verbal command received by the microphone; and
outputting synthetic speech regarding the sleep data via the speaker in response to the verbal command received by the microphone.
75. A method for performing targeted contactless sleep monitoring, the method comprising:
receiving a plurality of digital radar data streams, wherein each digital stream of the plurality of digital radar data streams is based on radio waves received by an antenna of a plurality of antennas of a radar sensor of a bedside-mounted contactless sleep analysis device;
performing a direction optimization process to determine a first weight and a second weight, wherein the direction optimization process targets an area of a bed in which a user sleeps;
applying the first weight to a first digital radar data stream of the plurality of digital radar data streams;
applying the second weight to a second digital radar data stream of the plurality of digital radar data streams;
combining the weighted first digital radar data stream and the weighted second digital radar data stream to create a first directionally targeted radar data stream;
performing a sleep analysis based on the first directionally targeted radar data stream; and
outputting sleep data of the user based on the performed sleep analysis.
76. The method for performing targeted non-contact sleep monitoring of claim 75, wherein the first weight, second weight, or both are complex values that introduce a delay to the first digital data stream, the second digital data stream, or both.
77. The method for performing targeted non-contact sleep monitoring of claim 75, further comprising:
performing the direction optimization process to determine the first weight and the second weight by determining a direction in which the amount of detected movement is greatest.
78. The method for performing targeted non-contact sleep monitoring of claim 77, wherein performing the directional optimization comprises performing a least squares optimization to obtain the first weight and the second weight.
79. The method for performing targeted non-contact sleep monitoring of claim 77, wherein the direction optimization process determines only an optimized vertical direction and a horizontal direction is fixed.
80. The method for performing targeted non-contact sleep monitoring of claim 75, further comprising:
applying a third weight to the second one of the plurality of digital radar data streams; and
applying a fourth weight to a third digital data stream of the plurality of digital radar data streams.
81. The method for performing targeted non-contact sleep monitoring of claim 80, the method further comprising:
combining the weighted third digital data stream and the weighted fourth digital data stream to create a second directionally targeted radar data stream, wherein the sleep analysis is further performed based on the second directionally targeted radar data stream.
82. The method for performing targeted non-contact sleep monitoring of claim 81, the method further comprising:
during the sleep analysis, initially processing the first directionally targeted radar data stream and the second directionally targeted radar data stream separately;
combining, after the initial processing, partially processed data obtained from the first directionally targeted radar data stream and the second directionally targeted radar data stream; and
the sleep analysis is completed using the combined data obtained from the first directionally targeted radar data stream and the second directionally targeted radar data stream.
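For illustration only (not part of the claims): claims 71 and 82 recite processing each directionally targeted stream separately at first and only then combining the partially processed data. A minimal sketch of that split, assuming a per-beam movement feature and a simple element-wise fusion (both choices are illustrative):

```python
# Hypothetical per-beam processing followed by fusion: extract a movement
# feature from each directed stream separately, then combine the partially
# processed results before completing the sleep analysis.
import numpy as np

def per_beam_features(beam):
    """beam: (num_frames, num_samples) complex; return one feature per frame."""
    return np.sum(np.abs(np.diff(beam, axis=0)) ** 2, axis=1)

def fuse(features_a, features_b):
    """Combine partially processed data from two beams (simple max here)."""
    return np.maximum(features_a, features_b)
```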
83. A contactless sleep tracking device, comprising:
a housing;
an electronic display housed by the housing;
a user interface housed by the housing;
a radar sensor housed by the housing; and
a processing system housed by the housing, comprising one or more processors, the processing system receiving data from the radar sensor and the user interface and outputting data to the electronic display screen for presentation, wherein the processing system is configured to:
receiving, via the user interface, a user input requesting to perform a sleep tracking setup procedure;
in response to the user input, performing a detection process based on data received from the radar sensor to determine whether a user is present and static;
in response to the detection process determining that the user is present and static, performing a consistency analysis over a period of time to evaluate a duration for which the user is present and static; and
based on the consistency analysis over the period of time, activating sleep tracking such that sleep of the user is tracked when the user is detected to be in bed via the radar sensor.
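For illustration only (not part of the claims): claims 83-85 describe a setup flow in which a classifier decides whether a user is present and static, and sleep tracking is activated only when that decision holds consistently for long enough. A minimal sketch of such a consistency gate, with the classifier assumed to exist and all names and thresholds hypothetical:

```python
# Hypothetical setup-time consistency gate: require the presence/static
# classifier to report "present and static" for a minimum fraction of a
# verification window before activating sleep tracking.
import numpy as np

def setup_succeeds(classifier_outputs, min_fraction=0.9):
    """classifier_outputs: boolean array, one entry per radar evaluation
    over the setup window; True means 'user present and static'."""
    outputs = np.asarray(classifier_outputs, dtype=bool)
    return outputs.size > 0 and outputs.mean() >= min_fraction

# Example: 60 evaluations over the setup period, a few noisy negatives.
outputs = np.ones(60, dtype=bool)
outputs[[5, 17]] = False
print(setup_succeeds(outputs))   # True -> activate sleep tracking
```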
84. The contactless sleep tracking device according to claim 83, wherein the detection process includes the processing system using a neural network to determine that the user is present and static.
85. The contactless sleep tracking device according to claim 84, wherein the consistency analysis includes determining that the neural network classifies the user as present and static for at least a defined duration.
86. The contactless sleep tracking device of claim 83, wherein the processing system is further configured to:
based on the consistency analysis, outputting, via the electronic display, an indication that sleep tracking setup has been successfully performed.
87. The contactless sleep tracking device of claim 83, wherein the processing system is further configured to:
in response to receiving the user input, outputting, via the electronic display screen, an indication that the user should lie in a sleeping posture in a bed.
88. The contactless sleep tracking device according to claim 83, wherein the processing system being configured to perform the detection process based on data received from the radar sensor to determine whether the user is present and static comprises: detecting respiration of the user based on data received from the radar sensor.
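For illustration only (not part of the claims): respiration detection from radar, as in claim 88, is commonly done in FMCW systems by tracking the phase of the range bin covering the chest and reading off the dominant low-frequency component. The sketch below assumes that phase series is already extracted; all names, the breathing band, and the synthetic data are illustrative.

```python
# Hypothetical respiration estimate: unwrap the phase of the range bin
# covering the user's chest and find the dominant frequency in a typical
# breathing band (~0.1-0.5 Hz).
import numpy as np

def breathing_rate_bpm(chest_bin_phase, frame_rate_hz):
    """chest_bin_phase: 1-D array of per-frame phase (radians) of the
    selected range bin. Returns estimated breaths per minute."""
    phase = np.unwrap(chest_bin_phase)
    phase = phase - np.mean(phase)
    spectrum = np.abs(np.fft.rfft(phase))
    freqs = np.fft.rfftfreq(len(phase), d=1.0 / frame_rate_hz)
    band = (freqs >= 0.1) & (freqs <= 0.5)
    if not np.any(band):
        return 0.0
    peak = freqs[band][np.argmax(spectrum[band])]
    return float(peak * 60.0)

# Synthetic check: 0.25 Hz breathing sampled at 20 frames per second.
t = np.arange(0, 60, 1 / 20)
phase = 0.8 * np.sin(2 * np.pi * 0.25 * t)
print(breathing_rate_bpm(phase, 20))   # ~15 breaths per minute
```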
89. The contactless sleep tracking device according to claim 83, wherein the user interface is a microphone and the user speaks a command requesting execution of the sleep tracking setup procedure.
90. The contactless sleep tracking device according to claim 83, wherein the electronic display screen is a touch screen that serves as the user interface, wherein the user provides touch input indicating a request to perform the sleep tracking setup procedure.
91. The contactless sleep tracking device of claim 83, wherein the radar sensor is a Frequency Modulated Continuous Wave (FMCW) radar sensor implemented using a single Integrated Chip (IC) that emits radar having a frequency between 57 and 64 GHz and a peak Effective Isotropic Radiated Power (EIRP) of 20 dBm or less.
92. The contactless sleep tracking device of claim 83, wherein the processing system is further configured to:
receiving, via the user interface, a second user input requesting to perform the sleep tracking setup procedure;
in response to the second user input, performing a second detection process based on data received from the radar sensor to determine whether the user is present and static;
determining that there is excessive movement in response to the second detection process; and
in response to determining that the excessive movement is present, outputting a recommendation to eliminate nearby sources of movement in an environment of the contactless sleep tracking device, wherein the second user input occurs before the user input.
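For illustration only (not part of the claims): the failed-setup branch of claims 92-93 hinges on detecting excessive movement during the setup window. A minimal sketch of such a gate, with an entirely illustrative feature and threshold:

```python
# Hypothetical excessive-movement check used during a failed setup attempt.
import numpy as np

def too_much_motion(motion_energy_per_frame, threshold=5.0):
    """True if the median radar motion energy over the setup window exceeds
    an illustrative threshold, suggesting a nearby motion source (e.g., a
    fan) that should be removed before retrying setup."""
    return float(np.median(motion_energy_per_frame)) > threshold
```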
93. The contactless sleep tracking device of claim 92, wherein the processing system is further configured to:
in response to determining that the excessive movement exists, outputting an indication that sleep tracking has not been successfully set up.
94. A method for performing an initial setup procedure of a sleep tracking device, comprising:
receiving, via a user interface of the contactless sleep tracking device, a user input requesting to perform a sleep tracking setup procedure;
in response to the user input, performing, by the contactless sleep tracking device, a detection process based on data received from a radar sensor to determine whether a user is present and static;
in response to the detection process determining that the user is present and static, performing, by the sleep tracking device, a consistency analysis over a period of time to evaluate a duration for which the user is present and static; and
based on the consistency analysis, activating sleep tracking such that sleep of the user is tracked when the user is detected to be in bed via the radar sensor.
95. The method for performing the initial setup procedure of the sleep tracking device of claim 94, wherein the detection process comprises using a neural network classifier to determine that the user is present and static.
96. The method for performing the initial setup procedure of the sleep tracking device of claim 95, wherein the consistency analysis comprises determining that the neural network classifier classifies the user as present and static for the duration.
97. The method for performing the initial setup procedure of the sleep tracking device of claim 94, further comprising:
based on the consistency analysis, outputting an indication that sleep tracking setup has been successfully performed.
98. The method for performing the initial setup procedure of the sleep tracking device of claim 94, further comprising:
in response to receiving the user input, outputting an indication that the user should lie in a sleeping posture.
99. The method for performing the initial setup procedure of the sleep tracking device of claim 94, wherein performing the detection process to determine whether the user is present and static based on data received from the radar sensor comprises detecting respiration of the user based on data received from the radar sensor.
100. The method for performing the initial setup procedure of the sleep tracking device of claim 94, further comprising:
receiving, via the user interface, a second user input requesting to perform the sleep tracking setup procedure;
in response to the second user input, performing a second detection process based on data received from the radar sensor to determine whether the user is present and static;
determining that there is excessive movement in response to the second detection process; and
in response to determining that the excessive movement is present, outputting a suggestion to eliminate a source of movement in an environment of the contactless sleep tracking device, wherein the second user input occurs before the user input.
101. The method for performing the initial setup procedure of the sleep tracking device of claim 100, further comprising:
in response to determining that the excessive movement exists, outputting an indication that the sleep tracking has not been successfully set up.
102. The method for performing the initial setup procedure of the sleep tracking device of claim 94, wherein the radar sensor is a Frequency Modulated Continuous Wave (FMCW) radar sensor implemented using a single Integrated Chip (IC).
CN202180055343.6A 2020-08-11 2021-07-07 Non-contact sleep detection and disturbance attribution Pending CN116234496A (en)

Applications Claiming Priority (11)

Application Number Priority Date Filing Date Title
US16/990,726 2020-08-11
US16/990,714 US20220047209A1 (en) 2020-08-11 2020-08-11 Contactless sleep detection and disturbance attribution for multiple users
US16/990,705 2020-08-11
US16/990,720 US11406281B2 (en) 2020-08-11 2020-08-11 Contactless cough detection and attribution
US16/990,746 2020-08-11
US16/990,720 2020-08-11
US16/990,746 US11808839B2 (en) 2020-08-11 2020-08-11 Initializing sleep tracking on a contactless health tracking device
US16/990,726 US11754676B2 (en) 2020-08-11 2020-08-11 Precision sleep tracking using a contactless sleep tracking device
US16/990,705 US11832961B2 (en) 2020-08-11 2020-08-11 Contactless sleep detection and disturbance attribution
US16/990,714 2020-08-11
PCT/US2021/040643 WO2022035526A1 (en) 2020-08-11 2021-07-07 Contactless sleep detection and disturbance attribution

Publications (1)

Publication Number Publication Date
CN116234496A (en) 2023-06-06

Family

ID=77207236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180055343.6A Pending CN116234496A (en) 2020-08-11 2021-07-07 Non-contact sleep detection and disturbance attribution

Country Status (5)

Country Link
EP (1) EP4196000A1 (en)
JP (1) JP2023539060A (en)
KR (1) KR20230048342A (en)
CN (1) CN116234496A (en)
WO (1) WO2022035526A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115770045B (en) * 2022-11-18 2023-10-20 深圳市心流科技有限公司 Control method of sign detection device based on user state and terminal equipment
CN115868937A (en) * 2023-01-04 2023-03-31 北京百度网讯科技有限公司 Sleep monitoring method, device, equipment, system and storage medium
CN115862877B (en) * 2023-03-03 2023-05-05 安徽星辰智跃科技有限责任公司 Method, system and device for detecting, quantifying and assisting in intervention of sleep sustainability

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2563036A (en) * 2017-05-30 2018-12-05 Sultan & Knight Ltd Systems and methods for monitoring and modulating circadian rhythms

Also Published As

Publication number Publication date
EP4196000A1 (en) 2023-06-21
WO2022035526A1 (en) 2022-02-17
KR20230048342A (en) 2023-04-11
JP2023539060A (en) 2023-09-13

Similar Documents

Publication Publication Date Title
CN111629658B (en) Apparatus, system, and method for motion sensing
EP3727134B1 (en) Processor readable medium and corresponding method for health and medical sensing
CN116234496A (en) Non-contact sleep detection and disturbance attribution
US10410498B2 (en) Non-contact activity sensing network for elderly care
US11114206B2 (en) Vital signs with non-contact activity sensing network for elderly care
US20220047209A1 (en) Contactless sleep detection and disturbance attribution for multiple users
US20210398666A1 (en) Systems, apparatus and methods for acquisition, storage, and analysis of health and environmental data
US20220218224A1 (en) Sleep tracking and vital sign monitoring using low power radio waves
US11832961B2 (en) Contactless sleep detection and disturbance attribution
US11754676B2 (en) Precision sleep tracking using a contactless sleep tracking device
US11808839B2 (en) Initializing sleep tracking on a contactless health tracking device
US20230346265A1 (en) Contactless device for respiratory health monitoring
US11627890B2 (en) Contactless cough detection and attribution
US20230329574A1 (en) Smart home device using a single radar transmission mode for activity recognition of active users and vital sign monitoring of inactive users
KR102658390B1 (en) Devices, systems, and methods for health and medical sensing
WO2023234919A1 (en) Radar-based blood pressure measurement
KR20240053667A (en) Apparatus, system, and method for health and medical sensing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination