WO2021064557A1 - Systems and methods for adjusting electronic devices

Systems and methods for adjusting electronic devices

Info

Publication number
WO2021064557A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
sleep state
show
time
display device
Application number
PCT/IB2020/059067
Other languages
English (en)
Inventor
Michael John COSTELLO
Niall O'MAHONY
Kieran GRENNAN
Redmond Shouldice
Michael PINCZUK
Original Assignee
Resmed Sensor Technologies Limited
ResMed Pty Ltd
Application filed by Resmed Sensor Technologies Limited, ResMed Pty Ltd
Publication of WO2021064557A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42201Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] biosensors, e.g. heat sensor for presence detection, EEG sensors or any limb activity sensors worn by the user
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/4436Power management, e.g. shutting down unused components of the receiver
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4532Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4667Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02Alarms for ensuring the safety of persons
    • G08B21/06Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms

Definitions

  • the present disclosure relates generally to systems and methods for adjusting one or more electronic devices, and more particularly, to systems and methods for adjusting one or more electronic devices based at least in part on data associated with one or more sleep states.
  • data associated with a sleep state of a user is received from a sensor.
  • the received data associated with the sleep state of the user is analyzed. Based at least in part on the analysis, a first current sleep state of the user is determined. Responsive to the determination of the first current sleep state of the user, an operation of one or more electronic devices is caused to be modified.
  • a system includes a control system and a memory.
  • the control system includes one or more processors.
  • the memory has stored thereon machine readable instructions.
  • the control system is coupled to the memory, and any one of the methods disclosed herein is implemented when the machine-readable instructions in the memory are executed by at least one of the one or more processors of the control system.
  • a system for modifying an operation of one or more electronic devices includes a control system configured to implement any one of the methods disclosed herein.
  • a computer program product includes instructions which, when executed by a computer, cause the computer to carry out any one of the methods disclosed herein.
  • a system includes a sensor, a memory, and a control system.
  • the sensor is configured to generate data associated with a sleep state of a user.
  • the memory stores machine-readable instructions.
  • the control system is arranged to provide control signals to an electronic display device displaying a show.
  • the control system further includes one or more processors.
  • the one or more processors are configured to execute the machine-readable instructions to analyze the generated data associated with the sleep state of the user.
  • the generated data is associated with the sleep state of the user for a duration of time.
  • a first current sleep state of the user is determined.
  • a first time flag is generated.
  • the first time flag is indicative of a first location in the show being displayed on the electronic display device.
  • a volume of the electronic display device that is displaying the show is caused to be lowered.
  • a first time stamp is generated.
  • the first time stamp is indicative of a first point in time.
  • a second current sleep state of the user is determined.
  • a second time flag is generated. The second time flag is indicative of a second location in the show being displayed on the electronic display device.
  • the volume of the electronic display device that is displaying the show is caused to be shut off.
  • a second time stamp is generated. The second time stamp is indicative of a second point in time. Based at least in part on the analysis, a third current sleep state of the user is determined. Responsive to the third current sleep state of the user being awake, a prompt is caused to be displayed on the electronic display device.
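As an illustration of the sequence above (determine a sleep state, flag the current location in the show, lower and then shut off the volume, and prompt the user once awake), the following is a minimal Python sketch. The sensor reader, sleep-state classifier, and display interface (`read_sensor_data`, `classify_sleep_state`, `Display`) are hypothetical placeholders introduced for illustration, not components disclosed in this application; the drowsy/asleep/awake labels follow the variant described in the bullets that follow.

```python
import time

# Hypothetical placeholders, not part of the disclosure.
def read_sensor_data():
    """Return raw physiological/environmental data from the sensor(s)."""
    ...

def classify_sleep_state(data):
    """Return "awake", "drowsy", or "asleep" for the supplied data."""
    ...

class Display:
    """Stand-in for the electronic display device showing the show."""
    def current_position(self): ...   # location in the show, in seconds
    def lower_volume(self): ...
    def shut_off_volume(self): ...
    def show_prompt(self, text): ...

def monitor(display, poll_seconds=30):
    """Flag show locations and adjust volume as the user's sleep state changes."""
    time_flags, time_stamps = [], []
    while True:
        state = classify_sleep_state(read_sensor_data())
        if state == "drowsy" and not time_flags:
            time_flags.append(display.current_position())   # first time flag
            time_stamps.append(time.time())                  # first time stamp
            display.lower_volume()
        elif state == "asleep" and len(time_flags) == 1:
            time_flags.append(display.current_position())   # second time flag
            time_stamps.append(time.time())                  # second time stamp
            display.shut_off_volume()
        elif state == "awake" and len(time_flags) == 2:
            display.show_prompt(f"Resume the show from {time_flags[0]} seconds?")
            break
        time.sleep(poll_seconds)
```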
  • a system includes a sensor, a memory, and a control system.
  • the sensor is configured to generate data associated with a sleep state of a user.
  • the memory stores machine-readable instructions.
  • the control system is arranged to provide control signals to an electronic display device displaying a show.
  • the control system further includes one or more processors configured to execute the machine-readable instructions to analyze the generated data associated with the sleep state of the user. Based at least in part on the analysis, the user is determined to be drowsy at a first point in time. Responsive to the determination that at the first point in time the user is drowsy, a first time flag is generated.
  • the first time flag is indicative of a first location in the show being displayed on the electronic display device at about the first point in time.
  • the user is determined to be asleep at a second point in time after the first point in time. Responsive to the determination that at the second point in time the user is asleep, a second time flag is generated.
  • the second time flag is indicative of a second location in the show being displayed on the electronic display device at about the second point in time. Responsive to an input, a prompt is caused to be displayed on the electronic display device.
  • a system includes a sensor, a memory, and a control system.
  • the sensor is configured to generate data associated with a sleep state of a user.
  • the memory stores machine-readable instructions.
  • the control system is arranged to provide control signals to one or more lights.
  • the control system further includes one or more processors configured to execute the machine-readable instructions to analyze the generated data associated with the sleep state of the user. Based at least in part on the analysis, the user is determined to be about to fall asleep. Responsive to the determination that the user is about to fall asleep, the one or more lights are caused to dim. Based at least in part on the analysis, the user is determined to be sleeping. Responsive to the determination that the user is sleeping, the one or more lights are caused to turn off.
  • a system includes a sensor, a memory, and a control system.
  • the sensor is configured to generate data associated with a sleep state of a user.
  • the memory stores machine-readable instructions.
  • the control system is arranged to provide control signals to one or more entertainment devices.
  • the control system further includes one or more processors configured to execute the machine-readable instructions to analyze the generated data associated with the sleep state of the user. Based at least in part on the analysis, the user is determined to be about to fall asleep. Responsive to the determination that the user is about to fall asleep, a volume of the one or more entertainment devices is caused to lower. Based at least in part on the analysis, the user is determined to be sleeping. Responsive to the determination that the user is sleeping, the volume of the one or more entertainment devices is caused to further lower or shut off.
  • a system includes a sensor, a memory, and a control system.
  • the sensor is configured to generate data associated with a sleep state of a user.
  • the memory stores machine-readable instructions.
  • the control system is arranged to provide control signals to a device with a temperature setting.
  • the control system further includes one or more processors configured to execute the machine-readable instructions to analyze the generated data associated with the sleep state of the user. Based at least in part on the analysis, a current sleep state of the user is determined. Responsive to the determined current sleep state of the user, the temperature setting of the device is caused to be altered from a first temperature setting to a second temperature setting, or the device is caused to be turned off.
  • a system includes a sensor, a memory, and a control system.
  • the sensor is configured to generate data associated with a sleep state of a user.
  • the memory stores machine-readable instructions.
  • the control system is arranged to provide control signals to an electronic device.
  • the control system further includes one or more processors configured to execute the machine-readable instructions to analyze the generated data associated with the sleep state of the user. Based at least in part on the analysis, a current sleep state of the user is determined. Responsive to the determination of the current sleep state of the user indicating that the user is about to fall asleep, a notification is caused to be provided to the user via the electronic device.
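The lighting, entertainment-device, temperature-setting, and notification variants described in the bullets above all share one pattern: a determined sleep state is mapped onto one or more device adjustments. The sketch below makes that mapping explicit; the device classes and the 18 °C setting are illustrative assumptions, not interfaces or values taken from the disclosure.

```python
# Hypothetical device handles; each exposes only the action used below.
class Lights:
    def dim(self): ...
    def turn_off(self): ...

class EntertainmentDevice:
    def lower_volume(self): ...
    def shut_off_volume(self): ...

class Thermostat:
    def set_temperature(self, celsius): ...

class UserDevice:
    def notify(self, message): ...

def apply_sleep_state(state, lights, entertainment, thermostat, phone):
    """Map a determined sleep state onto example device adjustments."""
    if state == "about_to_fall_asleep":
        lights.dim()
        entertainment.lower_volume()
        phone.notify("You appear to be dozing off.")
    elif state == "asleep":
        lights.turn_off()
        entertainment.shut_off_volume()
        thermostat.set_temperature(18)   # example second temperature setting
```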
  • FIG. 1 is a block diagram of a system for adjusting settings on one or more electronic devices, according to some implementations of the present disclosure
  • FIG. 2A is a perspective view of a location at a first point in time, the location includes a system for adjusting settings on one or more electronic devices, according to some implementations of the present disclosure
  • FIG. 2B is a perspective view of the location of FIG. 2A at a second point in time, according to some implementations of the present disclosure
  • FIG. 2C is a perspective view of the location of FIG. 2A at a third point in time, according to some implementations of the present disclosure
  • FIG. 2D is a perspective view of the location of FIG. 2A at a fourth point in time, according to some implementations of the present disclosure
  • FIG. 3 is a flow diagram for a method of adjusting settings on one or more electronic devices, according to some implementations of the present disclosure
  • FIG. 4 is a flow diagram for a method of adjusting settings on one or more electronic devices, according to some implementations of the present disclosure.
  • FIG. 5 is a flow diagram for a method of training a machine-learning algorithm for adjusting settings on one or more electronic devices based on historical device data and historical user input data, according to some implementations of the present disclosure.
  • the system 100 includes a control system 110, a memory device 114, an electronic interface 119, one or more electronic devices 120, one or more sensors 130, one or more user devices 170, one or more input devices 118, and an activity tracker 190.
  • the system 100 generally can be used to generate a set of device data associated with a user (e.g., an individual, a person, etc.) of the one or more electronic devices 120, and/or a set of sensor data associated with a sleep state of the user via the one or more sensors 130.
  • the set of sensor data can include physiological data and/or environmental data.
  • the generated sets of sensor data and device data can be analyzed by the system 100 (e.g., using one or more trained algorithms) to modify an operation of the one or more electronic devices 120.
  • the control system 110 includes one or more processors 112 (hereinafter, processor 112).
  • the control system 110 is generally used to control (e.g., actuate) the various components of the system 100 and/or analyze data obtained and/or generated by the components of the system 100.
  • the processor 112 can be a general or special purpose processor or microprocessor. While one processor 112 is shown in FIG. 1, the control system 110 can include any suitable number of processors (e.g., one processor, two processors, five processors, ten processors, etc.) that can be in a single housing, or located remotely from each other.
  • the control system 110 can be coupled to and/or positioned within, for example, a housing of the user device 170, a housing of any one of the one or more electronic devices 120, and/or within a housing of one or more of the sensors 130.
  • the control system 110 can be centralized (within one such housing) or decentralized (within two or more of such housings, which are physically distinct). In such implementations including two or more housings containing the control system 110, such housings can be located proximately and/or remotely from each other.
  • the control system 110 generally controls (e.g., actuates) the various components of the system 100 and/or analyzes data obtained and/or generated by the components of the system 100.
  • the control system 110 is arranged to provide control signals to the one or more electronic devices 120.
  • the control system 110 executes machine readable instructions that are stored in the memory device 114 or a different memory device.
  • the one or more processors of the control system 110 can be general or special purpose processors and/or microprocessors.
  • the memory device 114 stores machine-readable instructions that are executable by the processor 112 of the control system 110.
  • the memory device 114 can be any suitable computer readable storage device or media, such as, for example, a random or serial access memory device, a hard drive, a solid state drive, a flash memory device, etc. While one memory device 114 is shown in FIG. 1, the system 100 can include any suitable number of memory devices 114 (e.g., one memory device, two memory devices, five memory devices, ten memory devices, etc.).
  • the memory device 114 can be coupled to and/or positioned within a housing of any one of the one or more electronic devices 120, within a housing of the user device 170, within a housing of one or more of the sensors 130, or any combination thereof.
  • the memory device 114 can be centralized (within one such housing) or decentralized (within two or more of such housings, which are physically distinct).
  • the memory device 114 stores a user profile associated with the user.
  • the user profile can include, for example, demographic information associated with the user, biometric information associated with the user, medical information associated with the user, self-reported user feedback, sleep parameters associated with the user (e.g., sleep-related parameters recorded from one or more earlier sleep sessions), user preferences, or any combination thereof.
  • the demographic information can include, for example, information indicative of an age of the user, a gender of the user, a race of the user, a geographic location of the user, a relationship status, an employment status of the user, an educational status of the user, a socioeconomic status of the user, or any combination thereof.
  • the electronic interface 119 is configured to receive data (e.g., physiological data and/or environmental data) from the one or more sensors 130 such that the data can be stored in the memory device 114 and/or analyzed by the processor 112 of the control system 110.
  • the electronic interface 119 can communicate with the one or more sensors 130 using a wired connection or a wireless connection (e.g., using an RF communication protocol, a WiFi communication protocol, a Bluetooth communication protocol, over a cellular network, etc.).
  • the electronic interface 119 can include an antenna, a receiver (e.g., an RF receiver), a transmitter (e.g., an RF transmitter), a transceiver, or any combination thereof.
  • the electronic interface 119 can also include one or more processors and/or one or more memory devices that are the same as, or similar to, the processor 112 and the memory device 114 described herein. In some implementations, the electronic interface 119 is coupled to or integrated in the user device 170 and/or any one of the one or more electronic devices 120. In some implementations, the electronic interface 119 is coupled to or integrated (e.g., in a housing) with the control system 110 and/or the memory device 114.
  • While the control system 110 and the memory device 114 are described and shown in FIG. 1 as being separate and distinct components of the system 100, in some implementations, the control system 110 and/or the memory device 114 are integrated in the user device 170 and/or any one of the one or more electronic devices 120.
  • the control system 110 or a portion thereof can be located in a cloud (e.g., integrated in a server, integrated in an Internet of Things (IoT) device (e.g., a smart TV, a smart thermostat, a smart appliance, smart lighting, etc.), connected to the cloud, be subject to edge cloud processing, etc.), located in one or more servers (e.g., remote servers, local servers, etc.), or any combination thereof.
  • the one or more electronic devices 120 include a light 122, a television 124, a thermostat 180, a radio 182, a stereo 184, a mobile phone 186, a tablet 188, a computer 192, an e-book reader 194, an audio book 196, a smart speaker 198, automated blinds or curtains 102, a fan 104 (e.g., a ceiling fan coupled to a ceiling), a massage chair 106, a gaming console 108, a smart notepad 116, or any combination thereof.
  • the one or more electronic devices 120 are entertainment devices.
  • While the one or more electronic devices 120 are shown and described as including the light 122, the television 124, the thermostat 180, the radio 182, the stereo 184, the mobile phone 186, the tablet 188, the computer 192, the e-book reader 194, the audio book 196, the smart speaker 198, the automated blinds 102, the fan 104, the massage chair 106, the gaming console 108, and the smart notepad 116, more generally, the one or more electronic devices 120 of the system 100 can include any combination and/or any number of the electronic devices described and/or shown herein.
  • the one or more electronic devices 120 of the system 100 only include the television 124.
  • the one or more electronic devices 120 of the system 100 only include the television 124 and the thermostat 180.
  • the one or more electronic devices 120 of the system 100 only include the television 124 and one or more lights 122.
  • the one or more electronic devices 120 of the system 100 only include the television 124, one or more lights 122, and the mobile phone 186.
  • Various other combinations and/or numbers of the one or more electronic devices 120 are contemplated.
  • the one or more sensors 130 of the system 100 include a pressure sensor 132, a flow rate sensor 134, a temperature sensor 136, a motion sensor 138, a microphone 140, a speaker 142, a radio-frequency (RF) receiver 146, an RF transmitter 148, a camera 150, an infrared sensor 152, a photoplethysmogram (PPG) sensor 154, an electrocardiogram (ECG) sensor 156, an electroencephalography (EEG) sensor 158, a capacitive sensor 160, a force sensor 162, a strain gauge sensor 164, an electromyography (EMG) sensor 166, an oxygen sensor 168, an analyte sensor 174, a moisture sensor 176, a LiDAR sensor 178, or any combination thereof.
  • each of the one or more sensors 130 is configured to output sensor data that is received and stored in the memory device 114 or one or more other memory devices.
  • the sensor data can be analyzed by the control system 110 for use in adjusting one or more aspects, settings, parameters (e.g., volume level, brightness level, etc.), and/or states (e.g., ON or OFF) of one or more of the electronic devices 120, to calibrate one or more of the one or more sensors 130, to determine a set of sleep-related parameters of one or more individuals in a location (e.g., location 205 shown in FIGS. 2A-2D), to generate a report indicative of sleep quality, to train a machine learning algorithm, or any combination thereof.
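One of the uses listed above is training a machine-learning algorithm (see also FIG. 5, which describes training on historical device data and historical user input data). A minimal, hypothetical sketch of that idea is to fit a classifier that predicts the adjustment the user historically made from features derived from the sensor data. The feature layout, the toy numbers, and the choice of logistic regression are illustrative assumptions only.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical historical features derived from sensor data:
# [heart rate (bpm), respiration rate (breaths/min), minutes into the show]
historical_features = [
    [72, 16, 5],
    [65, 14, 40],
    [58, 12, 55],
    [74, 17, 10],
]
# Hypothetical historical user input: 1 if the user manually lowered the
# volume at that point in the past, 0 otherwise.
historical_user_input = [0, 1, 1, 0]

model = LogisticRegression().fit(historical_features, historical_user_input)

# Predicted adjustment for a new observation.
print(model.predict([[60, 13, 50]]))
```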
  • While the one or more sensors 130 are shown and described as including each of the pressure sensor 132, the flow rate sensor 134, the temperature sensor 136, the motion sensor 138, the microphone 140, the speaker 142, the RF receiver 146, the RF transmitter 148, the camera 150, the infrared sensor 152, the photoplethysmogram (PPG) sensor 154, the electrocardiogram (ECG) sensor 156, the electroencephalography (EEG) sensor 158, the capacitive sensor 160, the force sensor 162, the strain gauge sensor 164, the electromyography (EMG) sensor 166, the oxygen sensor 168, the analyte sensor 174, the moisture sensor 176, and the LiDAR sensor 178, more generally, the one or more sensors 130 can include any combination and any number of each of the sensors described and/or shown herein.
  • the one or more sensors 130 can be used to generate, for example, physiological data, environmental data, or both.
  • Physiological data generated by one or more of the sensors 130 can be used by the control system 110 to determine a sleep-wake signal associated with a user during a sleep session and one or more sleep-related parameters.
  • the sleep-wake signal can be indicative of one or more sleep states, including wakefulness, relaxed wakefulness, micro awakenings, a rapid eye movement (REM) stage, a first non-REM stage (often referred to as “Nl”), a second non-REM stage (often referred to as “N2”), a third non-REM stage (often referred to as “N3”), or any combination thereof.
  • the sleep-wake signal can also be timestamped to indicate a time that the user enters the bed, a time that the user exits the bed, a time that the user attempts to fall asleep, etc.
  • the sleep-wake signal can be measured by the sensor(s) 130 during the sleep session at a predetermined sampling rate, such as, for example, one sample per second, one sample per 30 seconds, one sample per minute, etc.
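A timestamped sleep-wake signal sampled at a predetermined rate, as described in the preceding bullets, can be represented very simply. In the sketch below, `classify_current_stage` is an assumed placeholder for whatever staging logic the control system applies to the sensor data.

```python
import time

def sample_sleep_wake_signal(classify_current_stage, duration_s, interval_s=30):
    """Collect (timestamp, stage) pairs at a predetermined sampling rate.

    `classify_current_stage` is a hypothetical callable returning a label such
    as "wakefulness", "N1", "N2", "N3", or "REM" for the current moment.
    """
    signal = []
    end_time = time.time() + duration_s
    while time.time() < end_time:
        signal.append((time.time(), classify_current_stage()))
        time.sleep(interval_s)
    return signal
```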
  • the pressure sensor 132 outputs pressure data that can be stored in the memory device 114 and/or analyzed by the processor 112 of the control system 110.
  • the pressure sensor 132 is an air pressure sensor (e.g., barometric pressure sensor) that generates sensor data indicative of the respiration (e.g., inhaling and/or exhaling) of the user and/or ambient pressure.
  • the pressure sensor 132 can be coupled to or integrated in any one of the one or more electronic devices 120.
  • the pressure sensor 132 can be, for example, a capacitive sensor, an electromagnetic sensor, a piezoelectric sensor, a strain-gauge sensor, an optical sensor, a potentiometric sensor, or any combination thereof.
  • the pressure sensor 132 can be used to determine a blood pressure of a user.
  • the flow rate sensor 134 outputs flow rate data that can be stored in the memory device 114 and/or analyzed by the processor 112 of the control system 110.
  • the flow rate sensor 134 can be coupled to or integrated in any one of the one or more electronic devices 120.
  • the flow rate sensor 134 can be a mass flow rate sensor such as, for example, a rotary flow meter (e.g., Hall effect flow meters), a turbine flow meter, an orifice flow meter, an ultrasonic flow meter, a hot wire sensor, a vortex sensor, a membrane sensor, or any combination thereof.
  • the temperature sensor 136 generates and/or outputs temperature data that can be stored in the memory device 114 and/or analyzed by the one or more processors of the control system 110.
  • the temperature sensor 136 generates temperature data indicative of a core body temperature of a user (e.g., a person using or watching or in the vicinity of at least one of the one or more electronic devices 120) of the system 100 (e.g., user 215 shown in FIGS. 2A-2D).
  • the temperature sensor 136 alternatively or additionally generates temperature data indicative of a skin temperature of the user, an ambient temperature, or any combination thereof.
  • the temperature sensor 136 can be, for example, a thermocouple sensor, a thermistor sensor, a silicon band gap temperature sensor or semiconductor-based sensor, a resistance temperature detector, or any combination thereof.
  • the motion sensor 138 generates and/or outputs motion data that can be stored in the memory device 114 and/or analyzed by the one or more processors of the control system 110.
  • the motion sensor 138 is configured to measure motion of the system 100.
  • the motion sensor 138 is an accelerometer and/or a gyroscope.
  • the motion sensor 138 alternatively or additionally generates one or more signals representing bodily movement of the user, from which may be obtained a signal representing a sleep state of the user; for example, via a respiratory movement of the user.
  • the motion data from the motion sensor 138 can be used in conjunction with additional data from another sensor 130 to determine the sleep state of the user.
  • the microphone 140 generates and/or outputs sound data that can be stored in the memory device 114 and/or analyzed by the one or more processors of the control system 110.
  • the microphone 140 can be used to record sound(s) (e.g., sounds from the user) to determine (e.g., using the control system 110) one or more sleep-related parameters, such as, for example, a respiration signal, a respiration rate, an inspiration amplitude, an expiration amplitude, an inspiration-expiration ratio, a number of events per hour, a pattern of events, a sleep state, a sleep stage, or any combination thereof.
  • the determined event(s) can include snoring, apneas, central apneas, obstructive apneas, mixed apneas, hypopneas, a restless leg, a sleeping disorder, choking, labored breathing, an asthma attack, an epileptic episode, a seizure, or any combination thereof.
  • sleep states include awake, wakefulness, relaxed wakefulness, drowsy, dozing off (e.g., about to fall asleep), and asleep.
  • the sleep state of asleep can include the sleep stage.
  • sleep stages include light sleep (e.g., stage N1 and/or stage N2), deep sleep (e.g., stage N3 and/or slow wave sleep), and rapid eye movement (REM) (including, for example, phasic REM sleep, tonic REM sleep, deep to REM sleep transition, and/or light to REM sleep transition).
  • the speaker 142 generates and/or outputs sound waves that are audible to the user.
  • the speaker 142 can be used, for example, as an alarm clock and/or to play an alert or message/notification to the user.
  • the microphone 140 and the speaker 142 can be used collectively as a sonar sensor.
  • the speaker 142 generates or emits sound waves at a predetermined interval and the microphone 140 detects the reflections of the emitted sound waves from the speaker 142.
  • the sound waves generated or emitted by the speaker 142 have a frequency that is not audible to the human ear (e.g., below 20 Hz or above about 18 kHz) so as not to disturb the user.
  • the control system 110 can determine a location of the user and/or sleep-related parameters such as, for example, a respiration signal, a respiration rate, an inspiration amplitude, an expiration amplitude, an inspiration-expiration ratio, a number of events per hour, a pattern of events, a sleep state, a sleep stage, or any combination thereof.
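The speaker/microphone sonar arrangement above rests on the round-trip delay between an emitted pulse and its detected reflection. The distance calculation itself is elementary; how the echo delay is measured is left out here and assumed to come from elsewhere in the system.

```python
SPEED_OF_SOUND_M_PER_S = 343.0   # approximate speed of sound in air at 20 °C

def distance_from_echo(round_trip_delay_s):
    """Estimate the distance to a reflecting object (e.g., the user) from the
    round-trip delay of an emitted sound pulse."""
    return SPEED_OF_SOUND_M_PER_S * round_trip_delay_s / 2.0

# Example: a 12 ms round trip corresponds to roughly 2.06 m.
print(distance_from_echo(0.012))
```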
  • the microphone 140 and the speaker 142 can be used as separate devices.
  • the microphone 140 and the speaker 142 can be combined into an acoustic sensor 141, as described in, for example, WO 2018/050913, which is hereby incorporated by reference herein in its entirety.
  • the speaker 142 generates or emits sound waves at a predetermined interval and the microphone 140 detects the reflections of the emitted sound waves from the speaker 142.
  • the sound waves generated or emitted by the speaker 142 have a frequency that is not audible to the human ear (e.g., below 20 Hz or above around 18 kHz) so as not to disturb the sleep of the user 215.
  • the control system 110 can determine a location of the user 215 (FIGS. 2A-2D) and/or one or more of the sleep-related parameters described herein.
  • the sensors 130 include (i) a first microphone that is the same as, or similar to, the microphone 140, and is integrated in the acoustic sensor 141 and (ii) a second microphone that is the same as, or similar to, the microphone 140, but is separate and distinct from the first microphone that is integrated in the acoustic sensor 141.
  • the RF transmitter 148 generates and/or emits radio waves having a predetermined frequency and/or a predetermined amplitude (e.g., within a high frequency band, within a low frequency band, long wave signals, short wave signals, etc.).
  • the RF receiver 146 detects the reflections of the radio waves emitted from the RF transmitter 148, and this data can be analyzed by the control system 110 to determine a location of the user 215 (FIGS. 2A-2D) and/or one or more of the sleep-related parameters described herein.
  • An RF receiver (either the RF receiver 146 and the RF transmitter 148 or another RF pair) can also be used for wireless communication between the control system 110, any one of the one or more electronic devices 120, the one or more sensors 130, the user device 170, or any combination thereof. While the RF receiver 146 and RF transmitter 148 are shown as being separate and distinct elements in FIG. 1, in some implementations, the RF receiver 146 and RF transmitter 148 are combined as a part of an RF sensor 147. In some such implementations, the RF sensor 147 includes a control circuit. The specific format of the RF communication can be WiFi, Bluetooth, or the like.
  • in some implementations, the RF sensor 147 is a part of a mesh system.
  • a mesh system is a WiFi mesh system, which can include mesh nodes, mesh router(s), and mesh gateway(s), each of which can be mobile/movable or fixed.
  • the WiFi mesh system includes a WiFi router and/or a WiFi controller and one or more satellites (e.g., access points), each of which includes an RF sensor that is the same as, or similar to, the RF sensor 147.
  • the WiFi router and satellites continuously communicate with one another using WiFi signals.
  • the WiFi mesh system can be used to generate motion data based on changes in the WiFi signals (e.g., differences in received signal strength) between the router and the satellite(s) due to a moving object or person partially obstructing the signals.
  • the motion data can be indicative of motion, breathing, heart rate, gait, falls, behavior, etc., or any combination thereof.
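Motion detection from changes in WiFi signals, as in the mesh arrangement above, can be approximated by watching short-term variability in received signal strength between the router and a satellite. The window size and threshold below are arbitrary illustrative values, not parameters from the disclosure.

```python
from collections import deque
from statistics import pstdev

def make_motion_detector(window=20, threshold_db=2.0):
    """Return an update function that flags motion when recent RSSI readings
    (in dBm) vary by more than `threshold_db` over the last `window` samples."""
    readings = deque(maxlen=window)

    def update(rssi_dbm):
        readings.append(rssi_dbm)
        if len(readings) < window:
            return False               # not enough data yet
        return pstdev(readings) > threshold_db

    return update

detect_motion = make_motion_detector(window=5)
for rssi in (-52, -53, -52, -51, -60, -45, -58):
    moving = detect_motion(rssi)       # True once the signal becomes unstable
```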
  • the camera 150 outputs image data reproducible as one or more images (e.g., still images, video images, thermal images, or a combination thereof) that can be stored in the memory device 114.
  • the image data from the camera 150 can be used by the control system 110 to determine one or more of the sleep-related parameters described herein.
  • the image data from the camera 150 can be used to identify a location of the user, to determine a time when the user 215 enters the bed, and to determine a time when the user 215 exits the bed.
  • the infrared (IR) sensor 152 outputs infrared image data reproducible as one or more infrared images (e.g., still images, video images, or both) that can be stored in the memory device 114.
  • the infrared data from the IR sensor 152 can be used to determine one or more sleep-related parameters during a sleep session, including a temperature of the user 215 and/or movement of the user 215.
  • the IR sensor 152 can also be used in conjunction with the camera 150 when measuring the presence, location, and/or movement of the user 215.
  • the IR sensor 152 can detect infrared light having a wavelength between about 700 nm and about 1 mm, for example, while the camera 150 can detect visible light having a wavelength between about 380 nm and about 740 nm.
  • the PPG sensor 154 generates and outputs physiological data associated with the user that can be used to determine one or more sleep-related parameters, such as, for example, a heart rate, a heart rate variability, a cardiac cycle, respiration rate, an inspiration amplitude, an expiration amplitude, an inspiration-expiration ratio, a sleep state, a sleep stage, or any combination thereof.
  • the PPG sensor 154 can be worn by the user (e.g., as a wearable watch) and/or embedded in clothing and/or fabric that is worn by the user.
  • the ECG sensor 156 outputs physiological data associated with electrical activity of the heart of the user 215.
  • the ECG sensor 156 includes one or more electrodes that are positioned on or around a portion of the user 215 during the sleep session.
  • the physiological data from the ECG sensor 156 can be used, for example, to determine one or more of the sleep-related parameters described herein.
  • the EEG sensor 158 outputs physiological data associated with electrical activity of the brain of the user 215.
  • the EEG sensor 158 includes one or more electrodes that are positioned on or around the scalp of the user 215, such as in a smart headgear.
  • the physiological data from the EEG sensor 158 can be used, for example, to determine a sleep state of the user 215 at any given time during the sleep session.
  • the capacitive sensor 160, the force sensor 162, and the strain gauge sensor 164 output data that can be stored in the memory device 114 and used by the control system 110 to determine one or more of the sleep-related parameters described herein.
  • the EMG sensor 166 outputs physiological data associated with electrical activity produced by one or more muscles.
  • the oxygen sensor 168 outputs oxygen data indicative of an oxygen concentration of gas (e.g., the ambient concentration surrounding the user 215).
  • the oxygen sensor 168 can be, for example, an ultrasonic oxygen sensor, an electrical oxygen sensor, a chemical oxygen sensor, an optical oxygen sensor, or any combination thereof.
  • the one or more sensors 130 also include a galvanic skin response (GSR) sensor, a blood flow sensor, a respiration sensor, a pulse sensor, a sphygmomanometer sensor, an oximetry sensor, or any combination thereof.
  • the analyte sensor 174 can be used to detect the presence of an analyte in the exhaled breath of the user 215.
  • the data output by the analyte sensor 174 can be stored in the memory device 114 and used by the control system 110 to determine the identity and concentration of any analytes in the breath of the user 215.
  • the analyte sensor 174 is positioned near a mouth of the user 215 to detect analytes in breath exhaled from the mouth of the user 215.
  • the analyte sensor 174 is a volatile organic compound (VOC) sensor that can be used to detect carbon-based chemicals or compounds.
  • the moisture sensor 176 outputs data that can be stored in the memory device 114 and used by the control system 110.
  • the moisture sensor 176 can be used to detect moisture in various areas surrounding the user. In some implementations, the moisture sensor 176 is placed near any area where moisture levels need to be monitored.
  • the moisture sensor 176 can also be used to monitor the humidity of the ambient environment surrounding the user 215, for example, the air inside the bedroom.
  • the Light Detection and Ranging (LiDAR) sensor 178 can be used for depth sensing. This type of optical sensor (e.g., a laser sensor) generally utilizes a pulsed laser to make time-of-flight measurements, and is also referred to as 3D laser scanning.
  • a fixed or mobile device (such as a smartphone) having a LiDAR sensor 178 can measure and map an area extending 5 meters or more away from the sensor.
  • the LiDAR data can be fused with point cloud data estimated by an electromagnetic RADAR sensor, for example.
  • the LiDAR sensor(s) 178 can also use artificial intelligence (AI) to automatically geofence RADAR systems by detecting and classifying features in a space that might cause issues for RADAR systems, such as glass windows (which can be highly reflective to RADAR).
  • LiDAR can also be used to provide an estimate of the height of a person, as well as changes in height when the person sits down, or falls down, for example.
  • LiDAR may be used to form a 3D mesh representation of an environment.
  • while radio waves pass through some solid surfaces (e.g., radio-translucent materials), the LiDAR may reflect off such surfaces, thus allowing a classification of different types of obstacles.
  • one or more of the one or more sensors 130 can be integrated in and/or coupled to any of the other components of the system 100 (e.g., the one or more electronic devices 120, the control system 110, the one or more sensors 130, or any combination thereof).
  • the microphone 140 and the speaker 142 can be integrated in and/or coupled to the mobile phone 186, the television 124, the light 122, or any combination thereof.
  • at least one of the one or more sensors 130 is not coupled to the one or more electronic devices 120 or the control system 110, and is positioned generally adjacent to the user during use of the one or more electronic devices 120.
  • the user device 170 includes a display device 172.
  • the user device 170 can be, for example, any one of the one or more electronic devices 120. Alternatively, the user device 170 can be an external sensing system different from the one or more electronic devices 120.
  • the display device 172 is generally used to display image(s) including still images, video images, or both. In some implementations, the display device 172 acts as a human-machine interface (HMI) that includes a graphic user interface (GUI) configured to display the image(s) and an input interface.
  • the display device 172 can be an LED display, an OLED display, an LCD display, or the like.
  • the input interface can be, for example, a touchscreen or touch-sensitive substrate, a mouse, a keyboard, or any sensor system configured to sense inputs made by a human user interacting with the user device 170.
  • one or more user devices can be used by and/or included in the system 100.
  • the display device 172 is included in and/or is a portion of the television 124.
  • the display device 172 is included in and/or is a portion of the computer 192.
  • the display device 172 is included in and/or is a portion of the mobile phone 186.
  • the input device 118 of the system 100 is generally used to receive user input to enable user interaction with the control system 110, the memory 114, the one or more electronic devices 120, the one or more sensors 130, or any combination thereof.
  • the input device 118 can include a microphone for speech (e.g., the microphone 140), a touch-sensitive screen for gesture or graphical input, a keyboard, a mouse, a motion input (e.g., the motion sensor 138, the camera 150), or any combination thereof.
  • the input device 118 includes multimodal systems that enable a user to provide multiple types of input to communicate with the system 100.
  • the input device 118 can alternatively or additionally include a button, a switch, and/or a dial to allow the user to interact with the system 100.
  • the button, the switch, or the dial may be a physical structure, or a software application accessible via the touch-sensitive screen.
  • the input device 118 may be arranged to allow the user to select a value and/or a menu option.
  • the input device 118 is included in and/or is a portion of the television 124.
  • the input device 118 is included in and/or is a portion of the computer 192.
  • the input device 118 is included in and/or is a portion of the mobile phone 186.
  • the input device 118 includes a processor, a memory, and a display device, that are the same as, or similar to, the processor(s) of the control system 110, the memory device 114, and the display device 172.
  • the processor and the memory of the input device 118 can be used to perform any of the respective functions described herein for the processor and/or the memory device 114.
  • the control system 110 and/or the memory 114 is integrated in the input device 118.
  • the display device 172 alternatively or additionally acts as a human-machine interface (HMI) that includes a graphic user interface (GUI) configured to display the image(s) and an input interface.
  • the display device 172 can be an LED display, an OLED display, an LCD display, or the like.
  • the input interface can be, for example, a touchscreen or touch-sensitive substrate, a mouse, a keyboard, or any sensor system configured to sense inputs made by a human user interacting with the system 100 with or without direct user contact/touch.
  • While the display device 172 and the input device 118 are described and depicted in FIG. 1 as separate and distinct components, in some implementations, the display device 172 and/or the input device 118 are integrated in and/or directly coupled to one or more of the one or more electronic devices 120, one or more of the one or more sensors 130, the control system 110, and/or the memory 114.
  • the activity tracker 190 is generally used to aid in generating physiological data for determining an activity measurement associated with the user.
  • the activity measurement can include, for example, a number of steps, a distance traveled, a number of steps climbed, a duration of physical activity, a type of physical activity, an intensity of physical activity, time spent standing, a respiration rate, an average respiration rate, a resting respiration rate, a maximum respiration rate, a respiration rate variability, a heart rate, an average heart rate, a resting heart rate, a maximum heart rate, a heart rate variability, a number of calories burned, blood oxygen saturation, electrodermal activity (also known as skin conductance or galvanic skin response), or any combination thereof.
  • the activity tracker 190 includes one or more of the sensors 130 described herein, such as, for example, the motion sensor 138 (e.g., one or more accelerometers and/or gyroscopes), the PPG sensor 154, and/or the ECG sensor 156.
  • the activity tracker 190 is a wearable device that can be worn by the user, such as a smartwatch, a wristband, a ring, or a patch.
  • the activity tracker 190 can also be coupled to or integrated in a garment or clothing that is worn by the user.
  • the activity tracker 190 can also be coupled to or integrated in (e.g., within the same housing) the user device 170.
  • the activity tracker 190 can be communicatively coupled with, or physically integrated in (e.g., within a housing), the control system 110, the memory 114, any one of the one or more electronic devices 120, and/or the user device 170.
  • a first alternative system includes the control system 110, the memory 114, the camera 150, and the television 124.
  • a second alternative system includes the control system 110, the speaker 142, the microphone 140, and the thermostat 180.
  • a third alternative system includes the control system 110, the memory 114, the television 124, and the input device 118.
  • various systems for adjusting sleep-related settings associated with a user of one or more electronic devices can be formed using any portion or portions of the components shown and described herein and/or in combination with one or more other components.
  • a location 205 (e.g., a room, a family room, a portion of a house, a hotel room, etc., or any combination thereof) including a system 200 that is the same as, or similar to, the system 100, is shown.
  • the system 200 includes a control system 210, a memory 214, a light 222, a television 224, one or more sensors 230, and an input device 218, that are the same as, or similar to, the control system 110, the memory 114, the light 122, the television 124, the one or more sensors 130, and the input device 118 described above in connection with FIG. 1.
  • While the control system 210, the memory 214, and the one or more sensors 230 are shown as being integrated into and/or coupled to the television 224, it is contemplated that the control system 210, the memory 214, the one or more sensors 230, or any combination thereof can be positioned in one or more alternative positions and/or devices within the location 205. Additionally or alternatively, in some implementations, the control system 210 can be integrated into a remote device (e.g., using IR, Bluetooth, or other RF) that operates the television 224, where the remote device is separate and distinct from and/or spaced from the television 224.
  • the control system 210, the memory 214, at least one of the one or more sensors 230, or any combination thereof are positioned in the input device 218.
  • the control system 210, the memory 214, at least one of the one or more sensors 230, or any combination thereof are positioned in and/or coupled to the light 222.
  • the control system 210, the memory 214, at least one of the one or more sensors 230, or any combination thereof are positioned in and/or coupled to a mobile phone 286.
  • the control system 210 and the memory 214 are positioned in and/or coupled to the television 224 and at least one of the one or more sensors 230 are positioned in and/or coupled to the light 222.
  • the control system 210, the memory 214, any of the one or more sensors 230, or any combination thereof can be located on and/or in any surface and/or structure in the location 205 that is generally adjacent to a sofa 235 and/or a user 215 (e.g., an individual/person in the location 205).
  • at least one of the one or more sensors 230 can be located at a first position 255A on and/or in a side table 265 adjacent to the sofa 235 and/or the user 215.
  • At least one of the one or more sensors 230 can be located at a second position 255B on and/or in the sofa 235 (e.g., the one or more sensors 230 are coupled to and/or integrated in the sofa 235). Further, alternatively or additionally, at least one of the one or more sensors 230 can be located at a third position 255C on and/or in a wall that is generally adjacent to the sofa 235 and/or the user 215 (e.g., the one or more sensors 230 are coupled to and/or integrated in the wall).
  • At least one of the one or more sensors 230 can be located at a fourth position 255D on and/or in a ceiling of the location 205 that is generally adjacent to the sofa 235 and/or the user 215. Alternatively or additionally, at least one of the one or more sensors 230 can be located at a fifth position 255E on and/or in a housing of the light 222.
  • At least one of the one or more sensors 230 can be located at a sixth position 255F such that the at least one of the one or more sensors 230 are coupled to and/or positioned on the user 215 (e.g., the one or more sensors 230 are embedded in or coupled to fabric, clothing, lab-on-chip patch, and/or a smart device worn by the user 215). More generally, at least one of the one or more sensors 230 can be positioned at any suitable location relative to the user 215 (e.g., within the location 205 and/or adjacent to the location 205 and/or remote from the location 205) such that the one or more sensors 230 can generate sensor data associated with the user 215.
  • the location 205 further includes a sofa 235, a side table 265 adjacent to the sofa 235, at least one wall, and a ceiling.
  • a user 215 is located on the sofa 235 in front of the television 224.
  • the television 224 is placed on the at least one wall of the location 205.
  • the mobile phone 286 is placed on the side table 265.
  • the light 222 is installed on the ceiling.
  • the input device 218 is placed on the sofa 235.
  • the user 215 is currently awake and sitting on the sofa 235, and watching media content (e.g., an episode of a TV show, a TV show, a movie, a streaming video, a live video, a music video, a music album, an audiobook, a group video chat, a conference call, etc.) on the television 224.
  • the television 224 includes a first indication 295 indicative of a current sleep state of the user 215, a second indication 275 indicative of a location of a show that is currently being displayed on the television 224, and a third indication 285 indicative of a current volume level of the television 224.
  • the system 200 can determine the current sleep state of the user 215 based at least in part on analyzing sensor data generated by at least one of the one or more sensors 230.
  • the television 224 displays, via the first indication 295, a text indication of “awake” to indicate that the system 200 has determined that the current sleep state of the user 215 is awake.
  • the first indication 295 is not displayed on the television 224.
  • the example show being watched by the user 215 is 60 minutes long as indicated by the 60:00 notation on the television 224.
  • a snapshot of the television 224 indicates, via the second indication 275 (e.g., a progress bar), that the user 215 is currently at the 12 minute and 18 second moment in the displayed show/media content as indicated by the 12:18 notation on the television 224.
  • the television 224 displays the third indication 285, which indicates that the volume of the television 224 is currently set at a level of 50 (e.g., 50/100).
  • the volume of the television 224 can be automatically adjusted by the system 200 and/or manually adjusted by the user 215 (e.g., via a remote control, one or more buttons of the television 224, etc.).
  • the light 222 in the location 205 is currently ON and emitting electromagnetic radiation (e.g., light) at a first intensity.
  • the status of the light (ON or OFF) and/or the level of intensity (e.g., brightness of the light 222) can be automatically adjusted by the system 200 and/or manually adjusted by the user 215 (e.g., via a light switch, a remote control, the mobile phone 286, etc.).
  • the present disclosure contemplates various methods to determine a sleep state of the user 215, which is used by the system 200 to control and/or modify one or more parameters/settings/aspects of one or more electronic devices (e.g., the television 224, the light 222, etc.).
  • a method of determining a current sleep state of a user includes “looking back” for a specific sleep state (e.g., a moment of time that the user falls asleep) after analyzing a set of sensor data generated by the one or more sensors 230 for a predetermined amount of time (e.g., a minute, five minutes, thirty minutes, an hour, two hours, etc.).
  • the determination of the current sleep state of the user 215 is not in real time as the system 200 requires data that is generated after the user 215 falls asleep to then look back and determine when the user fell asleep (e.g., at 22:34:06, at 10 PM, at 11PM, etc.) and/or when the user was about to fall asleep and/or drowsy.
  • the method analyzes longer trends in, for example, a respiration signal of the user 215 from the generated sensor data.
  • the set of sensor data is analyzed to determine a current sleep state of the user 215 based at least in part on a classical circadian rhythm model.
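As an illustrative sketch only (not the disclosed implementation), the "look back" analysis described above could be approximated as follows; the availability of a per-second respiration-rate buffer, the settled-rate ceiling, and the dwell time are assumptions made for the example:

```python
from datetime import datetime, timedelta
from typing import Optional, Sequence

def estimate_sleep_onset(
    respiration_rate: Sequence[float],   # breaths/min, one sample per second
    window_end: datetime,                # wall-clock time of the last sample
    settled_rate: float = 14.0,          # hypothetical "asleep" breathing-rate ceiling
    settled_seconds: int = 300,          # must stay settled for 5 minutes
) -> Optional[datetime]:
    """Look back over a buffer of respiration data and return the estimated
    moment the user fell asleep, or None if no sustained settling is found."""
    run = 0
    onset_index = None
    for i, rate in enumerate(respiration_rate):
        if rate <= settled_rate:
            run += 1
            if run == settled_seconds:
                onset_index = i - settled_seconds + 1  # first sample of the settled run
                break
        else:
            run = 0
    if onset_index is None:
        return None
    seconds_before_end = len(respiration_rate) - onset_index
    return window_end - timedelta(seconds=seconds_before_end)

# Example: a 30-minute buffer where breathing settles halfway through.
buffer = [16.0] * 900 + [13.0] * 900
print(estimate_sleep_onset(buffer, datetime(2020, 9, 28, 22, 45)))
```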
  • a method of determining a current sleep state of the user 215 includes analyzing data from one or more of the sensors 230 to analyze a sound of breathing of the user 215 (e.g., detected via a transducer such as a microphone, like the microphone 140).
  • a change in the sound of breathing can be associated with the user’s change in inspiration/expiration ratio, shallowness/deepness, notches on breathing waveform, onset of snoring, sniffing, grunting, humming, clearing of throat, scratching, rubbing eyes, or the like, or any combination thereof.
  • a method of determining a current sleep state of the user 215 includes analyzing data from one or more of the sensors 230 to analyze a breathing pattern of the user.
  • a reduction in breath variability (e.g., a reduction in the variability below a personal threshold, and/or a drop in the average or median breathing rate below a personal threshold) can be associated with the user’s transition from wakefulness to drowsiness, from drowsiness to asleep, etc.
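The breath-variability check described in the bullet above could be sketched roughly as follows; the window size, the use of the standard deviation as the variability measure, and the example thresholds are assumptions:

```python
import statistics
from typing import Sequence

def breathing_suggests_drowsiness(
    breath_rates: Sequence[float],          # recent breaths/min estimates, oldest first
    personal_variability_threshold: float,  # e.g., learned from the user's awake baseline
    personal_rate_threshold: float,         # e.g., the user's typical awake median rate
    window: int = 60,
) -> bool:
    """Return True when the rolling variability and the median breathing rate
    both drop below the user's personal thresholds (a possible wake-to-drowsy cue)."""
    recent = list(breath_rates)[-window:]
    if len(recent) < window:
        return False                        # not enough data to judge yet
    variability = statistics.pstdev(recent)
    median_rate = statistics.median(recent)
    return (variability < personal_variability_threshold
            and median_rate < personal_rate_threshold)

# Example: steady, slow breathing over the most recent samples.
samples = [15.5, 15.0, 14.8] * 10 + [13.2, 13.1, 13.0] * 20
print(breathing_suggests_drowsiness(samples, 0.5, 14.0))
```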
  • analyzing the data can include analyzing a large movement of the user. The user’s settling into a sleeping position and/or getting comfortable can be associated with the user’s transition from wakefulness to drowsiness, from drowsiness to asleep, etc.
  • a method of determining a current sleep state of the user 215 includes analyzing data from one or more of the sensors 230 to analyze a micro movement of the user 215.
  • the micro movement includes hypnagogic jerk (e.g., muscle spasms, “sleep starts”).
  • the muscle spasms tend to occur more frequently when the user 215 has an elevated anxiety level, or there is sound or light (e.g., atypical sounds or light, if a user is travelling and in an unusual surrounding), which may affect the user’s 215 transition between sleep states (e.g., from awake to asleep).
  • a method of determining a current sleep state of the user 215 includes analyzing data from one or more of the sensors 230 to analyze a core body temperature of the user, analyzing a heart rate of the user, analyzing a blood pressure of the user, analyzing an eye movement of the user, etc. With access to core body temperature, tracking a drop in the core body temperature can be helpful to detect the user’s 215 transition from wakefulness to drowsiness, from drowsiness to asleep, etc.
  • a user in an automotive setting (as opposed to the location 205), can wear smart glasses or smart contact lenses, and/or a camera apparatus can be included in the system to track an eye movement (e.g., a blink) of the user to determine a sleep state and/or to predict the user’s transition from wakefulness to drowsiness, from drowsiness to asleep, etc.
  • Tracking of the eye movement of a user can be implemented in safety-critical applications, such that a notification (e.g., a feedback mechanism) can be provided to wake the user up and/or to stop machinery (e.g., the vehicle) being operated by the user.
  • the system 200 receives one or more personal characteristics associated with the user that are analyzed in addition to the data from the one or more sensors 230.
  • the one or more personal characteristics can include an average breathing rate of the user over a number of time scales, typical large and/or micro movements associated with the user, changes in heart rate, changes in blood pressure, changes in body temperature, etc.
  • a user profile of historical personal characteristics and/or baseline personal characteristics are stored (e.g., in the memory device 214) as input data for determining a sleep state of the user.
  • information about the user’s day can be used to determine historical/baseline personal characteristics - whether it was a typical or atypical day, strenuous, stressful, exciting, boring, etc.
  • social interactions of the user can be used to determine historical/baseline personal characteristics - who he/she met during the day; were they studying; who and how he/she interacted on social media; how he/she performed in a computer game, etc.
  • the determined historical/baseline personal characteristics can be used to model (i) a likely time period when the user is sleepy, (ii) a type of personal change(s) that is most likely to occur when the user transitions from a first sleep state to a second sleep state (e.g., from awake to drowsy, from drowsy to asleep), or both.
  • the historical/baseline personal characteristics allow a near real-time determination of the user’s transitioning among different sleep states. Such personal profiling can be used to predict when the user is likely to fall asleep, to detect and/or confirm that the user has fallen asleep, or both.
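A minimal sketch of modeling a likely sleepy time period from historical sleep-onset times is shown below; using the mean and spread of past onset times is an assumption standing in for whatever model the system actually employs:

```python
import statistics
from datetime import timedelta
from typing import List, Tuple

def likely_sleepy_window(
    historical_onsets_min: List[int],   # past sleep-onset times, as minutes after midnight
    spread: float = 1.0,                # window half-width, in standard deviations
) -> Tuple[timedelta, timedelta]:
    """Estimate a window of the evening when this user is likely to become sleepy,
    based only on the spread of previously observed sleep-onset times."""
    mean = statistics.mean(historical_onsets_min)
    sd = statistics.pstdev(historical_onsets_min) or 15.0   # fall back to +/- 15 min
    start = timedelta(minutes=mean - spread * sd)
    end = timedelta(minutes=mean + spread * sd)
    return start, end

# Example: the user has typically fallen asleep between roughly 22:30 and 23:10.
onsets = [22 * 60 + 35, 22 * 60 + 50, 23 * 60 + 5, 22 * 60 + 40]
print(likely_sleepy_window(onsets))
```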
  • the user 215 is currently falling asleep and still sitting on the sofa 235 and watching the same media content on the television 224 as described in connection with FIG. 2A.
  • the eyes of the user 215 are now halfway closed and the head of the user is tilted downward, indicating that the user 215 is dozing off and/or falling asleep and/or drowsy. That is, the user 215 is about to fall asleep.
  • the same assessment of sleep state of the user 215 described in connection with FIG. 2A and/or FIG. 1 is conducted by the system 200. That is, the system 200 can determine the current sleep state of the user 215 based at least in part on analyzing sensor data generated by at least one of the one or more sensors 230.
  • the television 224 displays, via the first indication 295, a text indication of “about to fall asleep” to indicate that the system 200 has determined the current sleep state (FIG. 2B) of the user 215 is about to fall asleep.
  • the illustrated snapshot of the television 224 indicates, via the second indication 275 (e.g., the progress bar), that the user 215 is currently at the 30 minute and 2 second moment in the displayed show/media content as indicated by the 30:02 notation on the television 224.
  • a first time flag 245 is generated and displayed on the television 224 that is indicative of a first location (e.g., at time 30:02) in the show being displayed on the television 224.
  • the first time flag 245 is associated with a first time stamp (e.g., a date and/or a time of day) when the user 215 transitioned from awake (FIG. 2 A) to drowsy (FIG. 2B) as determined by the system 200.
  • the television 224 displays the third indication 285, which indicates that the volume of the television 224 is currently set at a level of 30 (e.g., 30/100), which is lower than the volume when the user 215 was awake (FIG. 2A).
  • the system 200, upon the determination that the user 215 is about to fall asleep, caused the volume of the television 224 to lower from a first setting (50/100) to a second setting (30/100).
  • upon the determination by the system 200 that the user 215 is about to fall asleep, the system caused the intensity of the light 222 to lower from a first setting (FIG. 2A) to a second setting (FIG. 2B).
  • the second setting of the light 222 (FIG. 2B) is dimmer than the first setting of the light 222 (FIG. 2A).
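As an illustrative sketch only, the state-to-settings mapping shown across FIGS. 2A-2C could be represented as a simple lookup; the volume levels (50, 30, 0) follow the example above, while the specific light-intensity numbers are assumptions, since the description only calls for a first intensity, a dimmer second setting, and off:

```python
from dataclasses import dataclass

@dataclass
class DeviceSettings:
    tv_volume: int        # 0-100
    light_intensity: int  # 0-100, where 0 means off
    tv_paused: bool

# Settings per sleep state, mirroring the FIG. 2A-2C example
# (awake: volume 50, light on; drowsy: volume 30, light dimmed; asleep: muted, light off, show paused).
SETTINGS_BY_STATE = {
    "awake":                DeviceSettings(tv_volume=50, light_intensity=80, tv_paused=False),
    "about_to_fall_asleep": DeviceSettings(tv_volume=30, light_intensity=40, tv_paused=False),
    "asleep":               DeviceSettings(tv_volume=0,  light_intensity=0,  tv_paused=True),
}

def apply_sleep_state(state: str) -> DeviceSettings:
    """Return the device settings the system would push for the given sleep state."""
    return SETTINGS_BY_STATE.get(state, SETTINGS_BY_STATE["awake"])

print(apply_sleep_state("about_to_fall_asleep"))
```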
  • the system 200 monitors the user 215, and when it is determined that the user 215 is about to fall asleep and/or is dozing off, a notification can be provided to the user 215, for example, via the television 224 and/or any other one of the one or more electronic devices 120.
  • the notification can include a message that is displayed and/or played (e.g., via one or more speakers of the television 224) on/by the television 224.
  • the message is the same as, or similar to, the first indication 295.
  • the message is separate and distinct from the first indication 295.
  • the message is displayed and/or played (e.g., audio messages etc.) on another one of the one or more electronic devices 120 (e.g., a mobile phone, a smart watch, etc.).
  • the message can include text and/or one or more audio messages, sounds, or the like that is indicative of one or more reminders for the user 215 to get ready for bed, to wake up the user (e.g., if the user needed to be awake for a certain period, such as waiting up for a child to come), to remind the user to set an alarm for the next day, or any combination thereof.
  • in response to determining that the user 215 is about to fall asleep in a room that is not their standard sleeping place (e.g., when the user is about to fall asleep in the location 205 as opposed to a bedroom of the user with a bed) around the user’s expected bedtime, the notification can include a reminder for the user to go to bed in his/her bedroom.
  • the user can avoid falling into deep sleep at an unusual place (e.g., on the sofa 235), and waking up uncomfortable later (e.g., sleeping on the sofa 235 is typically less comfortable than a bed).
  • the show being displayed on the television 224 can resume on another electronic device (e.g., a second television) at a second location (e.g., a bedroom).
  • the message can include: alerting the user 215 that the user 215 is having one or more events, prompting the user 215 to take a sleep study, reminding the user to put on a CPAP, or any combination thereof.
  • the notification is provided to the user 215 via the television 224, the mobile phone 286, the input device 218, or any one or more electronic devices (e.g., the notification may be provided to the user 215 via a smart watch).
  • the notification includes a sound played via a speaker of the television 224, a sound and/or vibration played via the mobile phone 286, a sound and/or vibration played via the input device 218, etc.
  • the notification can include playing the sound and/or vibration to remind the user 215 to get ready for bed, wake up the user 215, remind the user 215 to set an alarm for the next day, or any combination thereof.
  • a list of outstanding tasks is stored in the memory 214.
  • the notification described herein can be associated with one or more of the outstanding tasks on the list of outstanding tasks.
  • the list of outstanding tasks can include a task to book a flight, a task to depart a bus at a specific stop, a task to remove food from an oven, a task to switch off a power intensive system, a task to book tickets to an event, a task to watch media content, or any combination thereof.
  • the user 215 is currently asleep and lying down on the sofa 235, and no longer watching the same media content on the television 224 as described in connection with FIG. 2A.
  • the eyes of the user 215 are now closed and the user 215 is asleep.
  • the same assessment of sleep state of the user 215 described in connection with FIGS. 2A, 2B, and/or FIG. 1 is conducted by the system 200. That is, the system 200 can determine the current sleep state of the user 215 based at least in part on analyzing sensor data generated by at least one of the one or more sensors 230.
  • the television 224 displays, via the first indication 295, a text indication of “asleep” to indicate that the system 200 has determined the current sleep state (FIG. 2C) of the user 215 is asleep.
  • the illustrated snapshot of the television 224 indicates, via the second indication 275 (e.g., the progress bar), that the television 224 is currently displaying the 40 minute and 18 second moment in the displayed show/media content, as indicated by the 40:18 notation on the television 224, even though the user 215 is not watching it.
  • the system 200 causes the television 224 to pause the show at the 40:18 moment and/or display a status indicator (e.g., as the status display 249) such as the text “pause.”
  • the show may continue to play indefinitely, or for a predetermined amount of time (e.g., a predetermined duration of time, a predetermined number of episodes, the remaining portion of the show/episode, etc.).
  • a second time flag 247 is generated and displayed on the television 224 that is indicative of a second location (e.g., at time 40:18) in the show being displayed on the television 224.
  • the second time flag 247 is associated with a second time stamp (e.g., a date and/or a time of day) when the user 215 transitioned from drowsy (FIG. 2B) to asleep (FIG. 2C) as determined by the system 200.
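As a sketch of the bookkeeping behind the first and second time flags and their associated time stamps, the transition handling could look like the following; the state names and the use of the wall clock at the moment of transition are assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, Optional, Tuple

@dataclass
class SleepFlags:
    """Time flags (seconds into the show) and time stamps (wall clock) keyed by transition."""
    flags: Dict[str, Tuple[float, datetime]] = field(default_factory=dict)

    def on_state_change(self, old: str, new: str, show_position_s: float) -> None:
        transition = f"{old}->{new}"
        if transition in ("awake->drowsy", "drowsy->asleep"):
            self.flags[transition] = (show_position_s, datetime.now())

    def first_flag(self) -> Optional[float]:
        entry = self.flags.get("awake->drowsy")
        return entry[0] if entry else None

    def second_flag(self) -> Optional[float]:
        entry = self.flags.get("drowsy->asleep")
        return entry[0] if entry else None

# Example mirroring FIGS. 2B and 2C: drowsy at 30:02, asleep at 40:18.
tracker = SleepFlags()
tracker.on_state_change("awake", "drowsy", 30 * 60 + 2)
tracker.on_state_change("drowsy", "asleep", 40 * 60 + 18)
print(tracker.first_flag(), tracker.second_flag())   # 1802 2418
```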
  • the television 224 displays the third indication 285, which indicates that the volume of the television 224 is currently set at a level of zero (e.g., 0/100), which is lower than the volume when the user 215 was awake (FIG. 2A) and lower than the volume when the user 215 was about to fall asleep/drowsy (FIG. 2B).
  • the system 200, upon the determination that the user 215 is asleep, caused the volume of the television 224 to lower from the second setting (30/100) to a third setting (e.g., 0/100).
  • the system caused the intensity of the light 222 to lower from the second setting (FIG. 2B) to a third setting (FIG. 2C).
  • the third setting of the light 222 (FIG. 2C) is off (e.g., zero intensity) and the second setting of the light 222 (FIG. 2B) is on (e.g., a non-zero intensity).
  • the user 215 is currently awake again and sitting back up on the sofa 235.
  • the eyes of the user 215 are open and the user 215 is awake.
  • the same assessment of sleep state of the user 215 described in connection with FIGS. 2A, 2B, 2C, and/or FIG. 1 is conducted by the system 200. That is, the system 200 can determine the current sleep state of the user 215 based at least in part on analyzing sensor data generated by at least one of the one or more sensors 230.
  • the television 224 displays, via the first indication 295, a text indication of “awake” to indicate that the system 200 has determined the current sleep state (FIG. 2D) of the user 215 is awake.
  • a third time stamp (e.g., a date and/or a time of day) is created when the user 215 is awake and returns to the television 224, and/or the user 215 transitions from asleep (FIG. 2C) to awake again (FIG. 2D), and/or via an input by the user 215.
  • the system 200 is configured to determine and/or indicate to the user 215 a recommended location within the show/media content that the user was previously watching (e.g., FIG. 2A) when the system 200 determined that the user 215 was falling asleep and/or asleep, for the user 215 to resume watching the show/media content.
  • the recommended location within the show can be the same as the first time flag 245 (FIGS. 2B, 2C), the same as the second time flag 247 (FIG. 2C), a time before the first time flag 245, or a time between the first time flag 245 and the second time flag 247.
  • the illustrated snapshot of the television 224 indicates, via the second indication 275 (e.g., a progress bar), the recommended location for the user 215 to resume watching is at 33 minutes and 23 seconds into the show, as indicated by the 33:23 notation on the television 224.
  • a status display 249 is configured to display a text “resume watching?”.
  • the status display 249 is a prompt to the user 215 to determine if the user 215 wants to return to the show where the user 215 left off or if the user 215 wants to return to a different location in the show, or if the user 215 wants to do something else (e.g., watch a different show).
  • the television 224 displays the third indication 285, which indicates that the volume of the television 224 is currently set at a level of 50 (e.g., 50/100), which is the same volume level of the television 224 when the user 215 was previously watching the show (FIG. 2A).
  • the light 222 is currently at the first intensity, which is the same intensity of the light 222 when the user was previously watching the show (FIG. 2A).
  • the system 200 can modify and/or adjust aspects/parameters/operation of one or more other electronic devices, such as, for example, the thermostat 180, the radio 182, the stereo 184, the mobile phone 186, the tablet 188, the computer 192, the e-book reader 194, the audio book 196, the smart speaker 198, the automated blinds 102, the fan 104, the massage chair 106, the gaming console 108, the smart notepad 116.
  • the aspects, parameters, and/or operation of such electronic devices can be modified and/or adjusted based at least in part on a determined current sleep state of the user 215.
  • an operation of the massage chair 106 is adjusted according to the sleep state of the user 215. For example, the massage chair 106 turns off and/or reclines when the user falls asleep; turns off and/or reclines when the user is drowsy; creates a sudden movement to wake up the user like an alarm (e.g., so that the user can get up and go to bed or get up for the morning); or any combination thereof.
  • While the television 224 is shown in FIGS. 2A-2D as displaying the first indication 295, the second indication 275, and the third indication 285, more or fewer indications are contemplated for being displayed on the television 224.
  • a first alternative television can be caused to display the second indication 275 indicative of a location of a show that is displayed on the television 224 and the third indication 285 indicative of a volume of the television 224; while a different display device (e.g., of the input device 218, of the mobile phone 286, etc.) can be caused to display the first indication 295 indicative of a sleep state of the user 215.
  • a second alternative television can be caused to display the first indication 295 indicative of a sleep state of the user 215, the second indication 275 indicative of a location of a show that is displayed on the television 224, the third indication 285 indicative of a volume of the television 224, and a fourth indication indicative of a brightness of the television 224.
  • a display device other than the television 224 can be caused to display the first indication 295, the second indication 275, the third indication 285, the fourth indication, or any combination thereof. Therefore, the other display device can be correspondingly configured to display the sleep state of the user 215, the location of a show that is displayed on the television 224, the volume of the television 224, and/or the brightness of the television 224. In some implementations, the other display device can display the brightness of the light 222.
  • Step 310 of the method 300 includes generating and/or obtaining data associated with a sleep state of a user.
  • step 310 can include generating or obtaining data using one or more sensors (e.g., the one or more sensors 130, 230).
  • the generated or obtained data is associated with a user of one or more electronic devices (e.g., the one or more electronic devices 120 of the system 100).
  • the data can be generated and/or obtained before, during, and/or after the use of any of the one or more electronic devices by the user.
  • the generated data can include (i) a first portion that is generated when the user is awake, (ii) a second portion that is generated when the user is about to fall asleep, and (iii) a third portion that is generated when the user is asleep, or any combination thereof.
  • the generated data can include, for example, a sleep score, a flow signal, a respiration signal, a respiration rate, an inspiration amplitude, an expiration amplitude, an inspiration-expiration ratio, a heart rate, a heart rate variability, a movement of the user, or any combination thereof.
  • the generated data is associated with the sleep state of the user.
  • the sleep state of the user is determined multiple times, at one or more set intervals (e.g., every second, every five seconds, every ten seconds, every thirty seconds, every minute, every two minutes, etc., or at any other interval, whether consistent, different, or random).
  • the one or more sensors used during step 310 for generating and/or obtaining the data are not coupled to or integrated in the one or more electronic devices. That is, the one or more sensors used in step 310 are separate and distinct from the one or more electronic devices. As described herein, the one or more sensors can be positioned generally adjacent to the user as an independent, standalone sensor. Alternatively, the one or more sensors used during step 310 can be integrated in or coupled to a housing of the one or more electronic devices.
  • Step 320 of the method 300 includes analyzing the generated data (step 310) to determine a first set of sleep-related parameters for the user.
  • the generated data can be stored in a memory device (e.g., the memory device 114 of the system 100), and machine-readable instructions stored in the memory device are executed by a processor to analyze the generated data.
  • the first set of sleep-related parameters includes a first current sleep state of the user.
  • the first current sleep state of the user can be any of awake, about to fall asleep, drowsy, asleep, about to wake up.
  • Step 330 of the method 300 includes modifying an operation of one or more electronic devices responsive to the determination of the first current sleep state of the user (step 320).
  • the modification of the operation includes, for example, causing at least one of the one or more electronic devices to enter into a power saving mode, pausing the operation of the one or more electronic devices (e.g., pausing a show, a movie, an eSports session, a video game session, a document review session, a program, or anything else that is displayed on the one or more electronic devices), changing a volume level, changing a brightness level, flagging one or more moments in displayed media content, changing a state (e.g., ON or OFF), causing a notification to be displayed and/or played audibly, triggering an alarm, etc., or any combination thereof.
  • the one or more electronic devices of the method 300 includes a light (e.g., the light 122 of the system 100, the light 222 of the system 200), and the modification of step 330 includes changing an intensity (e.g., brightness) of the light.
  • the changing the intensity of the light 222 can include (i) lowering the intensity of the light 222 such that the light 222 continues to emit light (compare FIGS. 2A to 2B), (ii) turning off the light 222 such that the light 222 does not continue to emit light (compare FIGS. 2A or 2B with 2C), or (iii) turning on the light 222 such that the light 222 emits light (compare FIG. 2C with 2D).
  • the modification of the light as recited in step 330 can be a modification of one or more lights and the modification to the one or more lights can be different among the lights.
  • the one or more lights include a first light in a first location and a second light in a second location near the first location.
  • the modification of step 330 includes turning off the first light and turning on the second light. Additionally or alternatively, the modification of step 330 includes changing a color of one or more of the one or more lights.
  • the one or more electronic devices of the method 300 includes a television (e.g., the television 124 of the system 100, the television 224 of the system 200), and the modification of step 330 includes changing a volume of the television.
  • the changing the volume of the television 224 can include (i) lowering the volume of the television 224 such that the television 224 continues to produce sound, (ii) turning off the volume such that the television 224 does not continue to produce sound, or (iii) turning on the volume of the television 224 such that the television produces sound.
  • the television of the method 300 includes one or more built- in speakers, a sound bar, an external amplifier, an external speaker, or any combination thereof.
  • the modification of step 330 can include changing a power mode on the television, the sound bar, the external amplifier, the external speaker, or any combination thereof.
  • the changing the power mode can include powering down or entering into a power saving mode.
  • Another example electronic device of step 330 includes a fan.
  • the modification of step 330 can include changing a rotational speed of the fan.
  • the rotational speed of the fan can be changed from zero to non-zero, from non-zero to zero, from slow to fast, from fast to slow, from slow to zero, etc.
  • the modification of step 330 can include changing a rotational direction of the fan.
  • the rotational direction of the fan can be changed from counterclockwise to clockwise or from clockwise to counterclockwise.
  • Yet another example electronic device of step 330 includes a thermostat (e.g., the thermostat 180 of the system 100).
  • the thermostat can be a thermostat of an HVAC system, a thermostat of a water heating device, a thermostat of a toaster, or any combination thereof.
  • the modification of step 330 can include changing a temperature setting of the thermostat.
  • the changing the temperature setting of the thermostat can include (i) changing the temperature setting of the thermostat such that the temperature setting is higher, (ii) changing the temperature setting of the thermostat such that the temperature setting is lower, (iii) turning on the thermostat, (iv) turning off the thermostat, or any combination thereof.
  • Step 340 of the method 300 includes analyzing the generated data (step 310) to determine a second set of sleep-related parameters for the user, which is similar to the determining of the first set of sleep-related parameters (step 320).
  • the second set of sleep- related parameters includes a second current sleep state of the user.
  • the second current sleep state of the user can be any of awake, about to fall asleep, drowsy, asleep, about to wake up.
  • Step 350 of the method 300 includes further modifying an operation of the one or more electronic devices (e.g., the devices being modified in step 330). In some such implementations, the further modification is only done in response to the second current sleep state of the user (step 340) being different than the first current sleep state (step 320).
  • the one or more electronic devices includes an electronic display device emitting sound and/or displaying a show.
  • the first modification of the operation can include lowering a volume of emitted sound and/or lowering a brightness level of the electronic display device.
  • the further modification of the operation can include further lowering the volume of the emitted sound and/or further lowering the brightness level of the electronic display device.
  • the modification of the one or more electronic devices is automatically modified by a control system (e.g., the control system 110 of the system 100, the control system 210 of the system 200).
  • the control system of method 300 causes instructions or other indicia to be displayed (e.g., using the display device, via the one or more electronic devices, or both) to aid in prompting the user to modify the one or more electronic devices (e.g., physically select a setting of the one or more electronic devices).
  • Step 410 of the method 400 is the same as, or similar to, step 310 of the method 300 (FIG. 3) in that step 410 includes generating and/or obtaining data associated with a sleep state of a user.
  • Step 420 of the method 400 includes analyzing the generated data (step 410) to determine a first current sleep state of the user.
  • the first current sleep state of the user can be indicative of the user being fully awake, relaxed awake, drowsy, dozing off (e.g., about to fall asleep), asleep in light sleep (e.g., stage N1 and/or stage N2), asleep in deep sleep (e.g., stage N3 and/or slow wave sleep), or asleep in rapid eye movement (REM) (including, for example, phasic REM sleep, tonic REM sleep, deep to REM sleep transition, and/or light to REM sleep transition).
  • Step 430 of the method 400 includes, responsive to the first current sleep state of the user being about to fall asleep (step 420), generating a first time flag (e.g., the first time flag 245 of FIG. 2B) that is indicative of a first location in a show being displayed on an electronic display device. Additionally or alternatively, responsive to the first current sleep state of the user being about to fall asleep, the method can further include generating a first time stamp that is indicative of a first date and/or a first time of day, lowering a volume of the electronic display device that is displaying the show, or any combination thereof.
  • Step 440 of the method 400 includes further analyzing the generated data (step 410) to determine a second current sleep state of the user.
  • the second current sleep state of the user can be indicative of the user being fully awake, relaxed awake, drowsy, dozing off (e.g., about to fall asleep), asleep in light sleep (e.g., stage N1 and/or stage N2), asleep in deep sleep (e.g., stage N3 and/or slow wave sleep), or asleep in rapid eye movement (REM) (including, for example, phasic REM sleep, tonic REM sleep, deep to REM sleep transition, and/or light to REM sleep transition).
  • Step 450 of the method 400 includes, responsive to the second current sleep state of the user being asleep (step 440), generating a second time flag (e.g., the second time flag 247 of FIG. 2C) that is indicative of a second location in the show being displayed on the electronic display device. Additionally or alternatively, responsive to the second current sleep state of the user being asleep, the method can further include generating a second time stamp that is indicative of a second date and/or a second time of day, shutting off the volume of the electronic display device that is displaying the show, or any combination thereof.
  • step 450 further includes generating a masking sound via a speaker (e.g., the speaker 142), responsive to the second current sleep state of the user being asleep (step 440).
  • An example speaker can be coupled to a housing of the electronic display device. The speaker can generate the masking sound prior to the volume of the electronic display device being lowered and/or shut off, so as not to create a sudden change in sound that might wake up the user.
  • the masking sound can include a soothing sound, white noise, shaped white noise, pink noise, brown noise, or any other sound(s) or combination of sounds.
  • the soothing sound can include beach sounds, bird sounds, waterfall sounds, running water sounds, wind sounds, or any combination thereof.
  • the user can be given a choice to perceive a relatively flat shaped white noise sound, or for a quieter (lower level and/or low pass filtered) shaped noise signal.
  • Some variants of the flat shaped white noise sound are referred to as pink noise, brown noise, violet noise, etc.
  • higher frequency sounds/noises (e.g., “harsher” sounds, sound from the electronic display device) can be masked and/or attenuated by the shaped noise.
  • an optimized set of fill-in sound frequencies is selected to achieve a target noise profile, via, for example, the machine learning methods discussed herein. For example, if certain components of sound already exist in the frequency spectrum (e.g., related to a box fan in the room, a television, etc.), then fill-in sounds are selected with sound parameters/characteristics that fill in the quieter frequency bands, for example, up to a target amplitude level.
  • the speaker can adaptively attenuate the higher and/or lower frequency components using active adaptive masking and/or adaptive noise canceling such that the perceived sound is more pleasant and relaxing to the ear (the latter being more suited to more slowly varying and predictable sounds).
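A rough sketch of the fill-in/shaping idea described above is shown below; the band edges, the target level, and the FFT-based band-energy estimate are assumptions, and NumPy is assumed to be available:

```python
import numpy as np

def fill_in_gains(ambient: np.ndarray, fs: int, band_edges_hz, target_db: float):
    """Measure ambient energy per frequency band and return how many dB of
    masking noise would be needed in each band to reach a target level."""
    spectrum = np.abs(np.fft.rfft(ambient)) ** 2
    freqs = np.fft.rfftfreq(len(ambient), d=1.0 / fs)
    gains = []
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        band_db = 10 * np.log10(band.sum() + 1e-12)
        gains.append(max(0.0, target_db - band_db))   # only fill the quieter bands
    return gains

# Example: ambient sound dominated by a low-frequency hum (e.g., a box fan in the room).
fs = 8000
t = np.arange(fs) / fs
ambient = 0.5 * np.sin(2 * np.pi * 120 * t) + 0.01 * np.random.randn(fs)
print(fill_in_gains(ambient, fs, [0, 250, 1000, 4000], target_db=30.0))
```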
  • step 450 of the method 400 further includes, responsive to the determination of the current sleep state of the user indicating that the user is about to fall asleep, causing a notification to be provided to the user via the electronic device.
  • the electronic device in such an implementation can be a television (e.g., the television 124 of the system 100, the television 224 of the system 200).
  • the providing of the notification can include displaying, on the television, a message. If the user is getting sleepy and/or falling asleep, step 450 of the method 400 can provide for alerting the user by displaying and/or audibly playing the message via the television.
  • the alerting the user can include reminding the user to get ready for bed, waking up the user (e.g., if the user needed to be awake for a certain period, such as waiting up for a child to come), reminding the user to set an alarm for the next day, or any combination thereof.
  • the television can be directly and/or wirelessly coupled to a speaker.
  • the providing the notification can include playing, via the speaker, sound.
  • the alerting the user can include, playing the sound to remind the user to get ready for bed, wake up the user, remind the user to set an alarm for the next day, or any combination thereof.
  • a list of outstanding tasks is stored in a memory device. The notification is associated with one or more of the outstanding tasks on the list of outstanding tasks.
  • Step 460 of the method 400 includes further analyzing the generated data (step 410) to determine a third current sleep state of the user.
  • the third current sleep state of the user can be indicative of the user being fully awake, relaxed awake, drowsy, dozing off (e.g., about to fall asleep), asleep in light sleep (e.g., stage N1 and/or stage N2), asleep in deep sleep (e.g., stage N3 and/or slow wave sleep), or asleep in rapid eye movement (REM) (including, for example, phasic REM sleep, tonic REM sleep, deep to REM sleep transition, and/or light to REM sleep transition).
  • Step 470 of the method 400 includes, responsive to the third current sleep state of the user being awake (step 460), displaying a prompt on the electronic display device. Additionally or alternatively, responsive to the third current sleep state of the user being awake, the method can further include generating a third time stamp that is indicative of a third date and/or a third time of day.
  • the prompt of step 470 includes a first selectable element, and a second selectable element. Selection of the first selectable element causes the electronic display device to display the show starting at the first time flag (step 430). Selection of the second selectable element causes the electronic display device to display the show starting at the second time flag (step 450).
  • the prompt of step 470 includes a pick-up selectable element (in addition to or in lieu of the first and second selectable elements). Selection of the pick-up selectable element causes the show to be picked back up where the user left off (e.g., fell asleep, was about to fall asleep, was drowsy, etc.). That is, the pick-up selectable element causes the show to play from a recommended location to restart the show, where the recommended location within the show is determined using one or more algorithms.
  • the difference between the first and second time stamps of step 470 may indicate that the user was drowsy for quite some time (e.g., 15 minutes, 30 minutes, an hour etc.) before falling asleep.
  • the algorithm can determine that the user should resume at or around the first time flag (e.g., the position in the show when the user transitioned from awake to drowsy) because the user may not have remembered much of the show when he/she was drowsy (e.g., the time between the first and second time flags 245, 247).
  • the difference between the first and second time stamps corresponds directly and/or proportionally to a difference between the first and second time flags (e.g., the show continues to play at a same pace). In some implementations, the difference between the first and second time stamps does not directly correspond to the difference between the first and second time flags (e.g., an auto-play setting of the show allows a period of pause before starting a new episode).
  • the algorithm is configured to compare (i) a difference between time stamps, (ii) a difference between time flags, or (iii) both and to use that comparison in the determination of the recommended location to restart the show.
  • the difference between the second and third time stamps of step 470 may indicate that the user woke up relatively soon (e.g., five seconds, ten seconds, half a minute, one minute, etc.) after the user fell asleep.
  • the algorithm can determine that the user should resume at or around the second time flag (e.g., the position in the show when the user fell asleep) because the user may have remembered much of the show when he/she was drowsy.
  • the difference between the second and third time stamps of step 470 may indicate that the user has returned to watch the show several days, weeks, or months after previously watching the show.
  • the algorithm can determine that the user should resume at some time (e.g., one minute, two minutes, five minutes, twenty minutes, etc.) before the first time flag (e.g., when the user first began to doze off) because the user may have forgotten much of the show as the user is returning sometime later.
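As an illustrative sketch, the resume-location rules described in the preceding bullets could be combined as follows; every threshold and rewind amount here is an assumption, standing in for the one or more algorithms (or trained algorithm) the disclosure refers to:

```python
from datetime import datetime, timedelta

def recommended_resume_s(
    first_flag_s: float, second_flag_s: float,        # show positions (seconds)
    first_stamp: datetime, second_stamp: datetime,    # when the user became drowsy / fell asleep
    third_stamp: datetime,                            # when the user returned
) -> float:
    """Pick a resume point from the time flags and time stamps using simple rules of thumb."""
    drowsy_for = second_stamp - first_stamp
    away_for = third_stamp - second_stamp

    if away_for > timedelta(days=2):
        # Returning much later: rewind a couple of minutes before the user first dozed off.
        return max(0.0, first_flag_s - 120.0)
    if away_for < timedelta(minutes=2):
        # Woke up almost immediately: the drowsy portion is probably still remembered.
        return second_flag_s
    if drowsy_for > timedelta(minutes=15):
        # Drowsy for a long stretch: little of that portion was likely retained.
        return first_flag_s
    # Default: split the difference between the two flags (compare the 33:23 example above).
    return (first_flag_s + second_flag_s) / 2.0

print(recommended_resume_s(
    1802, 2418,
    datetime(2020, 9, 28, 22, 0), datetime(2020, 9, 28, 22, 10), datetime(2020, 9, 28, 22, 40)))
```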
  • the prompt can direct the user to watch only the bits the user missed when the user was asleep.
  • the prompt can cause the electronic display device to display a first clip (when the user is asleep in a first sleeping session), skip some of the show (when the user is awake between the first sleeping session and a second sleeping session), display a second clip (when the user is asleep in the second sleeping session), skip some of the show (when the user is awake between the second sleeping session and a third sleeping session), etc.
  • a control system (e.g., control system 110, 210) is arranged to have access to an electronic source of the show. If the show is pre-recorded or streamed, the control system can be configured to allow the user to, after the user is awake (step 460) and via the prompt (step 470 of FIG. 4), view at least a portion of the show displayed between the first time flag and a third time flag (e.g., if the show were to continue to play, the third time flag corresponds to a location of the show when the user returns), or between the second time flag and the third time flag.
  • the control system can be configured to initiate recording (e.g., on a DVR) of the show and be able to allow the user to, after the user is awake (step 460) and via the prompt (step 470), view at least a portion of the show displayed between the first time flag and the third time flag, or between the second time flag and the third time flag.
  • the recording can be initiated based on the determined sleep state.
  • the recording of the show is only initiated based at least in part on the determined sleep state (e.g., when the user starts to fall asleep).
  • the recording is continuous, and the recording is only saved based at least in part on the determined sleep state. The saved recording can be automatically deleted if the user does not come back to watch it within a predetermined period of time.
  • the control system of the method 400 is configured to execute machine-readable instructions (e.g., stored in the memory device of the method 400) to receive an input for a machine learning algorithm.
  • the input can be received via any of the electronic devices of the method 400 or via a separate device.
  • the input can include the first current sleep state (step 420), the first time flag (step 430), the first time stamp (step 430), the second current sleep state (step 440), the second time flag (step 450), the second time stamp (step 450), the third current sleep state (step 460), a time associated with the prompt being displayed (e.g., the third time stamp (step 470)), or any combination thereof.
  • the control system can be further configured to generate an output for the machine learning algorithm that includes a recommended pick-up location in the show.
  • the recommended pick-up location includes a recommended location to restart the show based on a trained algorithm (e.g., the machine learning algorithm).
  • a method 500 of training a machine-learning algorithm is illustrated.
  • One or more of the steps of the method 500 described herein can be implemented using the system 100 (FIG. 1) and/or the system 200 (FIGS. 2A-2D).
  • Step 510 of the method 500 includes receiving device data associated with one or more users of one or more electronic devices.
  • the device data can be generated or obtained via any of the one or more electronic devices.
  • the device data associated with each of the one or more users includes a first current sleep state, a first device setting (e.g., the first time flag of step 430), a first time stamp, a second current sleep state, a second device setting (e.g., the second time flag of step 450), a second time stamp, a third current sleep state, a recommended device setting (e.g., the prompt of step 470), or any combination thereof.
  • Step 520 of the method 500 includes receiving user input data from the one or more users of the electronic devices.
  • the user input data can be obtained via a user interface of any of the electronic devices or a separate device.
  • the user input data includes a user device setting (e.g., a user response to the prompt of step 470).
  • the user input data can be elicited using a survey.
  • the survey can be displayed using a display device (e.g., the display device 172 of the system 100, the one or more electronic devices 120, or both).
  • the survey generally instructs the one or more users to provide individual feedback and can be displayed or communicated visually to the one or more users as alphanumeric text, a dropdown list, a voting button, and/or other indicia.
  • the survey can be communicated audibly to the one or more users (e.g., using a speaker coupled to or integrated in the display device and/or the one or more electronic devices).
  • the individual feedback from the one or more users can be directly received by the display device of the step 520, an input device (e.g., the input device 118 of the system 100), and/or the one or more electronic devices of the method 500.
  • the requested individual feedback can be provided by selecting (e.g., clicking or tapping) one or more indicia displayed on the display device and/or the one or more electronic devices, by inputting alphanumeric text (e.g., using the input device, using a touch keyboard displayed on the display device and/or the one or more electronic devices), using speech to text (e.g., using a microphone), or any combination thereof.
  • Step 530 of the method 500 includes accumulating the device data and user input data.
  • the accumulated device data includes the device data that is currently being generated or obtained during step 510 (hereinafter, current device data) and previously recorded device data from prior iterations of the method 500 (hereinafter, historical device data).
  • the accumulated user input data includes the user input data that is currently being generated or obtained during step 520 (hereinafter, current user input data) and previously recorded user input data from prior iterations of the method 500 (hereinafter, historical user input data).
  • the historical device data and the historical user input data can be generated over the course of multiple iterations and can be stored in a memory device.
  • Step 540 of the method 500 includes training a machine-learning algorithm (MLA) using the device data accumulated and the user input data accumulated during step 530.
  • the MLA is trained such that the MLA can receive as an input the current device data (step 510) and the current user input data (step 520) and determine as an output a predicted recommended device setting (e.g., a recommended location in a show to pick-up watching the show).
  • the MLA is trained using the historical device data and the historical user input data as a training data set.
  • the historical device data and the historical user input data can be continuously accumulated or updated (step 530) to update the training data set for the MLA.
  • the MLA can be, for example, a deep learning algorithm or a neural network, and can be stored as machine-readable instructions in the memory device that can be executed by one or more processors.
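A minimal sketch of training such an MLA is shown below, using scikit-learn's MLPRegressor as a stand-in for the deep learning algorithm or neural network mentioned above; the feature set, the synthetic training rows, and the target definition are assumptions, and scikit-learn/NumPy are assumed to be available:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Historical training rows: [minutes drowsy before sleep, hours away from the show,
# seconds between the two time flags]; target: how many seconds before the first
# time flag the user actually chose to resume (taken from their response to the prompt).
X_hist = np.array([
    [20, 0.5, 600],
    [5, 0.1, 120],
    [30, 72.0, 900],
    [2, 0.05, 60],
    [15, 8.0, 500],
], dtype=float)
y_hist = np.array([30.0, -100.0, 240.0, -50.0, 60.0])  # negative = resume after the first flag

# A small neural network stands in for the MLA; any regressor could be swapped in.
mla = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
mla.fit(X_hist, y_hist)

# Current device data for this viewing session.
current = np.array([[10, 0.5, 616]], dtype=float)
print("predicted rewind before first flag (s):", mla.predict(current)[0])
```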
  • the MLA is further trained using only device data and user input data associated with a particular user.
  • Step 550 of the method 500 includes selecting the predicted recommended device setting as the recommended device setting for the particular user.
  • the MLA described herein can be trained during step 540 using sensor data generated or obtained via any of one or more sensors.
  • the sensor data provided over the course of multiple iterations of the method 500 can be stored in the memory device as historical sensor data for training the MLA in the same or similar manner as the historical device data and the historical user input data described above.
  • the method 500 of FIG. 5 can be performed by a supervised or unsupervised algorithm.
  • the system may utilize more basic machine learning tools including (1) decision trees (“DT”), (2) Bayesian networks (“BN”), (3) artificial neural network (“ANN”), or (4) support vector machines (“SVM”).
  • Deep learning algorithms or other more sophisticated machine learning algorithms (e.g., convolutional neural networks (“CNN”), recurrent neural networks (“RNN”), or capsule networks (“CapsNet”)) may be used.
  • DT are classification graphs that match user input data to device data at each consecutive step in a decision tree.
  • the DT program moves down the “branches” of the tree based on the user input to the recommended device settings (e.g., First branch: Did the device data include certain sleep states? yes or no. Branch two: Did the device data include certain time stamps? yes or no, etc.).
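A toy version of the DT branching just described could look like the following, assuming scikit-learn is available; the two features and the class labels are invented for illustration:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy device-data features: [drowsy state observed?, both time stamps recorded?]
X = [[1, 1], [1, 0], [0, 1], [0, 0]]
# Recommended device setting class: 0 = resume at first flag, 1 = resume at second flag.
y = [0, 0, 1, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["had_drowsy_state", "had_both_stamps"]))
print(tree.predict([[1, 1]]))   # -> [0]
```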
  • Bayesian networks are based on the likelihood that something is true given independent variables, and are modeled based on probabilistic relationships. BN are based purely on probabilistic relationships that determine the likelihood of one variable based on another or others. For example, BN can model the relationships between device data, user input data, and any other information as contemplated by the present disclosure.
  • Artificial neural networks (ANN) are computational models inspired by an animal's central nervous system. They map inputs to outputs through a network of nodes. However, unlike BN, in ANN the nodes do not necessarily represent any actual variable. Accordingly, ANN may have a hidden layer of nodes that are not represented by a known variable to an observer. ANNs are capable of pattern recognition. Their computing methods make it easier to understand a complex and unclear process that might go on during determining a symptom severity indicator based on a variety of input data.
  • Support vector machines (SVM) came about from a framework utilizing machine learning statistics and vector spaces (a linear algebra concept that signifies the number of dimensions in linear space) equipped with some kind of limit-related structure. In some cases, they may determine a new coordinate system that easily separates inputs into two classifications. For example, an SVM could identify a line that separates two sets of points originating from different classifications of events.
  • Other examples of machine learning architectures include deep neural networks (DNN), convolutional neural networks (CNN), restricted Boltzmann machines (RBM), and long short-term memory (LSTM) networks.
  • Machine learning models require training data to identify the features of interest that they are designed to detect. For instance, various methods may be utilized to form the machine learning models, including applying randomly assigned initial weights for the network and applying gradient descent using back propagation for deep learning algorithms. In other examples, a neural network with one or two hidden layers can be used without training using this technique.
  • the machine learning model can be trained using individual data and/or data that represents a certain user.
  • in some implementations, the data is only updated with individual data; additionally or alternatively, historical data from a plurality of users may be input to train the machine learning algorithm.
  • the systems and methods of the present disclosure provide that the input device (e.g., the smart phone or smart watch) can (i) listen to a user consuming media (e.g., TV program, streaming video, audiobook, or the like), (ii) identify what is being consumed, and/or (iii) keep track of the time-location in the media (e.g., where the user is in the movie or episode). Additionally or alternatively, in some implementations, the user is being monitored to determine their alertness and/or sleep state. The correlation between time-location in the media and the time when the user lost alertness are analyzed and/or processed to provide enhanced functionality.
  • the user device (e.g., an app on a smart phone) is configured to display a message that the user was watching X show and fell asleep (or became drowsy or lost attention) at approximately Y time within the show; the user device could then prompt the user to restart the show at Y time or at some time-location before Y time (e.g., at a “suggested resume time”).
  • the suggested resume time can be tailored for the individual, which may be machine-trained using short-term-memory assessment techniques (e.g., generic, or media-specific).
  • a memory-test montage is created.
  • a quick recap of most recent episodes can be suggested and/or displayed to the user based at least in part on identified sleep time.
  • the system can automatically find and/or generate a recap of recent events that occurred in the media up to the identified sleep time.
  • a custom recap can be generated by selecting a number of segments of interest that occurred immediately prior to the identified sleep time.
  • the segments of interest can be preset (e.g., by the producer of the media), and/or can be dynamically generated based on the historical measurements of the user’s alertness while watching the media (e.g., portions where the user is especially alert may be more important for the recap). For example, in some such implementations, if there is something the user does not remember, the user could press a button to pop them back into the episode at that location to re-watch that part the user does not remember.
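As a minimal sketch of the custom-recap idea, the segments of interest could be selected from alertness measurements as follows; the segment representation and the rule of keeping the most-alert segments watched before the identified sleep time are assumptions:

```python
from typing import List, Tuple

def build_recap(
    segments: List[Tuple[float, float, float]],  # (start_s, end_s, measured alertness 0-1)
    sleep_time_s: float,
    max_segments: int = 3,
) -> List[Tuple[float, float]]:
    """Pick the segments the user watched most alertly before falling asleep,
    returned in playback order, as a simple stand-in for the recap generator."""
    watched = [s for s in segments if s[1] <= sleep_time_s]
    top = sorted(watched, key=lambda s: s[2], reverse=True)[:max_segments]
    return sorted((start, end) for start, end, _ in top)

segments = [(0, 300, 0.9), (300, 700, 0.4), (700, 1200, 0.8), (1200, 1802, 0.6), (1802, 2418, 0.2)]
print(build_recap(segments, sleep_time_s=1802))
```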

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Neurosurgery (AREA)
  • Social Psychology (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

Aspects of the present disclosure relate to receiving, from a sensor, data associated with a sleep state of a user. The received data associated with the sleep state of the user is analyzed. Based at least in part on the analysis, a first current sleep state of the user is determined. Responsive to the determination of the first current sleep state of the user, an operation of one or more electronic devices is caused to be modified.
PCT/IB2020/059067 2019-09-30 2020-09-28 Systèmes et procédés de réglage de dispositifs électroniques WO2021064557A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962908417P 2019-09-30 2019-09-30
US62/908,417 2019-09-30

Publications (1)

Publication Number Publication Date
WO2021064557A1 true WO2021064557A1 (fr) 2021-04-08

Family

ID=72811908

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2020/059067 WO2021064557A1 (fr) 2019-09-30 2020-09-28 Systèmes et procédés de réglage de dispositifs électroniques

Country Status (1)

Country Link
WO (1) WO2021064557A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140210625A1 (en) * 2013-01-31 2014-07-31 Lytx, Inc. Direct observation event triggering of drowsiness
US20150258301A1 (en) * 2014-03-14 2015-09-17 Aliphcom Sleep state management by selecting and presenting audio content
EP3026937A1 (fr) * 2014-11-27 2016-06-01 Samsung Electronics Co., Ltd. Procédé de commande de dispositif électronique annexe basé sur un état d'utilisateur et son dispositif électronique
US9665169B1 (en) * 2015-03-11 2017-05-30 Amazon Technologies, Inc. Media playback after reduced wakefulness
WO2018050913A1 (fr) 2016-09-19 2018-03-22 Resmed Sensor Technologies Limited Appareil, système et procédé de détection de mouvement physiologique à partir de signaux audio et multimodaux

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230059947A1 (en) * 2021-08-10 2023-02-23 Optum, Inc. Systems and methods for awakening a user based on sleep cycle
CN113835670A (zh) * 2021-09-22 2021-12-24 深圳Tcl数字技术有限公司 设备控制方法、装置、存储介质及电子设备
CN114159024A (zh) * 2021-11-17 2022-03-11 青岛海信日立空调系统有限公司 一种睡眠分期方法及装置
CN114159024B (zh) * 2021-11-17 2023-10-31 青岛海信日立空调系统有限公司 一种睡眠分期方法及装置

Similar Documents

Publication Publication Date Title
JP7238194B2 (ja) 睡眠管理の方法及びシステム
US10492721B2 (en) Method and apparatus for improving and monitoring sleep
US20230173221A1 (en) Systems and methods for promoting a sleep stage of a user
WO2021064557A1 (fr) Systèmes et procédés de réglage de dispositifs électroniques
CN110049714A (zh) 用于促进觉醒的系统和方法
JP7083803B2 (ja) 睡眠管理の方法及びシステム
US11648373B2 (en) Methods and systems for sleep management
JP2023513888A (ja) 無呼吸-低呼吸指数計算のための睡眠状態検出
KR20230053547A (ko) 수면-관련 파라미터 분석을 위한 시스템 및 방법
JP2023526888A (ja) Rem行動障害を検出するためのシステム及び方法
JP2023547497A (ja) 治療中の睡眠性能スコアリング
US20240062872A1 (en) Cohort sleep performance evaluation
US20230165498A1 (en) Alertness Services
JP2023148754A (ja) 機器制御システム、機器制御方法、及び、プログラム
NZ755198B2 (en) Methods and Systems for Sleep Management
NZ755198A (en) Methods and systems for sleep management

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20789272

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20789272

Country of ref document: EP

Kind code of ref document: A1