CN118044225A - System for improving sleep through feedback - Google Patents

System for improving sleep through feedback

Info

Publication number
CN118044225A
Authority
CN
China
Prior art keywords
human user
sleep
sensor
signals
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280066536.6A
Other languages
Chinese (zh)
Inventor
伊泰·肯奈恩
埃本·詹姆斯·比顿
罗萨里亚·曼尼诺
索拉布·古普塔
布拉德利·迈克尔·埃克特
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cocoa Home Co ltd
Original Assignee
Cocoa Home Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from U.S. Application No. 17/401,737 (US 11997455 B2)
Application filed by Cocoa Home Co ltd filed Critical Cocoa Home Co ltd
Publication of CN118044225A


Classifications

    • A61M 21/02: Devices for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis, analgesia
    • A61B 5/4812: Detecting sleep stages or cycles
    • A61B 5/4836: Diagnosis combined with treatment in closed-loop systems or methods
    • G16H 20/70: ICT specially adapted for mental therapies, e.g. psychological therapy or autogenous training
    • G16H 30/40: ICT for processing medical images, e.g. editing
    • G16H 40/63: ICT for the local operation of medical equipment or devices
    • G16H 50/20: ICT for computer-aided diagnosis, e.g. based on medical expert systems
    • A61B 5/0507: Measuring using microwaves or terahertz waves
    • A61B 5/1113: Local tracking of patients, e.g. in a hospital or private home
    • A61B 5/1118: Determining activity level
    • A61M 2021/0022: Stimulus by the tactile sense, e.g. vibrations
    • A61M 2021/0027: Stimulus by the hearing sense
    • A61M 2021/0044: Stimulus by the sight sense
    • A61M 2021/0066: Stimulus with heating or cooling
    • A61M 2205/18: Apparatus with alarm
    • A61M 2205/3306: Optical measuring means
    • A61M 2205/332: Force measuring means
    • A61M 2205/3331: Pressure; flow
    • A61M 2205/3368: Temperature
    • A61M 2205/3375: Acoustical, e.g. ultrasonic, measuring means
    • A61M 2205/3584: Communication with non-implanted devices using modem, internet or bluetooth
    • A61M 2205/3592: Communication with non-implanted devices using telemetric means, e.g. radio or optical transmission
    • A61M 2205/52: Microprocessors or computers with memories providing a history of measured parameters of apparatus or patient
    • A61M 2205/609: Biometric patient identification means
    • A61M 2230/06: Heartbeat rate only
    • A61M 2230/42: Respiratory rate
    • A61M 2230/63: Motion, e.g. physical activity
    • H04R 1/028: Casings associated with devices performing functions other than acoustics, e.g. electric candles
    • H04R 1/406: Desired directional characteristic obtained by combining a number of identical microphone transducers

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Pathology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Psychology (AREA)
  • Biophysics (AREA)
  • Anesthesiology (AREA)
  • Molecular Biology (AREA)
  • Pain & Pain Management (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Business, Economics & Management (AREA)
  • Hematology (AREA)
  • Acoustics & Sound (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Radiology & Medical Imaging (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

In an example, the present technology provides a method for processing signals from a human user related to sleep states. Preferably, the method comprises using information from the signal for digital cognitive behavioral therapy to improve the sleep state of the human user. In an example, the method generally includes sensing human activity, processing information from such sensing, outputting tasks to a user, monitoring a user's reaction, and adjusting any of the above to improve a user's sleep state.

Description

System for improving sleep through feedback
Cross Reference to Related Applications
The present application is a continuation of, and claims priority to, U.S. Patent Application Ser. No. 17/401,737, filed on August 13, 2021 (attorney docket number 5924.017US1), the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to techniques, including methods and systems, for processing audio, motion, ultra wideband ("UWB") and frequency modulated continuous wave ("FMCW") signals using multiple antenna arrays, as well as other conditions and events. More specifically, as an example, the present technology may be combined with feedback for digital cognitive behavioral therapy to improve sleep. By way of example only, various applications may include daily life, sleep, and the like.
Background
There are various conventional techniques for monitoring people within a home or building environment. Such techniques include using a camera to view a person. Other techniques include pendants or other sensing devices placed on a person to monitor his or her movements. Examples include Personal Emergency Response System (PERS) devices (e.g., LifeLine), each of which is simply an emergency button that an elderly person presses in an emergency. Other techniques have also been proposed for monitoring sleep. Unfortunately, all of these techniques have limitations. That is, each of these techniques fails to provide reliable, high-quality signals for accurately detecting critical activities of the monitored person. Furthermore, many techniques fail to provide meaningful feedback or countermeasures to counteract adverse events.
From the above, it can be seen that techniques for identifying and monitoring persons are highly desirable.
Disclosure of Invention
In accordance with the present invention, techniques are provided relating to methods and systems for processing audio, UWB, and FMCW signals using multiple antenna arrays, as well as other signals and events. More specifically, as an example, the present technology may be combined with feedback for digital cognitive behavioral therapy to improve sleep. By way of example only, various applications may include daily life, sleep, and the like.
In an example, the present technology provides a method for processing signals from a human user related to sleep states. Preferably, the method comprises using information from the signal for digital cognitive behavioral therapy to improve the sleep state of the human user. In an example, the method generally includes sensing human activity, processing information from the sensing, outputting tasks to a user, monitoring reactions from the user, and adjusting any of the above to improve a sleep state of the user.
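The sense-process-task-monitor-adjust loop described above can be sketched in Python. This is a minimal illustration only; the function names (`sense`, `assign_task`, `score_reaction`, `adjust`) and the loop structure are hypothetical and do not appear in the specification.

```python
# Hypothetical sketch of the closed feedback loop: sense human activity,
# output a task, monitor the reaction, and adjust to improve sleep state.
# All names here are illustrative, not taken from the patent.

def run_feedback_session(sense, assign_task, score_reaction, adjust, rounds=3):
    """Run a simple closed-loop digital-CBT-style session.

    sense():                 returns a dict of activity features (e.g. breathing rate)
    assign_task(features):   picks a task (e.g. a breathing exercise)
    score_reaction(before, after): compares features before/after the task
    adjust(task, score):     returns the next task parameters
    """
    history = []
    task = None
    for _ in range(rounds):
        before = sense()
        task = assign_task(before) if task is None else task
        after = sense()                      # monitor the user's reaction
        score = score_reaction(before, after)
        history.append((task, score))
        task = adjust(task, score)           # adapt to improve sleep state
    return history
```

In practice, `sense` would wrap the radar/audio sensing described below, and `adjust` would carry the cognitive-behavioral-therapy logic.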
The above examples and embodiments are not necessarily mutually inclusive or exclusive and may be combined in any non-conflicting and other possible ways, whether they are associated with the same or different embodiments or examples or implementations. The description of one embodiment or implementation is not intended to limit other embodiments and/or implementations. Furthermore, any one or more of the functions, steps, operations, or techniques described elsewhere in this specification may be combined in alternative embodiments with any one or more of the functions, steps, operations, or techniques described in the summary section. Accordingly, the above-described example embodiments are illustrative, and not restrictive.
Drawings
Fig. 1 is a simplified illustration of a radar/wireless backscatter sensor system in accordance with an example of the present invention.
Fig. 2 is a simplified illustration of a sensor array according to an example of the invention.
Fig. 3 is a simplified illustration of a system according to an example of the invention.
Fig. 4 is a detailed illustration of a hardware apparatus according to an example of the invention.
Fig. 5 is a simplified illustration of a hub in a spatial region according to an example of the present invention.
Fig. 6 is a simplified illustration of mini-modes in a spatial region according to an example of the present invention.
Fig. 7 is a simplified illustration of a movement pattern in a spatial region according to an example of the invention.
Fig. 8 is a simplified illustration of a hub device according to an example.
Fig. 9 is a simplified illustration of an ultra-wideband module for a hub in accordance with an example of the present invention.
Fig. 10 is a simplified illustration of electrical parameters of an example ultra-wideband module according to the present invention.
Fig. 11 is a simplified system diagram of an ultra-wideband module according to an example of the invention.
Fig. 12 is an example of antenna array parameters for an ultra-wideband module according to the present invention.
Fig. 13 is an example of an antenna array configuration for an ultra wideband module according to the present invention.
Fig. 14 is a simplified illustration of an exemplary FMCW module and antenna array according to the present invention.
Fig. 15 is a simplified illustration of an exemplary three antenna array according to the present invention.
Fig. 16 is a table illustrating device parameters according to an example of the present invention.
Fig. 17 is a simplified illustration of a system architecture for an FMCW apparatus according to an example of the invention.
Fig. 18 is a simplified illustration of an alternative system architecture for an FMCW apparatus according to an example of the invention.
Fig. 18A is a simplified illustration of various elements in a microcontroller module according to an example of the invention.
Fig. 19 is a simplified illustration of an alternative system architecture for an FMCW apparatus according to an example of the invention.
Fig. 20 is a simplified illustration of each antenna in an array according to an example of the invention.
Fig. 21 is a simplified top view of an audio module according to an example of the invention.
Fig. 22 and 23 are a simplified circuit diagram and microphone array arrangement, respectively, according to an example of the invention.
FIG. 24 is a simplified top view of an inertial sensing module according to an example of the invention.
Fig. 25 is a simplified illustration of a user interface according to an example of the present invention.
Fig. 26 is a simplified illustration of a processing system according to an example of the present invention.
Fig. 27 is a simplified block diagram of a cellular module coupled to a processing system.
FIG. 28 is a simplified illustration of a process of deep interaction with a human user that senses signals associated with sleep and active feedback, according to an example of the present invention.
FIG. 29 is a more detailed illustration of a deep interaction process according to an example of the invention.
Fig. 30 is a simplified diagram illustrating feedback with respiratory exercise as a deep interaction process, according to an example of the present invention.
FIG. 31 is a simplified illustration showing details of a deep interaction process according to an example of the present invention.
FIG. 32 is a detailed illustration of a process showing deep interaction using ambient lighting, according to an example of the invention.
Detailed Description
In accordance with the present invention, techniques are provided relating to methods and systems for processing UWB and FMCW signals using multiple antenna arrays. In an example, a plurality of antenna arrays (including a receive antenna array and a transmit antenna array) are configured to capture and transmit signals in an omni-directional manner. By way of example only, various applications may include daily life, sleep, and the like.
Fig. 1 is a simplified illustration of a radar/wireless backscatter sensor system 100 in accordance with an example of the present invention. This illustration is merely an example and should not unduly limit the scope of the claims herein. In an example, the system is a wireless backscatter detection system. The system has a control line 101 coupled to a processing device. The control line is configured with a switch to trigger the activation of the wireless signal. In an example, the system has a waveform pattern generator 103 coupled to a control line. The system has an RF transmitter 105 coupled to a waveform pattern generator. The system has a transmit and receive antenna 107. In an example, the system has a transmit antenna coupled to an RF transmitter and an RF receiver 105 coupled to an RF receive antenna. In an example, the system has an analog front end that includes a filter 109. Analog to digital converter 111 is coupled to the analog front end. The system has a signal processing device 113 coupled to the analog-to-digital converter. In a preferred example, the system has an artificial intelligence module coupled to the signal processing device 113. The module is configured to process information associated with the backscatter signal captured from the RF receive antenna. For more details on the system, reference may be made to the entire specification, in particular to the following.
Antenna
In an example, aspects of the antenna design can improve the performance of an activities of daily living ("ADL") system. For example, in scan mode the present technique continually looks for a moving target person (or user) in order to extract ADLs or fall events. Since these events may occur anywhere in the living space, the present system uses an antenna with a wide field of view. Once the target person is identified, the technology concentrates on signals from that specific target and attenuates echoes from all other targets. This may be accomplished by first estimating the location of the target using the wide-field-of-view antenna, and then focusing RF energy on the target once a particular target of interest has been identified. In an example, the technique may electronically switch among different antennas with narrow fields of view, or may use beamforming techniques to transmit from multiple transmit antennas simultaneously, controlling their phases so that the RF energy adds constructively around the target of interest while canceling destructively everywhere else. Such echoes are cleaner and can improve the performance of our combined ADL, fall-event, and vital-sign sensor.
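The transmit-beamforming idea above (phasing multiple transmit antennas so energy adds constructively at the target) can be illustrated with a standard delay-and-sum calculation for a uniform linear array. The carrier frequency, element count, and spacing below are illustrative assumptions, not parameters from the patent.

```python
# Hedged sketch: conventional delay-and-sum transmit beamforming for a ULA.
# Phasing element k to compensate its extra path delay makes the transmitted
# signals add constructively toward the steered angle.
import numpy as np

c = 3e8                      # speed of light, m/s
f = 7.29e9                   # example UWB carrier, Hz (assumed)
lam = c / f
d = lam / 2                  # half-wavelength element spacing
n_tx = 4
theta = np.deg2rad(30.0)     # direction of the target of interest

k = np.arange(n_tx)
# Steering weights: element k fires with phase -2*pi*(d/lam)*k*sin(theta)
weights = np.exp(-1j * 2 * np.pi * d / lam * k * np.sin(theta))

def array_gain(angle):
    """Magnitude of the summed far-field response toward `angle`."""
    steer = np.exp(1j * 2 * np.pi * d / lam * k * np.sin(angle))
    return abs(np.sum(weights * steer))
```

The gain peaks (equal to `n_tx`) at the steered angle and falls off elsewhere, which is the constructive/destructive behavior the paragraph describes.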
In another example, the layout of the antennas themselves is considered. In an example, the technique places the transmit and receive antennas in various physical configurations (uniform linear array (ULA), circle, square, etc.), which helps estimate the direction from which a radar signal returns by comparing the phases of the same radar signal on different receive antennas. This works because different configurations measure the direction of arrival along different dimensions. For example, when a human target falls, the vertical angle of arrival changes from top to bottom, so a vertical ULA is better suited to capturing this information. Likewise, the horizontal angle of arrival varies more during walking, so a horizontally oriented ULA is more sensitive to it and can provide more information to our algorithm. Of course, other variations, modifications, and alternatives are also possible.
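The phase-comparison idea above can be made concrete for a two-element pair: with element spacing d, a plane wave from angle theta produces an inter-element phase difference of 2*pi*(d/lambda)*sin(theta), which can be inverted to recover the angle. The half-wavelength spacing below is an illustrative assumption.

```python
# Hedged sketch: angle-of-arrival estimation from the phase difference of
# the same echo measured on two receive antennas spaced lambda/2 apart.
import numpy as np

def doa_from_phase(delta_phi, d_over_lambda=0.5):
    """Angle of arrival (radians) from the inter-element phase difference."""
    return np.arcsin(delta_phi / (2 * np.pi * d_over_lambda))

# Simulate an echo arriving from 20 degrees and recover the angle:
true_theta = np.deg2rad(20.0)
delta_phi = 2 * np.pi * 0.5 * np.sin(true_theta)
est = doa_from_phase(delta_phi)
```

A vertical pair resolves the vertical angle (useful for falls), while a horizontal pair resolves the horizontal angle (useful for walking), matching the discussion above.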
RF unit
In an example, the wireless RF unit may be a pulsed Doppler radar, a frequency-modulated continuous-wave (FMCW) radar, or a continuous-wave (CW) Doppler radar. In an example, at the transmitting end it will have standard RF elements, such as a VCO, PLL, and the like. At the receiving end, it may have matched filters, LNAs, mixers, and other elements. Multiple antennas may be driven by a single transmit/receive chain by sharing that chain in time, or with one chain per antenna.
Waveform unit
In an example, the waveform pattern generator generates control signals that define the type of radar signal generated by the radar RF unit. For example, for FMCW, it may generate a triangular wave of a particular slope and period that will linearly sweep the frequency of the RF unit according to those parameters. For pulsed Doppler radar, the generator will produce pulses of a particular width and period, which will modulate the RF output accordingly.
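A minimal sketch of the triangular FMCW frequency command described above (the 24-24.25GHz sweep is consistent with the FMCW module discussed later, while the 1 ms period is an illustrative assumption, not a definitive implementation):

```python
import numpy as np

def triangular_chirp_freq(t, f0, bandwidth, period):
    """Instantaneous frequency command for a triangular FMCW sweep:
    rises linearly from f0 to f0 + bandwidth, then falls back, each
    half taking period/2."""
    phase = (t % period) / period            # position in cycle, 0..1
    tri = 2 * np.minimum(phase, 1 - phase)   # triangle wave, 0..1
    return f0 + bandwidth * tri

t = np.linspace(0, 1e-3, 1001)               # one assumed 1 ms period
f = triangular_chirp_freq(t, f0=24.0e9, bandwidth=250e6, period=1e-3)
print(f.min() / 1e9, f.max() / 1e9)          # sweeps 24.0 to 24.25 GHz
```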
Baseband unit
In an example, the gain and filtering stage filters the radar echo to remove any unwanted signals and then amplifies the remaining signal using different techniques. For example, current artificial intelligence (AI) techniques can determine which targets are to be tracked and provide feedback so that the stage filters out the radar echoes of any and all signals except the signal that needs to be tracked. If the target person is moving, the echo signal will fluctuate, in which case the technique applies automatic gain control (AGC) to find the optimal gain to fill the full dynamic range of the ADC in the subsequent stage. In an example, the return signal is converted to digital samples by a front-end component such as an analog-to-digital converter (ADC).
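The AGC step can be sketched as a simple gain computation that scales the observed echo peak to a fraction of the ADC full-scale input (the headroom factor, echo amplitude, and ADC parameters are illustrative assumptions):

```python
import numpy as np

def agc_gain(samples, adc_full_scale, headroom=0.9):
    """Gain that maps the observed echo peak to a fraction (headroom)
    of the ADC full-scale input, so the signal fills the converter's
    dynamic range without clipping."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return 1.0
    return headroom * adc_full_scale / peak

# A weak fluctuating echo and an assumed 1.0 V full-scale ADC input.
echo = 0.02 * np.sin(np.linspace(0, 20 * np.pi, 1000))
g = agc_gain(echo, adc_full_scale=1.0)
print(round(np.max(np.abs(g * echo)), 3))  # 0.9: ~90% of full scale
```

In practice the gain would be updated continuously as the echo level fluctuates, with smoothing to avoid abrupt steps.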
Fig. 2 is a simplified illustration of a sensor array 200 according to an example of the invention. The illustration is merely an example and should not unduly limit the scope of the claims herein. A sensor array is shown. The sensor array includes a plurality of passive sensors 201. In an example, a plurality of passive sensors are spatially placed within a spatial region of a living area. The sensor array has active sensors, such as one or more radar sensors 203. In addition, the array has a feedback interface 205, such as a speaker for calling a target person in a spatial region of the living area.
In an example, the present technology is provided to identify various activities in a home using non-wearable devices. In an example, the present technology is as minimally invasive to privacy as possible and will use less invasive sensors. Examples of sensors may include, but are not limited to, wireless backscatter (e.g., radar, WiFi), audio (e.g., microphone array, speaker array), video (e.g., pan-tilt-zoom (PTZ), stereo), pressure pads, infrared, temperature, ultraviolet, humidity, pressure, smoke, any combination thereof, and the like.
Active sensor: radar (RADAR)
In an example, the technique may use wireless backscatter to measure a person's motion, position, and environmental state (e.g., door open/closed), or other environmental conditions. In an example, wireless backscatter can also be used to measure vital signs, such as heart rate and respiratory rate. In an example, wireless technology may operate at non-line-of-sight ranges and is less invasive than cameras, microphones, and the like. In an example, the technology may use radar/backscatter sensors to achieve two objectives: (1) finding the location of activity; and (2) sensing the different activities associated with that location. Of course, other variations, modifications, and alternatives are also possible.
In an example, the present technology and system includes radar systems operating in multiple frequency bands, such as 10GHz or less, 24GHz or thereabouts, 60GHz, 77-81GHz, and so on. In an example, the different frequencies interact differently with various objects in the environment. In an example, the different frequency bands also have different specifications for the available signal bandwidth and the allowed signal power. In an example, the present techniques optimally combine a reflector's reflections across multiple frequency bands to achieve large area coverage and/or to improve accuracy. Of course, other variations, modifications, and alternatives are also possible.
In an example, as shown, each radar operating in a particular frequency band will use multiple transmit and receive antennas. In an example, with the plurality of transmitters, the technique may perform transmit beamforming to concentrate radar signals onto a particular target. In an example, the technique uses multiple receivers to collect reflected signals from different reflectors (e.g., human body, wall). After further processing, this allows the direction of each reflector relative to the radar to be found. In an example, the technique also uses multiple transmitters and receivers to form a virtual array, which allows a radar array with many elements to be emulated using a small number of transmit and receive chains. The main benefit of this is increased angular resolution without using a large physical array, thus saving space and component cost. In an example, different antenna array configurations are included to improve coverage (by using beamforming) or to add three-dimensional positioning capabilities (by using two-dimensional arrays).
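A minimal sketch of the virtual-array idea: with appropriately chosen spacings, each transmit/receive pair behaves like a virtual receiver located at the sum of the two element positions, so two transmitters and four receivers can emulate an eight-element array (the spacings below are the standard textbook choice in units of half-wavelength, not values from this specification):

```python
def virtual_array(tx_positions, rx_positions):
    """MIMO virtual element positions: one virtual receiver at tx + rx
    for every transmit/receive antenna pair."""
    return sorted({round(tx + rx, 9)
                   for tx in tx_positions for rx in rx_positions})

# 2 TX spaced 4 units apart, 4 RX spaced 1 unit apart (units of
# half-wavelength): 8 distinct virtual elements from 6 physical antennas.
tx = [0.0, 4.0]
rx = [0.0, 1.0, 2.0, 3.0]
v = virtual_array(tx, rx)
print(v)        # [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
print(len(v))   # 8
```

Doubling the effective aperture this way is what increases angular resolution without adding receive chains.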
In examples where standard radar signal modulation techniques (e.g., FMCW/UWB) are used on MIMO radar, the technique will first separate signals from different ranges and angles. The technique will then identify static reflectors (e.g., a chair, wall, or other feature) and moving reflectors (e.g., a target person, pet, or the like). For tracked moving targets, the technique will further process the signals for each of the reflectors. For example, the technique will use different techniques to extract raw motion data (e.g., similar to a spectrogram). In an example, the technique will apply various filtering processes to extract periodic signals generated by vital signs such as heart rate, respiratory rate, and the like. In an example, the raw motion data and extracted vital signs will be passed to a downstream process where they will be combined with data from other sensors, such as radars operating at different frequencies or entirely different sensor types, to extract higher-level insight about the environment. Of course, other variations, modifications, and alternatives are possible.
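The range-separation step can be sketched for the FMCW case: the mixer output (beat) frequency of a reflector is proportional to its range, so an FFT of the beat signal separates reflectors by range (the sweep bandwidth, chirp duration, and sample rate below are illustrative assumptions):

```python
import numpy as np

C = 3e8
BANDWIDTH = 250e6   # assumed sweep bandwidth (24-24.25 GHz band)
T_SWEEP = 1e-3      # assumed chirp duration
FS = 2e6            # assumed ADC sample rate
N = int(FS * T_SWEEP)

def beat_signal(target_range):
    """Mixer-output tone for one static reflector: f_beat = 2*R*B/(c*T)."""
    f_beat = 2 * target_range * BANDWIDTH / (C * T_SWEEP)
    t = np.arange(N) / FS
    return np.cos(2 * np.pi * f_beat * t)

def estimate_range(sig):
    """Recover range from the dominant FFT bin of the beat signal."""
    spectrum = np.abs(np.fft.rfft(sig))
    f_beat = np.argmax(spectrum) * FS / N
    return f_beat * C * T_SWEEP / (2 * BANDWIDTH)

print(estimate_range(beat_signal(6.0)))  # ~6.0 m, within one range bin
```

With multiple receive antennas, a second FFT across antennas would separate the same reflectors by angle, yielding the range-angle map the text describes.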
Audio sensor
In an example, the present technology uses a sensor array having a microphone array with a plurality of microphones. In an example, these microphones will be used to determine the direction of arrival of any audio signal in the environment. In an example, the use of microphones in combination with other sensors (e.g., radar) will be critical to performing two tasks. First, it will augment the radar signal in identifying various activities (for example, distinguishing the sounds produced by walking from those of sitting); whether the target is watching television (TV), for instance, is easier to determine from audio signals. Second, in the event of an emergency such as a fall, the technique can use the radar signals to identify the location of the fall and then beamform the microphone array toward that location so that any audio signal generated by the target can be captured. Of course, other variations, modifications, and alternatives are also possible.
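The microphone steering described above can be sketched as a delay-and-sum beamformer: once radar provides the target's direction, each microphone channel is shifted to undo its arrival delay before summing (the array geometry and sample rate are illustrative assumptions, with delays quantized to whole samples for simplicity):

```python
import numpy as np

FS = 16000                                   # assumed audio sample rate
C_SOUND = 343.0                              # speed of sound, m/s
MIC_X = np.array([0.0, 0.05, 0.10, 0.15])    # assumed 4-mic linear array

def delay_and_sum(channels, theta_deg):
    """Steer the array toward theta_deg by undoing each microphone's
    arrival delay (rounded to whole samples) and averaging."""
    delays = MIC_X * np.sin(np.radians(theta_deg)) / C_SOUND
    shifts = np.round(delays * FS).astype(int)
    out = np.zeros_like(channels[0], dtype=float)
    for ch, s in zip(channels, shifts):
        out += np.roll(ch, -s)
    return out / len(channels)

# Simulate a click arriving from 40 degrees.
pulse = np.zeros(512)
pulse[100] = 1.0
true_shifts = np.round(MIC_X * np.sin(np.radians(40)) / C_SOUND * FS).astype(int)
channels = [np.roll(pulse, s) for s in true_shifts]

print(delay_and_sum(channels, 40).max())   # 1.0: mics add coherently
print(delay_and_sum(channels, -40).max())  # much less: energy spread out
```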
Sensor fusion and soft sensor
In addition to radar sensors, which are considered active sensors, the sensor system (e.g., one or more boxes) will have additional passive sensors for capturing sound, chemical signatures, and environmental conditions. Each sensor is capable of capturing a different environmental context of the premises in which the person being tracked resides or occupies. For example, an ultraviolet (UV) sensor may monitor how often sunlight enters a room. In an example, a light sensor may determine the lighting conditions of a person's home or living space.
In an example, the microphone array may serve a variety of functions, such as sensing sounds in a room, calculating the length of time a person spends watching television, or determining how often they go to the restroom by listening for the sound of a toilet flush or other audio features. In an example, the present technology may use an inventive solution in which a person's location is found using active sensors, and the microphone array is then steered to enhance only sound from that location, among other features. In an example, the present technology may refer to sensors derived from hardware sensors using a particular algorithm as software sensors or soft sensors. Thus, by creating different software sensors, the same hardware sensor can be used for a variety of different applications. Here, a software sensor may combine signals from one or more sensors and then apply sensor fusion and artificial intelligence (AI) techniques to generate the desired output. Of course, other variations, modifications, and alternatives are possible.
Soft sensor for detecting cooking and eating habits
For example, the radar sensor may determine information about the location of a person in the home, such as whether the person is in the kitchen area or another area. In an example, when a target person turns on a microwave oven, a specific RF signature is generated that can be tracked. In an example, the technique may combine this information to infer whether the target person walked to the kitchen and turned on the microwave. Also, when the target person is preparing food in the kitchen, he or she produces a lot of characteristic noise, such as the clatter of tableware, the sound of chopping vegetables, or other audio features. Thus, if the target person walks into the kitchen and stays there for a period of time while the microphone captures these sounds, the technique can infer that food is being cooked or that other activities are being performed.
Soft sensor for detecting toilet habits
In an example, toileting frequency may be very valuable in indicating a person's health condition. The present technology may utilize radar or other sensing technology to track whether a person goes to the restroom. Furthermore, in an example, the technique may also capture the acoustic signature of a toilet flush. In an example, the technique combines these two types of information, which may be correlated with toileting frequency. Similarly, in an example, bathing is a distinctive activity involving a specific sequence of actions lasting 4-5 minutes. By learning these patterns, the technique can find a person's bathing routine.
Soft sensor for detecting movement habits
In an example, different movements of the target person trigger different sensors. In an example, the radar can detect a person's fall by observing the micro-Doppler patterns generated by different parts of the target while falling. In an example, the technique can also detect falls from the microphone array and the vibration sensor simultaneously. In an example, the technique may also detect how an individual's movement speed changes over a long period of time by monitoring the positional information provided by radar or other sensing techniques. Also, in an example, the technique may detect unstable transitions by analyzing the target's gait. In an example, the technique may detect lingering in front of a door by analyzing radar signal patterns. In an example, the technique may detect immobility by analyzing radar returns. In this case, the technique may ascertain the presence of the target by analyzing the target's vital signs (e.g., respiratory rate or heart rate) or the breadcrumb trail of the target's location trajectory.
In any and all of the above cases, the technique can also learn the exact environmental conditions that trigger a particular state. For example, the technique may ascertain whether the target person is not moving because of watching television or video for a long time, or whether the target is simply lying in bed for a long time. These insights can be used to design motivational measures to alter the behavioral patterns of the target to improve quality of life.
Soft sensor for detecting vital signs
In an example, the technique may estimate a person's vital signs by sensing vibrations of the target's body in response to respiration or heartbeat; each motion results in a small, detectable phase change in the radar echo signal. In an example, the technique will use a variety of signal processing techniques to extract these signals. Of course, other variations, modifications, and alternatives are also possible.
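A minimal sketch of the filtering idea: treating the radar echo phase as a slow-time signal, the respiration and heartbeat components can be separated by searching for the dominant spectral peak within each physiological band (the sample rate, rates, and amplitudes below are simulated assumptions, not values from this specification):

```python
import numpy as np

FS = 20.0                                 # assumed slow-time sample rate, Hz
t = np.arange(0, 60, 1 / FS)              # one minute of radar phase data

# Simulated echo phase: chest motion from breathing (0.25 Hz, i.e.
# 15 breaths/min) plus a weaker heartbeat component (1.2 Hz, 72 bpm).
phase = 1.0 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)

def dominant_freq(sig, f_lo, f_hi, fs=FS):
    """Strongest spectral component inside a physiological band."""
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return freqs[band][np.argmax(spec[band])]

resp = dominant_freq(phase, 0.1, 0.5)       # respiration band
heart = dominant_freq(phase, 0.8, 2.0)      # heartbeat band
print(round(resp * 60), round(heart * 60))  # 15 72
```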
In an example, radio waves of different frequencies interact differently with the environment. For example, for the same body displacement, a 77GHz radar sees a much larger phase change than a 10GHz radar, so 77GHz is better suited to estimating the heartbeat accurately. But generally, the higher the frequency, the faster the attenuation, so the detection range of a low frequency radar is much larger. The present technology balances these important tradeoffs by using multi-frequency radar.
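The tradeoff can be made concrete with the standard round-trip phase relation Δφ = 4πd/λ (the 0.5 mm displacement below is an illustrative figure for heartbeat-induced chest motion, not a value from this specification):

```python
import math

C = 3e8

def echo_phase_shift(displacement_m, carrier_hz):
    """Round-trip phase shift of the radar echo for a small body
    displacement d: delta_phi = 4 * pi * d / lambda."""
    lam = C / carrier_hz
    return 4 * math.pi * displacement_m / lam

d = 0.5e-3  # assumed ~0.5 mm chest displacement from a heartbeat
phi_10 = echo_phase_shift(d, 10e9)
phi_77 = echo_phase_shift(d, 77e9)
print(round(math.degrees(phi_10), 1))  # 12.0 degrees at 10 GHz
print(round(math.degrees(phi_77), 1))  # 92.4 degrees at 77 GHz
print(round(phi_77 / phi_10, 1))       # 7.7x more phase-sensitive
```

The ratio is simply 77/10: phase sensitivity scales linearly with carrier frequency, which is exactly why the higher band resolves the heartbeat more easily.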
Soft sensor for detecting sleep habits
In an example, the present radar sensor may detect movements that occur during sleep, such as rolling over. In an example, as previously described, the radar sensor may also sense vital signs such as respiratory rate and heart rate. In an example, the technique can now effectively monitor the subject's sleep by combining roll-over patterns with the different breathing and heartbeat patterns. Furthermore, by combining the results of passive sensors such as thermometers, UV sensors, photodiodes, and the like, the technology can also find correlations between specific sleep patterns and environmental conditions. In an example, the technique may also use sleep monitoring soft sensors to learn about day-night reversal of sleep and the associated environmental conditions by looking at the different passive sensors. In an example, the technique may play an important role in providing feedback to improve the target person's sleep. For example, the technique may determine or learn that certain environmental conditions bring about better sleep and thereby improve future sleep. For more details on the sleep process, see the present description, in particular below.
Soft sensor for security applications
In an example, many of the sensing capabilities previously described may be used for security applications. For security applications, the technology may determine the location of one or more persons, which may be detected using presence detection sensors built on top of the radar signals. In an example, the techniques may eliminate one or more false positives triggered by conventional security systems. For example, when a window is suddenly blown open by wind, the technique (and system) checks whether someone is nearby before triggering an alarm. Likewise, vital signs, movement patterns, and other combinations may be used to identify any target person. The technique may trigger an alarm or alert if an unknown target person is detected nearby at certain times of the day.
In examples, any of the above-described sensing techniques may be combined, separated, or integrated. In an example, in addition to radar and audio sensors, other sensors may be provided in the sensor array. Of course, other variations, modifications, and alternatives are also possible.
Fig. 3 is a simplified illustration of a system 300 according to an example of the invention. The illustration is merely an example and should not unduly limit the scope of the claims herein. As shown, the system has hardware and methods (e.g., algorithms), cloud computing, personalized analysis, customer interactions, and APIs to various partners such as police, medical, and the like. For more details on the system, reference is made to the present description, in particular the following.
Fig. 4 is a detailed illustration 400 of a hardware apparatus according to an example of the invention. The illustration is merely an example and should not unduly limit the scope of the claims herein. As shown, the hardware elements include at least a hub device 401, a node 403, and a mobile node 405, each of which will be described in more detail below.
In an example, the hub includes various sensing devices. These sensing devices include radar, WiFi, Bluetooth, a ZigBee sniffer, a microphone and speaker, a smoke detector, a temperature detector, a humidity detector, an ultraviolet (UV) detector, a pressure detector, MEMS devices (e.g., accelerometer, gyroscope, and compass), a UWB sensor (for finding the positions of all deployed elements relative to each other), and the like. In an example, the hub is a gateway that connects to the internet through WiFi, GSM, Ethernet, landline, or other technology. The hub may also connect with other units (mini-nodes/mobile nodes) via Bluetooth, WiFi, ZigBee, or UWB and coordinate with them. In an example, some data processing, such as denoising and feature extraction, is also included to reduce the amount of data uploaded to the cloud. In an example, a hub alone is sufficient to cover a small living space. In an example, the hub is deployed as a single device in a suitable location (e.g., in the middle of the living space) to maintain good connectivity with all other units. The following figure provides an example of such a deployment.
Fig. 5 is a simplified illustration 500 of a hub in a spatial region according to an example of the invention. The illustration is merely an example and should not unduly limit the scope of the claims herein. As shown, the hub is deployed in the middle of the living space of the house.
In an example, as shown in fig. 6, a system 600 has nodes whose sensors are a subset of the sensors within the hub. The sensors are configured in different spatial locations to improve coverage and to improve accuracy in detecting critical events (e.g., falls, a person calling for help). The sensors may also communicate with the hub via WiFi, Bluetooth, ZigBee, UWB, or other technology. In addition, each mini-node may be deployed in a bathroom (where the chance of falling is high), in a kitchen (where eating habits can be learned by listening to sound, RF waves, and vibrations), or around the living space (which allows an approximate map of the space under consideration to be learned), etc. In addition, each mini-node may save power and cost by shifting complexity to the hub. This may even allow a battery to be used for a long time. For example, each node may have only single-antenna WiFi, while the hub may have multiple antennas to enable WiFi-based sensing. In addition, each node may use a simpler radar (e.g., single-antenna Doppler), while multiple input multiple output (MIMO) FMCW is used in the hub. Furthermore, each node may be configured with a single microphone, while the hub may have an array of microphones. Of course, other variations, modifications, and alternatives are also possible. As shown, each node is configured in a kitchen, shower, perimeter, or other location.
Fig. 7 is a simplified illustration 700 of a mobile node according to an example of the invention. The illustration is merely an example and should not unduly limit the scope of the claims herein. In an example, each mobile node carries a subset of the sensors in the hub. The mobile node sensors include a camera, such as RGB or IR. In an example, each node and the hub cooperatively find an event of interest and communicate that information to the mobile node. The mobile node then moves to the site and probes further. In an example, a camera may be used to visually determine what is happening at the location. In an example, free patrol may be used to detect any anomalies or to refine details of a map drawn based on the surrounding nodes. In an example, on-board UWB may enable accurate positioning of the mobile node, as well as wireless tomography, to determine accurate RGB and wireless maps of the living space. As shown, a mobile node (e.g., a cell phone or smart phone or other mobile device) may physically move throughout a spatial location. The mobile node may also be a drone or other device. Of course, other variations, modifications, and alternatives are possible. For more details on examples of hub devices, see the present description, in particular the following.
Fig. 8 is a simplified illustration of a hub device 800 in accordance with an example of the present invention. As shown, the hub device has a cylindrical housing 801 having a length and a diameter. The housing has an upper top region and a lower bottom region arranged parallel to each other. In an example, the maximum length of the housing is six to twenty-four inches and the diameter is no more than six inches, although other lengths and diameters are possible. In an example, the housing has sufficient structural strength to stand upright and protect the interior region within the housing.
In an example, the housing has a height that separates the top region from the bottom region of the housing. In an example, a plurality of layers 803 are located within the housing, numbered from 1 to N, where N is an integer greater than 2, such as 3, 4, 5, 6, 7, and so on.
As shown, various elements are included. As shown, a speaker device 809 is disposed within the housing above the bottom region. The hub device also has a computing module 811 above the speaker device, the computing module 811 comprising a processing device (e.g., a microprocessor). The device has an artificial intelligence module configured above the computing module, an ultra-wideband ("UWB") module 813 including an antenna array configured above the artificial intelligence module, and a frequency modulated continuous wave ("FMCW") module 815 having an antenna array configured above the UWB module. In an example, the FMCW module is configured to process electromagnetic radiation in a frequency range of 24GHz to 24.25GHz. In an example, the FMCW module outputs an FMCW signal using a transmitter and receives a backscatter signal using a receiver (e.g., a receiver antenna). The device has an audio module configured on top of the FMCW module and an inertial measurement unit ("IMU") module configured on top of the FMCW module. In an example, the audio module includes a microphone array for detecting energy in the audible frequency range for communication and for detecting sound energy. In an example, the IMU module includes at least one motion detection sensor comprising one of a gyroscope, an accelerometer, a magnetic sensor, or another motion sensor, and combinations thereof.
As shown, the speaker devices, computing modules, artificial intelligence modules, UWB modules, FMCW modules, audio modules, and IMU modules are arranged in a stacked configuration and are configured in multiple tiers numbered 1 through N, respectively. As shown, the speaker device is spatially configured to output energy over a 360 degree range from a midpoint of the device.
In an example, the computing module includes a microprocessor-based unit coupled to a bus. In an example, the computing module includes a signal processing core, a microprocessor core for an operating system, a synchronization processing core configured to time stamp and synchronize incoming information from each of the FMCW module, the IMU module, and the UWB module.
In an example, the device further includes a real-time processing unit configured to control the FMCW switch or the UWB switch or other switch requiring real-time switching operations of less than 1/2 milliseconds after receiving feedback from the plurality of sensors.
In an example, a device has a graphics processing unit configured to process information from an artificial intelligence module. In an example, the artificial intelligence module includes an artificial intelligence inference accelerator configured to apply a trained module using a neural network-based process. In an example, the neural network-based process includes a plurality of nodes numbered 1 through N. Further details of UWB modules may be found throughout the present specification and more particularly below.
Fig. 9 is a simplified illustration of an ultra-wideband module 900 for a hub in accordance with an example of the present invention. An ultra-wideband RF sensing device or module is shown. In an example, the apparatus has at least three antenna arrays 901, 903, 905 configured to sense backscatter of electromagnetic energy over a 360 degree range from a spatial location of a zero degree position associated with a midpoint of the device, wherein each antenna array is configured to sense a 120 degree range. As shown, each of the three antenna arrays includes a support, and a plurality of transmit antennas 909 spatially arranged on a first portion of the support. The support also has a transmit integrated circuit coupled to each of the plurality of transmit antennas and configured to transmit UWB signals outwardly. Each antenna array also has a plurality of receive antennas spatially arranged on a second portion of the support. The support also has a receive integrated circuit coupled to each of the plurality of receive antennas, the receive integrated circuit configured to receive an incoming UWB signal and configured to convert the UWB signal to baseband.
In an example, the device has a triangular configuration including a first antenna array, a second antenna array, and a third antenna array among the at least three antenna arrays. The three arrays provide a 360 degree field of view measured in the horizontal plane and an 80 degree field of view measured in a vertical plane perpendicular to the horizontal plane. As previously described, the three arrays are enclosed in a housing that provides mechanical support. In an example, each sensor array is provided on one substrate member, the substrate members being configured in a triangular arrangement. Each substrate member has a face arranged vertically along the direction of its support member.
In an example, the UWB module may operate with multiple antenna arrays at a center frequency of 7.29GHz and a bandwidth of about 1.5GHz to meet FCC/ETSI compliance standards. In an example, the module has a combined horizontal field of view of 360 degrees about the module's center point. In an example, the range of the module is greater than 10 meters, but may be shorter or longer. In an example, the module is configured to achieve a frame rate of 330 frames per second (FPS) or more per Tx-Rx pair. In an example, the combined 360 degree horizontal field of view is implemented using three (3) antenna arrays, each covering 120 degrees. In an example, each antenna array includes one TX and four RX antennas. Each antenna array may complete a frame acquisition in 1 millisecond or less. Thus, a total time of three (3) milliseconds may cover all three (3) sectors, with a frame rate of up to 330 frames per second per sector (per Tx-Rx) in an example. In an example, the module has programmability of various parameters similar to the Novelda X4M03 module. In an example, the module is a hybrid architecture with four X4 radar integrated circuit devices, employing a MIMO configuration that can be switched between the three antenna arrays. This configuration enables all four Rx frames in an antenna array to be captured simultaneously. Further details of the present UWB module will be provided in this specification, particularly below.
Fig. 10 is a simplified diagram 1000 of electrical parameters according to an example of an ultra-wideband module. In an example, the various parameters are as listed in the table. Each of the parameters listed in the table is suggested and can be adjusted to minimize cost and complexity while still achieving performance. In an example, the data transmission speed of the module is 3.2MBps (e.g., 330 frames/second × 200-sample frame length × 2 bytes × 2 × 4 receivers × 3 modules). In an example, the module may include a microcontroller unit to communicate with the X4 SoC over an SPI interface. In an example, the central processing unit communicates with the computing module through a serial interface such as a universal serial bus (i.e., USB). The microcontroller is configured on board with sufficient memory to store the raw data. In an example, the memory capacity is greater than 128MB, such as a 128MB SDRAM. The electrical parameters configured in the system diagram will be described in further detail below.
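The quoted data rate can be checked by reproducing the arithmetic in the parenthetical (reading the bare "×2" as an I/Q or similar doubling factor is an assumption, since the text does not name it):

```python
# Reproduce the module's quoted raw data rate:
# 330 frames/s x 200-sample frame x 2 bytes x 2 x 4 RX x 3 arrays.
frames_per_s = 330
frame_len = 200          # samples per frame
bytes_per_sample = 2
doubling_factor = 2      # assumed I/Q (or similar) factor from the "x2"
n_rx = 4
n_arrays = 3

rate = (frames_per_s * frame_len * bytes_per_sample
        * doubling_factor * n_rx * n_arrays)
print(rate)          # 3168000 bytes/s
print(rate / 1e6)    # ~3.2 MB/s, matching the figure quoted in the text
```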
Fig. 11 is a simplified system diagram 1100 of an ultra-wideband module according to an example of the invention. As shown, the system has a microcontroller 1101, such as an integrated circuit sold under the name ATSAM E16E by Microchip Technology Inc. of 2355 West Chandler Blvd., Chandler, Arizona 85224-6199, USA. The microcontroller has a serial interface, such as a universal serial interface. The controller is coupled to random access memory 1105 for storing raw data, as well as to clock and other miscellaneous circuitry 1103. In an example, the output of the controller communicates 1107 with four XeThru X4 SoCs manufactured by Novelda AS of Norway.
In an example, the basic components of the X4 SoC are a transmitter, a receiver, and associated control circuitry. The system is controlled by the system controller and may be configured via a 4 (6) wire serial peripheral interface (SPI). For example, the X4 receive path (RX) includes a low noise amplifier (LNA), a digital-to-analog converter (DAC), 1536 parallel digital integrators, and an output memory buffer accessible through the SPI. RX is tightly integrated with the transmitter (TX) and is designed for coherent integration of the received energy. The X4 transmit path (TX) includes a pulse generator capable of generating pulses at a rate of up to 60.75MHz. The output frequency and bandwidth are designed to meet worldwide regulatory requirements. The radar transceiver can operate fully autonomously and can be programmed to capture data at predetermined intervals and then alert or wake up a host MCU or DSP through a dedicated interrupt pin. The power management unit controls the on-chip voltage regulators and enables low-power applications to use efficient duty cycling by powering down circuit sections when they are not needed. In idle mode, the power consumption of the system may be configured to be less than 1mW when all analog front-end components are off. As shown, each of the four X4 SoCs is connected in parallel with a switch.
In an example, a switch 1109 is coupled to each antenna array as shown. In an example, the switch may be an HMC241, HMC7992, or ADRF5040 SP4T RF switch from Analog Devices, Inc. These switches are direct current (DC) to 12GHz non-reflective RF switches suitable for 4G cellular communications, military communications, and radio applications. The HMC241, HMC7992, and ADRF5040 are radio frequency (RF) non-reflective/absorptive single-pole four-throw (SP4T) switches that can interface with 3.3V TTL, LVTTL, CMOS, and LVCMOS logic. The operating frequency range of these switches spans DC to 12GHz. The HMC241 is a GaAs MMIC RF switch with an operating frequency range of DC to 4GHz; it uses a single +5V power supply. The frequency range of the HMC7992 is 100MHz to 6GHz; its ESD rating is 2kV (HBM) class 2, and it uses a single supply from +3.3V to +5V. The ADRF5040 comes in a 4mm x 4mm LFCSP small form factor package and requires a ±3.3V dual power supply; its frequency range is 9kHz to 12GHz, and it has the additional advantage of a 4kV (HBM) ESD rating. The HMC241, HMC7992, and ADRF5040 are well suited to 4G cellular infrastructure such as base stations and repeaters, as well as military communications and industrial test and measurement applications. Of course, other variations, modifications, and alternatives are also possible.
In an example, the UWB module includes a switch configured between a plurality of UWB transceivers. The switch is configured to select one of the three antenna arrays to sense directional scattering while the other two antenna arrays are turned off. In an example, the switch is an RF switch, such as the part listed under part number ADRF5040 from Analog Devices, Inc. In an example, the UWB module further has a controller configured to control the switch and the three antenna arrays. In an example, the controller decides, through a predetermined process, which of the three antenna arrays is activated while the other two antenna arrays are turned off.
In an example, the at least three antenna arrays are configured to sense electromagnetic energy at a frequency of 6 to 8GHz. As previously described, the sensing device is spatially centered within the room to detect movement of the human user.
In an example, the present invention provides a method of processing an electromagnetic signal generated from an ultra wideband (UWB) RF signal to detect human user activity. Referring to fig. 11, the method includes generating an outgoing UWB signal in baseband from a transmitting integrated circuit coupled to a microcontroller device. The method includes receiving the outgoing baseband UWB signal at a switching device coupled to the microcontroller and directing the outgoing UWB signal to one of three antenna arrays using the switching device. In an example, the three antenna arrays are configured in a triangular configuration to transmit the outgoing UWB signal over a 360 degree visual range from a spatial location of a zero degree position related to a midpoint of the device, where each antenna array is configured to sense a 120 degree range on a horizontal plane. Each antenna array is configured to sense and transmit over a visibility range of at least 80 degrees, as measured from a plane perpendicular to the horizontal plane. In an example, each of the three antenna arrays includes a support, a plurality of transmit antennas spatially configured on a first portion of the support, and a transmit integrated circuit coupled to each of the plurality of transmit antennas and configured to transmit outgoing UWB signals. Each antenna array also has a plurality of receive antennas spatially arranged on a second portion of the support. The antenna array also has a receive integrated circuit coupled to each of the plurality of receive antennas, the receive integrated circuit configured to receive the incoming UWB signal and to convert the UWB signal to baseband. In an example, the method also receives a backscattered electromagnetic signal caused by activity of a human user redirecting the outgoing UWB signal. In an example, the received signal is processed with an artificial intelligence module to form an output.
Of course, other variations, modifications, and alternatives are also possible.
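The end-to-end method above (generate a baseband signal, direct it to one array via the switch, transmit, receive backscatter, and process the result) can be sketched as follows. The callback names and the simple energy-threshold stand-in for the artificial intelligence module are assumptions for illustration only, not the disclosed implementation:

```python
# Illustrative pipeline sketch: generate -> switch -> transmit -> receive
# backscatter -> baseband -> detection. The threshold detector is a toy
# stand-in for the artificial intelligence module described in the text.
import numpy as np

def detect_activity(rx_baseband: np.ndarray, noise_floor: float = 1e-3) -> bool:
    """Toy stand-in for the AI module: flag human activity when the
    backscattered baseband energy exceeds a noise floor."""
    return float(np.mean(np.abs(rx_baseband) ** 2)) > noise_floor

def sense_once(select_array, transmit, receive, array_id: int) -> bool:
    select_array(array_id)                            # switch directs signal to one array
    tx = np.exp(2j * np.pi * 0.1 * np.arange(256))    # outgoing baseband pulse (synthetic)
    transmit(tx)
    rx = receive()                                    # backscatter, converted to baseband
    return detect_activity(rx)
```

Here `select_array`, `transmit`, and `receive` are hypothetical hardware callbacks; in the disclosed system those roles are played by the switching device, the transmit integrated circuit, and the receive integrated circuit, respectively.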
Fig. 12 is an example 1200 of antenna array parameters for an ultra-wideband module according to the present invention. As shown, each antenna array has one (1) Tx and four (4) Rx. Each Tx/Rx is designed to cover a 120 degree azimuth field of view and to maximize the elevation field of view as needed. In an example, a serially fed patch antenna may be used. In an example, the antenna is fabricated using a material such as a Rogers 4350 substrate. In an example, the antenna may include an integrated WiFi filter and be optimized for frequencies between 6.0 and 8.5GHz, if desired. In an example, the antenna design complies with the FCC/ETSI standards for the transmit center frequency. Of course, other variations, modifications, and alternatives are also possible.
Fig. 13 is an example of an antenna array configuration 1300 of an ultra-wideband module according to the present invention. As shown, the antenna array is spatially disposed on a support (e.g., a board). The antenna array includes four (4) Rx in a two-dimensional (2D) configuration, as shown. As shown, R4 is aligned with R1, R2, or R3 and separated by a fraction of λ, while the remaining antennas are separated by 2λ. Of course, other variations, modifications, and alternatives are also possible.
In an example, the present invention provides a method of processing electromagnetic signals generated from ultra wideband RF signals to detect human user activity. In an example, the method includes generating an outgoing baseband UWB signal. The method further includes receiving the outgoing baseband UWB signal at a switching device and directing the outgoing UWB signal to one of three antenna arrays using the switching device, the three antenna arrays being configured in a triangular configuration to transmit the outgoing UWB signal over a 360 degree visual range from a spatial location of a zero degree position associated with a midpoint of the device, wherein each antenna array is configured to sense a 120 degree range on a horizontal plane. Each antenna array is configured to sense and transmit over a visual range of at least 80 degrees when measured from a vertical plane perpendicular to the horizontal plane.
In an example, each of the three antenna arrays has a support, such as a board or printed circuit board. In an example, each array has a plurality of transmit antennas spatially configured on a first portion of the support, a transmit integrated circuit coupled to each of the plurality of transmit antennas and configured to transmit outgoing UWB signals, a plurality of receive antennas spatially configured on a second portion of the support, and a receive integrated circuit coupled to each of the plurality of receive antennas and configured to receive incoming UWB signals and to convert the UWB signals to baseband signals. In an example, the method includes receiving a backscattered electromagnetic signal produced when activity of a human user redirects the outgoing UWB signal.
In an example, in the apparatus of fig. 11, the UWB module comprises a microcontroller unit and a clock circuit coupled to a memory resource, the microcontroller unit configured with a universal serial bus interface coupled to the computing module; wherein the computing module is configured with an artificial intelligence module to process information of the backscattered electromagnetic signals from the baseband signal to detect activity of the human user.
In an example, the support includes a major plane positioned perpendicular to the direction of gravity.
In an example, the antenna array comprises at least three antenna arrays spatially arranged in a triangular configuration comprising a first antenna array, a second antenna array, and a third antenna array, the at least three antenna arrays providing a 360 degree visual range measured from a horizontal plane and an 80 degree visual range measured from a vertical plane perpendicular to the horizontal plane. In an example, the apparatus further includes a controller configured to control a switch coupled to each of the three antenna arrays, the controller cycling through a predetermined process to decide which of the three antenna arrays to activate while turning off the other two antenna arrays.
In an example, each antenna array includes 1-TX and 4-RX.
In an example, a system has a switching device coupled between each antenna array and four receive channels, each receive channel coupled to a receive integrated circuit device, one transmit channel coupled to a transmit integrated circuit device, and a microcontroller unit coupled to a bus of the receive integrated circuit device and the transmit integrated circuit device, the microcontroller unit coupled to a memory resource configured with a microcontroller to store raw data from information derived from the four receive channels, the microcontroller unit coupled to a clock.
In examples, the present technology includes methods, apparatuses, and devices for processing signals. As shown at 1400 in fig. 14, the present FMCW apparatus operates in the 24GHz ISM band with a plurality of antenna arrays 1401, 1403, 1405. In an example, the device has various capabilities, such as a combined horizontal angle of view of 360 degrees, a range of 12 meters or more, a frame rate of 1000 FPS or more per Tx-Rx pair, programmability of various parameters, and the like. In an example, each antenna array (including TX and RX) is in communication with an FMCW module, as shown. The three antenna arrays are arranged in a triangular configuration, with each antenna array having a viewing angle range of 120 degrees.
Referring now to fig. 15, a device 1500 has various elements, such as antenna array 1, antenna array 2, and antenna array 3. In an example, the device has a 360 degree horizontal field of view achieved by three sets of antenna arrays, each covering 120 degrees (with as wide a vertical field of view as possible). In the example, each antenna array consists of 2 transmitters and 4 receivers. In an example, the device achieves 1000 FPS per TX-RX pair by generating six chirps, one for each of the six TX antennas in turn, within 1 millisecond. Of course, other variations, modifications, and alternatives are also possible.
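The chirp schedule described above can be checked arithmetically: with six transmitters fired in turn within a 1 millisecond frame, every TX-RX pair is refreshed 1000 times per second.

```python
# Arithmetic check of the chirp schedule: six chirps (3 arrays x 2 TX)
# fired in turn within 1 ms yield 1000 FPS for every TX-RX pair.
N_TX = 6                       # 3 antenna arrays x 2 transmitters each
FRAME_PERIOD_S = 1e-3          # all six chirps fit in 1 millisecond
chirp_slot_s = FRAME_PERIOD_S / N_TX    # time budget per chirp (~166.7 us)
fps_per_pair = 1.0 / FRAME_PERIOD_S     # each pair fires once per frame -> 1000.0
```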
As shown in the table in fig. 16, various device parameters are described. In the examples, the listed parameters are suggestions and may be modified or replaced to minimize cost and complexity while achieving the desired performance. In an example, a computing module, which is part of the overall system, accesses sampled radar data through a USB interface. In an example, the data transmission rate of the device is 6.14MBps (e.g., 1000fps × 128 samples/frame × 2 bytes × 8 antennas × 3 modules). In an example, the device has a microcontroller, such as one from Cypress Semiconductor, including memory resources for storing raw radar data. In an example, the device has a memory with a capacity of 2 gigabits or greater. In an example, various configurations are described throughout this specification, particularly below.
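The quoted data-transmission rate follows directly from the listed parameters, as the short calculation below confirms:

```python
# Reproduces the data-rate figure quoted above:
# 1000 fps x 128 samples/frame x 2 bytes/sample x 8 antennas x 3 modules.
fps = 1000
samples_per_frame = 128
bytes_per_sample = 2
antennas = 8            # 2 TX x 4 RX virtual array per module
modules = 3
rate_bytes_per_s = fps * samples_per_frame * bytes_per_sample * antennas * modules
rate_mbps = rate_bytes_per_s / 1e6      # ≈ 6.14 MBps, matching the text
```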
In an example, fig. 17 shows a simplified diagram 1700 of the system architecture of an exemplary FMCW device according to the invention. In an example, the present system has three antenna arrays 1701, each with 2-TX plus 4-RX (i.e., an 8-element virtual array). Each antenna array is coupled to a dual channel TX, a four channel RX, a four channel AFE RX, and an FMCW frequency generator 1703. In an example, the system has a radio frequency (RF) module including a dual channel TX listed under part number ADF5901 of Analog Devices, Inc. In the example, the system has a four channel RX, listed under part number ADF5904 of Analog Devices, Inc. The system also has a four channel AFE RX, listed under part number ADAR7251 of Analog Devices, Inc. In addition, the system has an FMCW generator listed under part number ADF4159 of Analog Devices, Inc. The system has a microcontroller 1705, listed under part number CYYSB301X from Cypress, which is coupled to a system memory such as a 2GB SDRAM, with SPI interface control between the RF module and the microcontroller. The system also connects the microcontroller to TCP via a Universal Serial Bus (USB) 1707. Of course, other variations, modifications, and alternatives are also possible.
In an example, fig. 18 shows a simplified diagram 1800 of the system architecture of an exemplary FMCW device according to the invention. In the example, the system has three antenna arrays 1801, each with 2-TX plus 4-RX (i.e., an 8-element virtual array). In an example, the system has a radio frequency (RF) module 1803. The RF module has a dual channel TX listed under part number ADF5901 of Analog Devices, Inc. The module has a four channel RX listed under part number ADF5904 of Analog Devices, Inc.
In an example, the system has a processing and acquisition module 1807. The module has a four channel AFE RX, listed under ADAR7251 of Analog Devices, Inc., and an FMCW generator, listed under ADF4159 of Analog Devices, Inc. This module is coupled to and communicates with a 12 channel 3:1 de-multiplexing switch 1805 (listed under TS3DV621 of Texas Instruments). The system has a microcontroller, such as the Cypress part listed under part number CYYSB301X, coupled to a memory resource, such as a 2GB SDRAM. The system has SPI interface control between the RF module and the microcontroller. The USB interface is coupled to TCP 1809. Of course, other variations, modifications, and alternatives are also possible. For more details, see the more detailed illustration 1850 of FIG. 18A, described below.
In an example, referring to FIG. 18A, on the transmit channel 1851 a microcontroller is coupled to a waveform generator to output a digital signal (e.g., via register programming) that is converted in a digital-to-analog converter to a baseband analog signal, which is fed to a switch. The switch is an analog switch that can select one of the three arrays. The baseband analog signal is transmitted to the RF integrated circuit, which configures the baseband analog signal into an FMCW RF signal to be transmitted through the TX antenna.
In an example, four FMCW signals are received from four RX antennas on receive path 1853. These four signals are received in parallel and fed into the RF integrated circuit for processing, which outputs four corresponding baseband analog signals, each of which is fed to a switch. The switch allows the signals from one of the three antenna arrays to be passed, in parallel, to respective analog-to-digital converters. Each analog-to-digital converter is connected to a microcontroller. Each analog-to-digital converter converts the incoming baseband signal into a digital signal, which is fed to the microcontroller. Of course, other variations, modifications, and alternatives are also possible.
In an example, fig. 19 shows a simplified diagram 1900 of a system architecture of an example FMCW device according to the invention. The system has three antenna arrays 1901, each with 2-TX plus 4-RX (i.e., an 8-element virtual array). The system has an RF switch 1903 for switching between any of the antenna arrays. In an example, the system has an RF module and an acquisition module 1905. The RF module and acquisition module have a dual channel TX as listed under ADF5901 of Analog Devices, Inc. The module has a four channel RX (listed under ADF5904 of Analog Devices, Inc.), a four channel AFE RX (listed under ADAR7251 of Analog Devices, Inc.), and an FMCW generator (listed under ADF4159 of Analog Devices, Inc.). The module has a microcontroller, such as the part listed under CYYSB301X from Cypress Semiconductor. The microcontroller is coupled to a memory resource, such as a 2GB SDRAM device. The system also has an interface, such as SPI interface control 1907, between the RF module and the Cypress microcontroller. The system also has a serial interface, such as a USB interface, for connecting to TCP. Of course, other variations, modifications, and alternatives are also possible.
Fig. 20 is a simplified example of an antenna array according to an embodiment of the present invention. As shown, a serially fed patch antenna may be included. In the example, each antenna array 2001 has 2 TX and 4 RX, and various variations are possible. In an example, each RX covers a 120 degree horizontal field of view. In an example, each RX ideally has a wide vertical field of view. In the example, the antenna array has four (4) RX, which are equally spaced apart by λ/2 in the horizontal direction.
In the example, each antenna array has two (2) TX spaced apart by λ in the horizontal direction and by λ/2 in the vertical direction, which together with the 4 RX 2003 form a virtual 2D array. In an example, virtual antenna mapping is provided to balance physical channel power across multiple physical antennas, especially when multiple-input multiple-output is deployed in the downlink. In an example, the virtual antenna mapping gives the illusion that the base station has fewer antennas than it actually does. Through virtual antenna mapping, the unbalanced power on the two transmit paths is converted to balanced power at the physical antenna ports. This is achieved using phase and amplitude coefficients. Thus, even for the signal transmitted on the first antenna, the two power amplifiers can be optimally utilized. Of course, other variations, modifications, and alternatives are also possible.
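The way 2 TX and 4 RX combine into an 8-element virtual 2D array can be sketched by summing transmit and receive element positions, using the λ and λ/2 spacings described above (coordinates in units of wavelength; the exact origin is an assumption for illustration):

```python
# Sketch of virtual array formation: each virtual element sits at the sum
# of one TX position and one RX position, so 2 TX x 4 RX = 8 elements
# spanning a larger aperture than the physical RX row alone.
LAM = 1.0  # work in units of the wavelength λ
tx = [(0.0, 0.0), (LAM, LAM / 2)]                 # 2 TX: λ apart horizontally, λ/2 vertically
rx = [(i * LAM / 2, 0.0) for i in range(4)]       # 4 RX at λ/2 horizontal spacing
virtual = [(tx_x + rx_x, tx_y + rx_y)
           for (tx_x, tx_y) in tx
           for (rx_x, rx_y) in rx]                # 8 distinct virtual elements (2 rows of 4)
```

The two TX offsets shift the four-element RX row both horizontally and vertically, which is why the virtual array is two-dimensional even though the physical RX antennas lie on a single line.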
In an example, FMCW at higher power may be used to capture finer features such as respiration, heart rate, and other small-scale movements. In an example, lower power and UWB are suitable for coarser features, which are lower in frequency. Lower frequencies can also penetrate walls and other physical structures.
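To illustrate why FMCW is sensitive enough for features like respiration, the sketch below recovers a simulated chest displacement from the round-trip phase of a range bin at the 24GHz carrier. The synthetic breathing waveform, frame rate, and amplitude are assumptions for illustration only:

```python
# Fine-motion sensing sketch: millimeter-scale chest displacement modulates
# the phase of the range bin containing the user; Δd = Δφ·λ/(4π) for the
# round-trip path.
import numpy as np

C = 3e8
F_CARRIER = 24.0e9                       # ISM-band carrier from the text
lam = C / F_CARRIER                      # 12.5 mm wavelength

def phase_to_displacement(phase_rad: np.ndarray) -> np.ndarray:
    """Map round-trip phase of a range bin to radial displacement."""
    return phase_rad * lam / (4 * np.pi)

# Simulated 5 mm peak-to-peak chest motion at 0.25 Hz (15 breaths/min)
t = np.arange(0, 30, 0.05)               # 30 s observed at 20 frames/s
displacement = 2.5e-3 * np.sin(2 * np.pi * 0.25 * t)
phase = 4 * np.pi * displacement / lam   # phase the radar would observe
recovered = phase_to_displacement(phase)
```

At 12.5 mm wavelength, a 2.5 mm displacement sweeps more than two radians of phase, which is why small physiological motion is readily resolvable at this band.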
In an example, the present invention provides an FMCW sensor device. The device has at least three transceiver modules. Each transceiver module has an antenna array configured to sense backscatter of electromagnetic energy over a 360 degree range from a spatial location of zero degree position associated with a midpoint of the device, wherein each antenna array is configured to sense a 120 degree range. In an example, each antenna array has a support, a plurality of receive antennas, a receiver integrated circuit coupled to the receive antennas and configured to receive and convert incoming FMCW signals to baseband signals, and a plurality of transmit antennas. Each antenna array has a transmitter integrated circuit coupled to a transmit antenna for transmitting outgoing FMCW signals. The device has a virtual antenna array configured by a plurality of receiving antennas and a plurality of transmitting antennas, and a spatial area larger than a physical spatial area of the plurality of receiving antennas is formed by the virtual antenna array. In an example, the apparatus has a triangular configuration including a first antenna array, a second antenna array, and a third antenna array, the three antenna arrays being included in at least three antenna arrays to provide a 360 degree visual range measured from a horizontal plane and an 80 degree visual range measured from a vertical plane perpendicular to the horizontal plane. The device has a main control board coupled to each support and configured in a normal direction relative to each support. The device has a housing enclosing at least three transceiver modules.
In an example, the FMCW sensor device includes a switch configured between a plurality of FMCW transceivers, such that the switch is configured to select one of the three antenna arrays to sense directional scattering while the other two antenna arrays are turned off. In an example, the antenna array is configured to process electromagnetic radiation having a frequency range of 24GHz to 24.25 GHz.
In an example, an apparatus has a controller configured to control the switch and the three antenna arrays. In an example, the controller decides, through a predetermined process, which of the three antenna arrays is activated while the other two antenna arrays are turned off. In an example, the three antenna arrays are configured to sense electromagnetic energy in the 24GHz to 24.25GHz frequency band. In an example, the sensing device is spatially centered within the room to detect movement of a human user. In an example, each sensor array is disposed on a substrate member configured in a triangular configuration.
In an example, the device has a housing. The maximum length of the housing is six to twenty-four inches and the width is no more than six inches. In an example, the housing has sufficient structural strength to stand upright and protect the interior region within the housing.
In an example, an apparatus has a height that characterizes a plurality of levels numbered 1 through N within a housing from a bottom region to a top region of the housing, and a speaker device configured within the housing and above the bottom region. In an example, the apparatus has a computing module (including a processing device above the speaker device), an artificial intelligence module configured above the computing module, an ultra wideband ("UWB") module (including an antenna array) configured above the artificial intelligence module, an FMCW module configured above the UWB module, and an audio module configured above the FMCW module. The apparatus has an inertial measurement unit ("IMU") module configured above the FMCW module.
In an example, the speaker device, the computing module, the artificial intelligence module, the UWB module, the FMCW module, the audio module, and the IMU module are arranged in a stacked configuration and are respectively configured in a plurality of levels numbered 1 through N.
In an example, the speaker device includes an audio output within the housing, the speaker device being configured to output energy over a 360 degree range from a midpoint of the device.
In an example, the computing module includes a microprocessor-based unit coupled with a bus. In an example, the computing module includes a signal processing core, a microprocessor core for an operating system, and a synchronization processing core configured to time stamp and synchronize incoming information from each of the FMCW module, the IMU module, and the UWB module.
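The synchronization processing core's role can be sketched as a timestamp-ordered queue that merges frames from the FMCW, IMU, and UWB modules. The data structures and method names below are illustrative assumptions, not part of this disclosure:

```python
# Toy synchronization core: stamp frames on arrival (or accept an external
# stamp) and release them across all sensors in time order via a min-heap.
import heapq
import time
from dataclasses import dataclass, field

@dataclass(order=True)
class StampedFrame:
    timestamp_ns: int
    source: str = field(compare=False)    # "FMCW" | "IMU" | "UWB"
    payload: bytes = field(compare=False)

class SyncCore:
    def __init__(self):
        self._queue = []                  # min-heap keyed on timestamp

    def ingest(self, source, payload, timestamp_ns=None):
        if timestamp_ns is None:
            timestamp_ns = time.monotonic_ns()   # stamp on arrival
        heapq.heappush(self._queue, StampedFrame(timestamp_ns, source, payload))

    def drain_in_order(self):
        """Yield frames across all sensors in timestamp order."""
        while self._queue:
            yield heapq.heappop(self._queue)
```

A hardware synchronization core would use a shared clock rather than `time.monotonic_ns()`, but the merge-by-timestamp behavior is the same.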
In an example, the apparatus has a real-time processing unit configured to control an FMCW switch, UWB switch, or other switch requiring a real-time switching operation of less than 1/2 millisecond after receiving feedback from the plurality of sensors. In an example, the apparatus has a graphics processing unit configured to process information from the artificial intelligence module.
In an example, the artificial intelligence module includes an artificial intelligence inference accelerator configured to apply the trained module using a neural network-based process including a plurality of nodes numbered 1 through N.
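A neural-network-based process with nodes numbered 1 through N can be illustrated with a minimal feed-forward pass. The layer sizes and random placeholder weights below are assumptions for illustration; a real trained module, as applied by the inference accelerator, would load learned parameters:

```python
# Minimal feed-forward sketch: features pass through layers of nodes and
# produce class scores (e.g., activity or sleep-state labels).
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def infer(features, weights):
    """Apply a layered network; ReLU on hidden layers, linear output."""
    h = features
    for i, w in enumerate(weights):
        h = h @ w
        if i < len(weights) - 1:
            h = relu(h)
    return h

rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 8)),   # placeholder, untrained weights
           rng.standard_normal((8, 3))]
scores = infer(rng.standard_normal(16), weights)   # three output scores
```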
In an example, the FMCW module includes at least three antenna arrays configured to sense backscatter of electromagnetic energy over a 360 degree range from a spatial location of a zero degree location associated with a midpoint of the device, where each antenna array is configured to sense a 120 degree range.
In an example, each antenna array includes an FMCW transceiver and a switch configured between each FMCW transceiver and the controller, such that the switch is configured to select one of the three antenna arrays and its FMCW transceiver to sense backscatter while the other two antenna arrays are off, and further includes a serial interface.
In an example, the audio module includes a microphone array for detecting energy in the audible frequency range for communication and for detecting sound energy.
In an example, the IMU module includes a support substrate, an electrical interface disposed on the support substrate, an accelerometer coupled to the electrical interface, a gyroscope coupled to the electrical interface, a compass coupled to the electrical interface, a UV detector coupled to the interface and configured to detect ultraviolet radiation, a pressure sensor coupled to the interface, and an ambient gas detector coupled to the interface and configured to detect a chemical entity.
In an example, the present invention provides an apparatus for processing activities of a human user. The apparatus has an audio module and a computing module coupled to the audio module. The apparatus has a transceiver module coupled to a computing module. In an example, the transceiver module has antenna arrays configured to sense backscatter of electromagnetic energy having a frequency range of 24GHz to 24.25GHz throughout a 360 degree range from a spatial location of a zero degree location associated with a midpoint of the device, wherein each antenna array is configured to sense a 120 degree range.
In an example, an antenna array includes a support, a plurality of receive antennas, a receiver integrated circuit coupled to the receive antennas and configured to receive an incoming Frequency Modulated Continuous Wave (FMCW) signal and convert the incoming FMCW signal to a baseband signal, a plurality of transmit antennas, and a transmitter integrated circuit coupled to the transmit antennas to transmit an outgoing FMCW signal.
In an example, an apparatus has a virtual antenna array configured from a plurality of receive antennas and a plurality of transmit antennas to form a spatial area greater than a physical spatial area of the plurality of receive antennas using the virtual antenna array. In an example, the device has a main control board coupled to the support and configured in a normal direction relative to the support, and a housing enclosing the transceiver module, the computing module, and the audio module.
In an example, the present invention provides methods of using the apparatus, devices, and systems described herein. In an example, the method is for processing signals from human activity. The method includes generating an RF signal using a transceiver module coupled to the computing module, transmitting the RF signal using one of three antenna arrays configured to sense an entire 360 degree range from a spatial location of a zero degree position associated with a midpoint of the three antenna arrays, and sensing using one of the three antenna arrays, wherein each antenna array is configured to sense a 120 degree range, to capture backscatter of electromagnetic energy in the 24GHz to 24.25GHz frequency range associated with the human activity.
In an example, the present invention provides an alternative radio frequency (RF) sensing device. The apparatus has an ultra wideband (UWB) module comprising at least three UWB antenna arrays configured in a triangular arrangement to sense backscatter of electromagnetic energy from a spatial location, such that the triangular arrangement allows sensing, from a zero degree position associated with a midpoint of the triangular arrangement, over a 360 degree visible range measured from a horizontal plane and an 80 degree visible range measured from a vertical plane perpendicular to the horizontal plane, wherein each UWB antenna array is configured to sense a range of at least 120 degrees.
In an example, each UWB antenna array includes a support, a plurality of transmit antennas spatially configured on a first portion of the support, a transmit integrated circuit coupled to each of the plurality of transmit antennas and configured to transmit outgoing UWB signals, a plurality of receive antennas spatially configured on a second portion of the support, and a receive integrated circuit coupled to each of the plurality of receive antennas and configured to receive incoming UWB signals and configured to convert UWB signals to baseband signals.
In an example, an apparatus has a frequency modulated continuous wave module that includes at least three frequency modulated continuous wave (FMCW) transceiver modules. Each FMCW transceiver module has an array of FMCW antennas. In an example, the three FMCW transceiver modules are configured in a triangular arrangement to sense backscatter of electromagnetic energy from a spatial location, such that the triangular arrangement allows sensing, from a zero degree position relative to a midpoint of the triangular arrangement, over a 360 degree viewable range measured from a horizontal plane and an 80 degree viewable range measured from a vertical plane perpendicular to the horizontal plane, wherein each FMCW antenna array is configured to sense a range of at least 120 degrees.
In an example, each FMCW antenna array includes a support, a plurality of receive antennas, a receiver integrated circuit coupled with the receive antennas and configured to receive an incoming FMCW signal and convert the incoming FMCW signal to a baseband signal, a plurality of transmit antennas, a transmitter integrated circuit coupled to the transmit antennas to transmit an outgoing FMCW signal, and a virtual antenna array configured from the plurality of receive antennas and the plurality of transmit antennas to form a larger spatial area than a physical spatial area of the plurality of receive antennas using the virtual antenna array.
In an example, the apparatus has a main control board coupled to each support and configured in a normal direction relative to each support, and a housing enclosing at least three FMCW transceiver modules and at least three UWB antenna arrays.
In an example, the apparatus has an FMCW switch configured between a plurality of FMCW transceivers, such that the FMCW switch is configured to select one of three FMCW antenna arrays to sense a backscatter signal, while turning off the other two FMCW antenna arrays; wherein each FMCW antenna array is configured to process electromagnetic radiation in a frequency range of 24GHz to 24.25 GHz.
In an example, the apparatus has an FMCW controller configured to control the FMCW switch and the three FMCW antenna arrays, the FMCW controller cycling through a predetermined process to decide which of the three FMCW antenna arrays to activate while the other two FMCW antenna arrays are turned off. In an example, the three FMCW antenna arrays are configured to sense electromagnetic energy in the 24GHz to 24.25GHz frequency band. In an example, the RF sensing device is spatially centered within the room to detect movement of a human user using an outgoing FMCW signal or an outgoing UWB signal.
In an example, the apparatus has a UWB switch configured between the plurality of UWB transceivers such that the UWB switch is configured to select one of the three UWB antenna arrays to sense backscatter while the other two UWB antenna arrays are turned off. In an example, the apparatus has a UWB controller configured to control the UWB switch and the three UWB antenna arrays, the UWB controller cycling through a predetermined process to decide which of the three UWB antenna arrays to activate while the other two UWB antenna arrays are turned off. In an example, at least three UWB antenna arrays are configured to sense electromagnetic energy having frequencies in the range of 6 to 8GHz.
In an example, the maximum length of the housing is six to twenty-four inches and the width is no more than six inches, the housing having sufficient structural strength to stand upright and protect the interior area within the housing. A height from the bottom region to the top region of the housing characterizes a plurality of levels numbered 1 through N within the housing. The apparatus may also have a speaker device disposed within the housing and above the bottom region, a computing module including a processing device above the speaker device, an artificial intelligence module disposed above the computing module, an audio module, and an inertial measurement unit ("IMU") module.
In an example, the present invention has an alternative Radio Frequency (RF) sensing device. The device has an Ultra Wideband (UWB) antenna array configured in a spatial arrangement to sense backscatter of electromagnetic energy from a spatial location, such that the spatial arrangement allows sensing from a first location relative to a second location. In an example, a UWB antenna array includes a support, a plurality of transmit antennas spatially configured on a first portion of the support, a transmit integrated circuit coupled to each of the plurality of transmit antennas and configured to transmit outgoing UWB signals, a plurality of receive antennas spatially configured on a second portion of the support, and a receive integrated circuit coupled to each of the plurality of receive antennas and configured to receive incoming UWB signals and configured to convert UWB signals to baseband signals.
In an example, an apparatus has a Frequency Modulated Continuous Wave (FMCW) transceiver module. In an example, the FMCW transceiver module has an FMCW antenna array. In an example, the FMCW transceiver module is configured to sense backscatter of electromagnetic energy from a first location relative to a second location.
In an example, an FMCW antenna array includes a support; a plurality of receive antennas; a receiver integrated circuit coupled to the receive antennas and configured to receive an incoming FMCW signal and convert the incoming FMCW signal to a baseband signal; a plurality of transmit antennas; a transmitter integrated circuit coupled to the transmit antennas to transmit outgoing FMCW signals; and a virtual antenna array configured from the plurality of receive antennas and the plurality of transmit antennas such that the virtual antenna array spans a larger spatial area than the physical spatial area of the plurality of receive antennas.
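A minimal sketch of the virtual-array principle assumed above: each (transmit, receive) antenna pair behaves like a single virtual element located at the sum of the two element positions, so a few physical antennas synthesize a wider aperture. The colinear layout and spacings below are arbitrary illustrative values, not the device's actual geometry:

```python
# Sketch of a MIMO virtual array: each (tx, rx) pair acts like one
# virtual element at position tx + rx. Spacings are arbitrary units.

def virtual_array(tx_positions, rx_positions):
    """Return the sorted set of virtual element positions."""
    return sorted({tx + rx for tx in tx_positions for rx in rx_positions})

tx = [0.0, 2.0]            # two transmit antennas
rx = [0.0, 0.5, 1.0, 1.5]  # four receive antennas

virt = virtual_array(tx, rx)  # 2 x 4 -> 8 virtual elements
```

Note that the virtual aperture (3.5 units here) exceeds the physical span of the receive antennas (1.5 units), which is the point of the virtual-array construction.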
In an example, the apparatus has a master control board coupled to each support and configured in a normal direction relative to each support; and a housing enclosing the FMCW transceiver module and the UWB antenna array.
In an example, the apparatus has an FMCW switch coupled to the FMCW transceiver such that the FMCW switch is configured to select the FMCW antenna array to sense the backscatter signal; the FMCW antenna array is configured to process electromagnetic radiation having a frequency in the range of 24 GHz to 24.25 GHz. In an example, the apparatus has an FMCW controller configured to control the FMCW switch and the FMCW antenna array, the FMCW controller cycling through a predetermined process to determine when to activate the FMCW antenna array. In an example, the FMCW antenna array is configured to sense electromagnetic energy in the 24 GHz to 24.25 GHz frequency band.
In an example, the RF sensing device is spatially centered in the geographic location of the room to detect movement of a human user using an outgoing FMCW signal or an outgoing UWB signal. In an example, the apparatus has a UWB switch coupled to the UWB transceiver such that the UWB switch is configured to select the UWB antenna array to sense the backscatter signal. In an example, the apparatus has a UWB controller configured to control the UWB switch and the UWB antenna array, the UWB controller cycling through a predetermined process to determine when to activate the UWB antenna array. In an example, the UWB antenna array is configured to sense electromagnetic energy having frequencies in the range of 6 to 8 GHz.
In an example, the housing has a maximum length of six to twenty-four inches and a width of no more than six inches, the housing having sufficient structural strength to stand upright and protect an interior region within the housing; a height extending from a bottom region to a top region of the housing; and a plurality of levels numbered 1 through N within the housing. In an example, the apparatus may have a speaker device configured within the housing and above the bottom region, a computing module including a processing device above the speaker device, an artificial intelligence module configured above the computing module, an audio module, and an inertial measurement unit ("IMU") module.
In an example, the invention also provides an apparatus for monitoring a human user. The apparatus has a movable housing. In an example, the housing has a maximum length of six to twenty-four inches and a width of no more than six inches. In an example, the housing has sufficient structural strength to stand upright and protect an interior region within the housing. The housing has a height extending from a bottom region to a top region of the housing, and a plurality of levels numbered 1 through N within the housing, each level having, in an example, a module selected from at least one of:
An Ultra Wideband (UWB) antenna array configured in a spatial arrangement to sense backscatter of electromagnetic energy from a spatial location such that the spatial arrangement allows sensing from a first location relative to a second location, the UWB antenna array comprising:
A support;
a plurality of transmit antennas spatially arranged on the first portion of the support;
A transmit integrated circuit coupled to each of the plurality of transmit antennas, configured to transmit an outgoing UWB signal;
a plurality of receiving antennas spatially arranged on the second portion of the support;
A receiving integrated circuit coupled to each of the plurality of receiving antennas, configured to receive an incoming UWB signal and configured to convert the UWB signal to a baseband signal; and
A Frequency Modulated Continuous Wave (FMCW) transceiver module having an FMCW antenna array configured to sense backscatter of electromagnetic energy from a first location relative to a second location, the FMCW antenna array comprising:
A support;
A plurality of receiving antennas;
A receiver integrated circuit coupled to the receive antenna, configured to receive the incoming FMCW signal and convert the incoming FMCW signal to a baseband signal;
A plurality of transmitting antennas;
a transmitter integrated circuit coupled to the transmit antenna to transmit the outgoing FMCW signal;
A virtual antenna array configured from the plurality of receiving antennas and the plurality of transmitting antennas such that the virtual antenna array spans a spatial area larger than the physical spatial area of the plurality of receiving antennas;
a main control board coupled to each support, the main control board being configured in a normal direction with respect to each support;
A speaker device disposed within the housing and above the bottom region;
a computing module comprising a processing device located above the speaker device;
An artificial intelligence module configured above the computing module;
an audio module; and
An inertial measurement unit ("IMU") module.
Wherein the FMCW antenna array is configured to sense electromagnetic energy in a 24GHz to 24.25GHz band; the UWB antenna array is configured to sense electromagnetic energy having frequencies in the range of 6 to 8 GHz.
Of course, other variations, modifications, and alternatives are also possible.
Fig. 21 is a simplified top view of an audio module according to an example of the invention. In an example, the device has an audio module, represented by a circular base plate member. The audio module has a microphone array comprising seven microphones, including six peripheral microphones and one central microphone arranged in a circular array, although other microphone configurations, numbers, and spatial arrangements are possible. In an example, each microphone is electrically connected to a dual four (4) channel analog-to-digital converter (ADC) having a 103 dB signal-to-noise ratio, or other suitable design.
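The circular arrangement above can be sketched as follows; the unit radius is an arbitrary assumption, and only the six-around-one layout is taken from the description:

```python
import math

# Illustrative geometry for the seven-microphone array: six peripheral
# microphones evenly spaced on a circle plus one central microphone.

def mic_positions(radius=1.0, n_peripheral=6):
    """Return (x, y) positions: the central mic first, then the ring."""
    mics = [(0.0, 0.0)]  # central microphone
    for k in range(n_peripheral):
        theta = 2 * math.pi * k / n_peripheral  # 60-degree steps
        mics.append((radius * math.cos(theta), radius * math.sin(theta)))
    return mics

mics = mic_positions()
```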
In an example, the analog-to-digital converter is connected to a processing system, including a processing device, a signal processor, and other elements, using a bus. In an example, the analog-to-digital converter uses an I2S interface. The I2S interface was developed by Philips Semiconductors (now NXP Semiconductors). In an example, the interface uses push-pull data signals, is one data line (SD) plus two clock lines (SCK, WS) wide, and carries a serial protocol. For example, I2S (Inter-IC Sound), pronounced "eye-squared-ess", is an electrical serial bus interface standard for connecting digital audio devices together, as described on the Wikipedia website. For example, I2S transmits pulse code modulated ("PCM") audio data between integrated circuits of electronic devices. In an example, the I2S bus separates the clock signal from the serial data signal, making the receiver simpler than in an asynchronous communication system, which must recover the clock from the data stream.
In an example, a processing system has a Digital Signal Processing (DSP) core that receives digital audio and performs beamforming operations, including deploying adaptive spectral noise reduction processing and multi-source selection (MSS) processing, to improve audio quality. In an example, the processing devices, including the micro-processing unit and the audio signal processing unit, are provided in separate computing modules or other hardware devices.
In an example, the multi-source selection process inputs audio information from the multiple microphones in the array (each microphone sensing an audio signal from one spatial region) directly into the DSP core, without transmitting such data to the processing device, in order to detect and select more quickly the at least one microphone device in the array having the strongest audio signal. Once a microphone is selected, the processing system outputs or further processes the audio information from the selected microphone. In an example, the multi-source selection process saves at least a few milliseconds of the standard processing time that would otherwise be spent passing the audio information through a processor in the processing device. As shown, the audio signal is captured from the surrounding environment and converted to a digital signal by an A/D converter, which transmits it to a digital processing device for audio processing without the signal traversing the ARM microprocessor core.
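The core of the multi-source selection step, picking the microphone with the strongest signal, can be sketched as below. Comparing RMS frame levels is an illustrative assumption; the patent does not specify the signal-strength metric, and the sample values are synthetic:

```python
import math

# Minimal sketch of multi-source selection (MSS): compare the RMS
# level of each microphone's frame and pick the strongest one.

def select_strongest(frames):
    """frames: list of per-microphone sample lists; return the index
    of the microphone with the highest RMS level."""
    def rms(samples):
        return math.sqrt(sum(s * s for s in samples) / len(samples))
    return max(range(len(frames)), key=lambda i: rms(frames[i]))

frames = [
    [0.01, -0.02, 0.01],  # mic 0: quiet
    [0.30, -0.28, 0.31],  # mic 1: loudest
    [0.05, -0.04, 0.06],  # mic 2
]
best = select_strongest(frames)
```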
In an example, the ADC of the audio module has a dedicated I2S channel that can also interface with and drive an audio amplifier coupled to a speaker. In an example, multiple speakers (e.g., dual speakers) are integrated into the device. In an example, the audio amplifier may be a product listed under part number TPA3126D2DAD produced by Texas Instruments Incorporated, or the like. In an example, the driver may be a 50 watt stereo low-idle-current Class-D amplifier in a thermally enhanced package. In an example, the driver has a hybrid modulation scheme that dynamically reduces idle current at low power levels to extend the battery life of a portable audio system (e.g., a Bluetooth speaker). In an example, the Class-D amplifier integrates comprehensive protection functions, including short circuit, thermal shutdown, overvoltage, undervoltage, and DC speaker protection. Faults are fed back to the processor to prevent the device from being damaged in an overload condition. Other functions may also be included.
In examples, the audio module may also include other sensing devices. For example, the audio module includes an inertial measurement device, a pressure sensor, a gas sensor, and a plurality of LED devices, each of which is coupled to an LED driver. Each device is coupled to auxiliary control hardware that communicates with the microprocessor unit core using a bus (e.g., an I2C bus, but other buses are possible).
Fig. 22 and 23 are a simplified circuit diagram and an arrangement of a microphone array, respectively, according to an example of the invention. As shown, microphones 1-3 of the array are coupled to an audio analog-to-digital converter (ADC), which acts as the master ADC device, and to a reference clock. As shown, the ADC may be the PCM1864 on the Circular Microphone Board (CMB) from Texas Instruments, a low-cost, easy-to-use reference design suitable for applications requiring clear speech, such as voice triggering and speech recognition. The design captures the speech signal using the microphone array and converts it into a digital stream for use by the DSP system to extract clean audio from a noisy environment. Microphones 4-6 of the array are coupled to a slave ADC device, which is coupled to the master ADC device. In an example, a digital audio output is included and the digital audio signal is fed to a bus, such as an I2S interface. The I2S interface is coupled to a computing system that includes audio output to an audio driver and speakers.
FIG. 24 is a simplified top view of an inertial sensing module according to an example of the invention. In an example, the apparatus has an inertial motion and sensing module. In an example, the module has a multi-axis motion sensor. In an example, the sensor may be a component listed under TDK ICM-20948, which provides a 9-axis motion sensor including a three (3) axis accelerometer, a three (3) axis magnetometer, and a three (3) axis gyroscope, together with a digital motion processor. In an example, the module has a slave I2C communication interface to the processing system. The module has a master I2C interface to connect to an auxiliary pressure sensor (e.g., Bosch BMP180) to perform a function similar to a ten (10) axis motion sensor.
In an example, the module has an accelerometer, a gyroscope, and a magnetometer to form a 9-axis inertial motion unit sensor. In an example, these sensors are important for the accurate positioning of the detection device. In an example, the module also provides additional information about the displacement of the device from one spatial position to another.
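As a rough illustration of deriving displacement from inertial data, the sketch below double-integrates single-axis acceleration samples. This is a simplified assumption for illustration only; a real 9-axis pipeline fuses gyroscope and magnetometer data and corrects for drift:

```python
# Naive double integration of acceleration (one axis, Euler steps).
# Real IMU pipelines fuse all nine axes and suppress drift; this only
# illustrates the basic kinematics behind displacement estimation.

def integrate_displacement(accels, dt):
    """Integrate acceleration samples (m/s^2) taken dt seconds apart;
    return the resulting position offset in meters."""
    velocity = 0.0
    position = 0.0
    for a in accels:
        velocity += a * dt
        position += velocity * dt
    return position

# constant 1 m/s^2 for 1 s, sampled at 10 Hz
pos = integrate_displacement([1.0] * 10, 0.1)
```

With this forward-Euler scheme the result is 0.55 m, slightly above the ideal 0.5 m, which shows why real pipelines use finer sampling and drift correction.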
In an example, the module has a pressure sensor to provide additional information of pressure changes in the surrounding environment or surrounding area. In an example, a pressure sensor may be configured with a processor to detect opening and/or closing of a door or other building structure.
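One simple way to realize the door-event idea above is to flag abrupt jumps between consecutive pressure readings. The threshold and sample values below are illustrative assumptions, not calibrated figures:

```python
# Crude door open/close proxy: flag indices where consecutive pressure
# readings (hPa) jump by more than a threshold. Threshold is assumed.

def detect_pressure_events(samples, threshold=0.5):
    """Return indices where the pressure changed abruptly."""
    return [i for i in range(1, len(samples))
            if abs(samples[i] - samples[i - 1]) > threshold]

readings = [1013.2, 1013.2, 1014.1, 1013.3, 1013.3]
events = detect_pressure_events(readings)
```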
In an example, the module has a gas sensor. In an example, the gas sensor is configured with a processor to detect the content of carbon monoxide and other toxic gases that may be present in the surrounding environment in which the device is located. In an example, the gas sensor is a sensor sold under part number ICM 10020 by TDK or another manufacturer.
In an example, the module has an array of LEDs. In an example, the LED array may be a ring of twelve (12) RGBW LEDs for illumination. The LED driver used is, for example, a driver sold under part number LP5569. As shown, the LED array is spatially arranged around the peripheral region of the substrate member, which in this example is circular.
As shown, each sensor communicates using an I2C bus that communicates with various input/output devices on the processing system, as described in more detail below. Also shown are general purpose input and output interfaces coupled to the processing system.
Fig. 25 is a simplified illustration of a user interface according to an example of the present invention. In an example, the module also has a user interface. Examples of easy-to-use interfaces include buttons, such as General Purpose Input and Output (GPIO) buttons, disposed in an area external to the housing. In an example, four GPIO buttons are used for multi-purpose applications, configured on the housing, and coupled with a processing device. As shown, the buttons include: (1) place a call; (2) answer an incoming call or mute the A/C audio CODEC; (3) increase the A/C audio CODEC volume; and (4) decrease the A/C audio CODEC volume. Of course, other configurations of GPIO buttons are possible.
Fig. 26 is a simplified illustration of a processing system according to an example of the present invention. As shown, the processing system has a system-on-chip processing platform, i.e., a single integrated circuit chip, including a dual ARM core microprocessor unit, dual core digital signal processor and dual core image processing unit, as well as associated firmware, interconnect, power management and other functions. Each processing resource is coupled to a bus or buses.
In an example, the system has multiple interfaces. The USB 3.0 interface communicates with the FMCW module. The I2S interface communicates with the audio module. The USB 2.0 interface communicates with the UWB module. The other USB 2.0 interface communicates with user interfaces such as a keyboard and mouse. Other types of serial interfaces may also be included. The system also has RJ-45 and Ethernet interfaces, wi-Fi and Bluetooth interfaces, cellular interfaces such as LTE. The system has a global positioning sensor interface. The system has a power and clock module for power and clock functions. The system has an inertial measurement unit connector and a module. The system has a plurality of PCIE connector interfaces, one of which is connected to a Wi-Fi sensor device. Other functions include dynamic random access memory interfaces, embedded multimedia card connections and modules, solid state drive connectors, and serial advanced technology attachment connectors, among others.
An example of the processing system may be a single integrated circuit chip manufactured by Texas Instruments and sold as the AM572x Sitara Arm applications processor. According to the Sitara Arm data sheet from Texas Instruments, the AM572x device delivers high processing performance through the maximum flexibility of a fully integrated mixed processor solution. These devices also combine programmable video processing with a highly integrated peripheral set. Each AM572x device has cryptographic acceleration. A dual-core Arm Cortex-A15 RISC CPU with Neon™ extensions and two TI C66x VLIW floating-point DSP cores provide programmability. The Arm cores allow developers to keep control functions separate from other algorithms programmed on the DSPs and co-processors, reducing the complexity of the system software. In addition, TI provides a complete set of development tools for the Arm cores and the C66x DSP, including a C compiler, a DSP assembly optimizer for simplified programming and scheduling, and a debugging interface for visibility into source code execution.
In an example, the processing system is coupled with an energy source that includes a battery and plug connection. The system also has a graphics processing module or artificial intelligence module for performing processing functions based on data received from the interface. An example of a processing unit is a processing unit sold under the Movidius™ brand by Intel Corporation.
In an example, Movidius provides an ultra-low-power vision processing solution, including the Myriad 2 family of Vision Processing Units (VPUs), as well as a comprehensive Myriad Development Kit (MDK), reference hardware EVMs, and optional machine vision application packages. For example, Myriad MA2x5x series system-on-chip (SoC) devices provide significant computational performance and image processing capability with low power consumption. The Myriad family includes the following product configurations: MA2150: 1 Gbit DDR; MA2155: 1 Gbit DDR and secure boot; MA2450: 4 Gbit DDR; MA2455: 4 Gbit DDR and secure boot.
For example, Myriad VPUs provide teraFLOPS (trillions of floating-point operations per second) of performance within a nominal 1 watt power envelope. The Myriad architecture has sufficient performance to support multiple cameras, each with flexible image signal processing pipelines, and to support software-programmable vision processing with fixed-point and floating-point data types. A powerful overall data flow design may ensure that processing bottlenecks are alleviated.
For example, the Myriad MA2x5x combines image signal processing with vision processing using an innovative approach. A set of imaging/vision hardware accelerators supports streaming ISP pipelines without round trips to and from memory; at the same time, the accelerators can be repurposed, in combination with a set of specialized vision processor cores, to accelerate a developer's vision processing algorithms. All processing elements are coupled to a multiport memory so that demanding applications can be implemented efficiently. For more details, refer to the Myriad data sheet available from Intel Corporation. Of course, other processing units may also be suitable for processing applications.
Fig. 27 is a simplified block diagram of a cellular module coupled to the processing system. In an example, the cellular module may be of any suitable design, such as a u-blox LTE module sold under the part number LARA-R204/SARA-U260, or the like. The module may be configured to serve a provider such as AT&T Wireless, Sprint, or Verizon. In an example, the module communicates via a universal asynchronous receiver-transmitter (UART) configured for asynchronous serial communication, in which both the data format and transmission speed are configurable. The module is also coupled with a removable SIM card carrying the phone number for configuring the system. Of course, other variations, modifications, and alternatives are also possible.
In an example, the present invention provides a system for capturing information from a spatial region to monitor human activity. In an example, the system has a housing with a maximum length of 6 to 24 inches and a width of no more than 6 inches, although other dimensions are possible. In an example, the housing has sufficient structural strength to stand upright and protect an interior region within the housing, although other variations may be included. In an example, the housing has a height extending from a bottom region to a top region of the housing, and a plurality of levels numbered 1 through N within the housing, each level configured with one or more modules.
In an example, the system has an audio module that includes a substrate member and a plurality of peripheral microphone devices spatially arranged along a peripheral region of the substrate member. In an example, each peripheral microphone device has an analog output. In an example, the module has one central microphone device spatially disposed within a central region of the substrate member. In an example, the central microphone device has an analog output. In an example, the module has an analog-to-digital converter coupled to each analog output. The module has a spatial configuration in which the peripheral region forms a circular area to provide a 360-degree field of view for the plurality of peripheral microphone devices. A bus device is coupled to each analog-to-digital converter. In an example, the bus device communicates with each of the plurality of peripheral microphone devices and the central microphone device. The module is coupled to a signal processor, which is coupled to the bus device, and to a processor device, which is coupled to the signal processor. The module is configured to process audio information, including audio events, from the plurality of microphone devices using the signal processor, without transmitting the audio information to the processor device, to enable a selection process that is at least one millisecond faster in selecting the microphone device having the strongest audio signal, and then to transmit the audio information from the selected microphone device. The system also has a cellular network module including an interface coupled to the processor device. The system has a user interface disposed outside the housing and coupled to the processor. The user interface allows the user to initiate and place outgoing calls through the cellular network, or to receive incoming calls from the network, as desired.
In an example, the system also has other elements: a speaker device coupled to the processor device, and an audio driver device coupled with the speaker device to drive it. In an example, an LED array is coupled to the processor device. In an example, a plurality of MEMS devices are coupled to the processor device. In an example, a gas sensor device is coupled with the processor device. In an example, a pressure sensor device is coupled with the processor device. In an example, the user interface may be a general-purpose input and output device.
In an example, the system has an inertial measurement module including an LED array, an accelerometer device, a gas sensor device, and a pressure sensor device configured to detect pressure within the environment of the housing. In an example, the inertial measurement module includes a gas sensor for detecting the presence of carbon dioxide and is connected to a processor device configured to issue an alarm based on the level of carbon dioxide. In an example, the system has a plurality of LED devices spatially arranged around the periphery of the substrate member to provide illumination using electromagnetic radiation. In an example, the inertial measurement module includes an I2C bus coupled with the plurality of LED devices, the gyroscope device, the accelerometer device, the compass device, the pressure device, and the gas sensor, the I2C bus being coupled with the processing device. In an example, the processing unit includes an ARM processing unit coupled to the digital signal processor and the image processing unit.
Optionally, the system has a network module that includes an interface that is coupled with the processing device. In an example, a system has a speaker device coupled to a processor device configured with a network module to transmit audio information to output acoustic energy from the speaker device, and an audio drive device coupled to the speaker device. The system has a user interface disposed outside the housing and coupled to the processor.
In an example, the present invention provides a method of capturing information from a spatial region to monitor human activity. In an example, the method uses an apparatus that includes an enclosure located within a spatial region of a populated area occupied by one or more human users. In an example, the enclosure has sufficient structural strength to erect and protect an interior region within the enclosure having a plurality of levels numbered 1 through N within the enclosure, each level configured with one or more modules, which may include any of the modules described herein and others.
In an example, the housing has an audio module that includes a base member; a plurality of peripheral microphone devices spatially arranged along a peripheral region of the base member, each peripheral microphone device having an analog output; a spatial configuration along the peripheral region providing a 360-degree field of view from the plurality of peripheral microphone devices; a bus device coupled to each analog-to-digital converter, the bus device in communication with each of the plurality of peripheral microphone devices; a signal processor coupled to the bus device; and a microprocessor device coupled to the signal processor.
In an example, the method includes sensing, from each of a plurality of microphone devices, a plurality of audio signals including an audio event. Each of the plurality of microphone devices may receive an audio signal of a different signal strength based on the spatial location of each microphone device. The method includes converting each audio signal from each microphone device into a plurality of digital signals in a first format using an analog-to-digital converter. In an example, the method includes processing the digital signals in the first format into a second format, which may be a compressed format or another form, for transmission over the interface. The method includes transmitting the digital signals in the second format from each of the plurality of microphone devices to a receiving interface device coupled to the signal processing device, using the dedicated interface device, without transmitting the digital signals in the second format to the micro-processing device. The method includes processing information associated with the digital signals using the signal processing device to select the one of the microphone devices that has the strongest audio signal compared to any other microphone device, and transmitting information associated with the digital signal from the selected microphone device to the outgoing interface device. In a preferred example, the method includes processing the digital signal from the selected microphone device using an artificial intelligence program to identify the event.
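The "second format" in the steps above is described only as possibly compressed; the patent names no codec. As a purely illustrative stand-in, the sketch below uses mu-law companding, a classic logarithmic compression of PCM audio samples, to show what such a format conversion could look like:

```python
import math

MU = 255  # mu-law parameter; an illustrative codec choice, not the
          # patent's actual "second format"

def mu_law_encode(x):
    """Compress a normalized sample in [-1, 1] (mu-law companding)."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_decode(y):
    """Invert the companding to recover the original sample."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

x = 0.5
round_trip = mu_law_decode(mu_law_encode(x))
```

Companding boosts small amplitudes before quantization, which is why it preserves quiet speech better than uniform PCM at the same bit depth.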
In an example, the technique transmits learned information and activity information to a third party. The technology can self-learn, utilizing artificial intelligence techniques, to model high-level behaviors related to a person's well-being. In an example, the present technology will generate a summary of these activities and send it to the person's relatives, caregivers, or even an emergency team, depending on the urgency of the situation. For example, on an ordinary day, the technique may simply send a brief summary, such as "Your mom did her daily activities today" or "She was much less active today." As another example, if caregivers visit several times a week, the technique can send them a notification such as "She seemed more uncomfortable yesterday," so that the caregiver can check in to ensure everything is normal. In addition, the technique can also detect falls, shortness of breath, or other sudden events requiring rapid attention. In these cases, the technique may notify a medical response team to provide assistance immediately. Of course, other variations, modifications, and alternatives are also possible.
In an example, the present technology may categorize the activities of the target person according to the listed ADLs or the like. Examples of ADLs include bathing, brushing teeth, dressing, toileting, drinking, sleeping, and the like. Other ADLs include preparing meals, preparing beverages, resting, doing chores, using the phone, taking medications, etc. Ambulatory activities include walking, exercising (e.g., running, cycling), transitional activities (e.g., sitting to standing, sitting to lying, standing to sitting, lying to sitting, and getting in and out of a bed or chair), and stationary activities (e.g., sitting on a sofa, standing still, lying on a bed or sofa). Of course, other variations, modifications, and alternatives are also possible.
In an alternative example, the present technology may determine the activity of the target person among any of the activities listed. The activities listed include going out, preparing breakfast, eating breakfast, preparing lunch, eating lunch, preparing dinner, eating dinner, washing dishes, eating a midnight snack, sleeping, watching television, studying, bathing, going to the toilet, napping, surfing the internet, reading books, shaving, brushing teeth, making a call, listening to music, cleaning, chatting, hosting guests, and the like.
In an example, the present technology may also identify rare events. In an example, the present technology can identify a situation in which an elderly person falls down in the home with no one else around. In an example, the present technique is robust, without any false negatives. In an example, the technique uses the sequence of events before and after a potential fall. In an example, the technique robustly determines whether a fall has occurred in conjunction with the context information. Of course, there can be other variations, modifications, and alternatives.
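The context-based confirmation idea above can be sketched as a simple rule over the event sequence: a sudden motion followed by sustained stillness supports a fall, while a sudden motion followed by normal activity does not. The event labels and the stillness window are assumptions for illustration, not the system's actual classifier:

```python
# Illustrative context-aware fall confirmation: a candidate fall is
# confirmed only when a 'sudden_motion' event is followed by several
# consecutive 'still' events. Labels and window size are assumed.

def confirm_fall(events, stillness_window=3):
    """events: ordered list of activity labels; return True when the
    surrounding context supports a fall."""
    for i, e in enumerate(events):
        if e == "sudden_motion":
            after = events[i + 1:i + 1 + stillness_window]
            if len(after) == stillness_window and all(a == "still" for a in after):
                return True
    return False

walking_then_fall = ["walking", "sudden_motion", "still", "still", "still"]
sitting_down = ["walking", "sudden_motion", "sitting", "walking"]
```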
In an example, the technique also detects and measures vital signs of each target person by a continuous, non-invasive method. In an example, vital signs of interest include heart rate and respiratory rate, which may provide valuable information about a person's health. Furthermore, if two or more target persons live in a household, heart rate and respiratory rate may also be used to identify a particular person. Of course, there can be other variations, modifications, and alternatives.
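The patent does not specify how heart or respiratory rate is extracted from the sensed signal; one common, minimal approach is to locate the dominant spectral peak within a physiological band. The sketch below does this with a naive band-limited DFT on a synthetic breathing waveform; the sampling rate, band limits, and signal are illustrative assumptions:

```python
import math

def dominant_rate_bpm(signal, fs, lo_bpm, hi_bpm):
    """Estimate the dominant rate (events per minute) in `signal` by
    evaluating a naive DFT only at candidate physiological rates."""
    best_bpm, best_power = None, -1.0
    for bpm in range(lo_bpm, hi_bpm + 1):
        f = bpm / 60.0  # candidate frequency in Hz
        re = sum(s * math.cos(2 * math.pi * f * i / fs)
                 for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * f * i / fs)
                 for i, s in enumerate(signal))
        power = re * re + im * im
        if power > best_power:
            best_power, best_bpm = power, bpm
    return best_bpm

fs = 10  # Hz, illustrative sampling rate
t = [i / fs for i in range(600)]  # 60 s of data
breathing = [math.sin(2 * math.pi * 0.25 * x) for x in t]  # 0.25 Hz = 15/min
rate = dominant_rate_bpm(breathing, fs, 6, 30)
```

A 6-30 per-minute search band suits respiration; a heart-rate search would use a higher band (e.g., 40-180 per minute) on the same estimator.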
By knowing the condition of the target person (e.g., a senior), the technology can also provide valuable feedback directly to the senior using a voice interface. For example, the technique may sense the person's emotion based on the person's activity sequence and vital signs, and then ask, "Hi, would you like me to call your son?" Based on the person's feedback, the technique can help connect to a third party (or relative) if the answer is affirmative. Of course, other alternatives, variations, and modifications are possible.
Technology for improving sleep
In an example, the present technology provides a method for processing signals from a human user related to sleep states. Preferably, the method comprises using information from the signal for digital cognitive behavioral therapy to improve the sleep state of the human user. In an example, the method generally includes sensing human activity, processing information from such sensing, outputting tasks to a user, monitoring a user's reaction, and adjusting any of the above to improve a user's sleep state.
In an example, the method detects, using a plurality of sensing devices configured in proximity to a human user, a plurality of signals associated with an event, the signals being associated with sleep stages of the human user at a predetermined time. In an example, the method includes receiving the plurality of signals into an input device. In an example, the input device is connected to an engine device, which may contain artificial intelligence technology. The method includes processing by parsing information associated with the plurality of signals using the engine; determining, using the engine, a classification associated with the event; and storing the classification associated with the event at the predetermined time. The method then includes continuing to perform the steps of detecting, receiving, processing, and storing for a plurality of other predetermined times from a first time to a second time to create a sleep data history of the human user. In an example, the first time corresponds to a start of a first process and the second time corresponds to an end of a second process. In an example, the method includes processing, using an interaction engine, the historical data to determine a task to output to the human user, the task being one of a plurality of tasks stored in a computing device memory, and generating, using a logic therapy block, an output based on the task.
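The detect / receive / process / store loop described above can be sketched schematically. The following minimal Python illustration is an assumption-laden sketch, not the patented implementation: the `classify` callable stands in for the engine's parsing-and-classification step, and the data layout is invented for illustration.

```python
def build_sleep_history(signal_batches, classify):
    """Accumulate a sleep data history between a first and second time.

    signal_batches: list of (timestamp, signals) pairs detected by the sensors.
    classify: engine function mapping a signal dict to an event classification.
    """
    history = []
    for timestamp, signals in signal_batches:              # detect / receive
        classification = classify(signals)                 # process: parse and classify
        history.append({"time": timestamp, "class": classification})  # store
    return history
```

The interaction engine would then consume the returned history to choose a task for the user.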
In an example, the task is associated with content configured to be transmitted to the human user through one of a plurality of transmission events selected from a text message, a voice message, an optical notification, or a mechanical vibration.
In an example, the method further includes inputting data from a human user into a memory of a computing device, the data associated with a total sleep time, a time to fall asleep, a wake time, and nighttime awakenings between the first time and the second time; and transmitting the data to the engine to update the history of sleep data.
In examples, the plurality of sensing devices includes an RF sensor, a light sensor, one or more microphones, a mechanical motion sensor, a temperature sensor, a humidity sensor, an image sensor, a pressure sensor, a depth sensor, or an optical sensor.
In an example, the engine includes various functions. As an example, the engine includes a pre-trained model that is composed of a plurality of statistical features. Each feature of the plurality of statistical features is associated with a different sleep stage. In an example, each of the plurality of statistical features is associated with a selected set of sensors and is associated with a classification. In an example, the engine further includes a detector module configured to receive the incoming information stream of the plurality of signals from the selected sensor group and to perform statistical inference based on the plurality of currently observed signals and the pre-trained model provided for the sleep history.
In an example, the interaction engine has various features. As an example, the interaction engine includes a plurality of pre-trained therapies configured according to the user's age, gender, BMI, and one or more sleep quality indicators. Each pre-trained therapy is configured to provide tasks to a human user and to adjust frequency and intensity based on feedback and one or more monitored objective sleep metrics. In an example, the interaction engine is configured to perform statistical inference to adjust the task by associating the sleep metrics with the feedback.
In an example, the method includes continuing to perform steps for third through fourth times and continuing to perform steps for nth through mth times to form a plurality of historical data corresponding to the four week period. In an example, the output includes a digital cognitive behavioral therapy output. In an example, the output may be selected from audio information transmitted to a human user, mechanical vibrations to a human user, light emitted to a human user, screen indications to a user, or environmental settings that change light or temperature, among others.
In an example, the output includes an audio dialog interactively between the engine and the human user. In an example, the output is provided interactively with a human user or at a specified time. In an example, the output is automatically generated using a logical therapy block.
In an example, the plurality of signals includes a motion signal, a vital sign signal, a heart rate, a respiratory rate, a spatial location of a human user, or a spatial configuration of a human user.
In other examples, the method includes initiating a relaxation routine for the human user based on processing the historical data using the interaction engine. In an example, the historical data includes at least information about heart rate and respiratory rate. In an example, the output includes information about a sleep window of the human user. In an example, the output includes information related to stimulation control of a human user. In an example, the output relates to an emotional state of the human user. These and other features include variations, modifications, and alternatives.
In examples, the present technology provides one or more benefits and/or advantages. In an example, the present technology implements behavior modification through an intervention-sensing feedback framework that uses a combination of sensing techniques, artificial intelligence techniques, and active feedback mechanisms. In an example, the present technology may be implemented using conventional hardware, software, and systems. These and other advantages will be described throughout this specification, and particularly below.
Definition of sleep terminology:
In the examples, we provide the following terms for understanding the present technology, although variations, modifications and alternatives are possible.
Cognitive Behavioral Therapy (CBT) is a therapy aimed at improving mental health using "action-directed" external stimuli or interventions. CBT is very effective in treating depression, anxiety, insomnia, obesity and many other related mental disorders.
Cognitive Behavioral Therapy for Insomnia (CBTI) is a specialized form of CBT that primarily targets sleep habits and behaviors to improve falling asleep, staying asleep, and overall sleep.
A CBTI course of therapy is often guided by a therapist who first screens the subject with a questionnaire and then guides the subject through a series of specific tasks, primarily built around habit development, to achieve better sleep. Follow-up therapy sessions with the therapist are held at later times, and the therapy may be adjusted based on the subject's input about their performance and the effect the tasks they performed had on their sleep outcomes.
Sleep sensing is the process of using sensors to divide a sleeping subject's sleep into different stages, for example: wakefulness, deep sleep, light sleep, and Rapid Eye Movement (REM). In examples, classification may be done manually by an experienced sleep technician, or this task may be done automatically using a computerized model trained with artificial intelligence according to examples of the invention. Other statistics based on this sleep analysis may provide overnight indicators, for example: Total Sleep Time (TST), Wake After Sleep Onset (WASO), Sleep Efficiency (SE), Sleep Onset Latency (SOL), etc.
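As an illustration of the overnight statistics named above, here is a minimal Python sketch computing TST, WASO, SE, and SOL from a per-epoch hypnogram. The 30-second epoch length and the stage labels are common-practice assumptions, not values taken from this specification.

```python
EPOCH_SEC = 30  # conventional scoring epoch length; an assumption here

def sleep_statistics(hypnogram):
    """hypnogram: list of per-epoch stage labels, e.g. "wake", "light", "deep", "rem"."""
    n = len(hypnogram)
    # Sleep Onset Latency: epochs until the first non-wake epoch.
    sol_epochs = next((i for i, s in enumerate(hypnogram) if s != "wake"), n)
    # Total Sleep Time: all non-wake epochs.
    tst_epochs = sum(1 for s in hypnogram if s != "wake")
    # Wake After Sleep Onset: wake epochs occurring after sleep onset.
    waso_epochs = sum(1 for s in hypnogram[sol_epochs:] if s == "wake")
    return {
        "TST_min": tst_epochs * EPOCH_SEC / 60,
        "SOL_min": sol_epochs * EPOCH_SEC / 60,
        "WASO_min": waso_epochs * EPOCH_SEC / 60,
        "SE_pct": 100.0 * tst_epochs / n if n else 0.0,  # sleep time / time in bed
    }
```

For a night scored as 2 minutes awake, 5 minutes asleep, 1 minute awake, 2 minutes asleep, this yields TST = 7 min, SOL = 2 min, WASO = 1 min, SE = 70%.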
Deep interaction is a novel concept for sleep improvement techniques such as CBTI that utilizes sensing of the sleeping subject to improve the success rate of sleep therapy results. The concept consists of the following functional blocks, although modifications, variations and alternatives are possible.
Sleep sensing/segmentation block: the sleep sensing/segmentation block provides automatic sleep stage analysis and generates overnight statistics. It also helps personalize the sleep model by using feedback information from the user survey block, so that the model better reflects the user's sleep on future nights and improves over time. It also performs post-night processing to adjust the sleep inference model to be more or less sensitive to wake events missed during the night.
CBTI block: the CBTI block provides an interface that delivers suggested tasks to the user. This may be fully automated using machine-generated content (via text, voice messages, or light notifications such as LED flashing patterns), or may be partially automated using information and insights provided to a therapist.
User survey block: the user survey block queries the user about the previous night's experience, asking them to estimate sleep parameters such as total sleep time, time to fall asleep, wake-up time, number of nighttime awakenings, and the like. This user feedback is fed back to the sleep sensing block to recalibrate and adjust the model, providing a better personalized sleep model. The innovation here is that this data can be collected automatically by text messages, application notifications, a conversational chatbot, or otherwise.
Logic therapy block: the logic therapy block selects the task to provide based on analysis of the subject's previously analyzed nights, the latest sleep-related activity, the success or impact of previously attempted therapy suggestions, and the like. The result of this block is a CBTI task interaction guidance program that runs on the next night and adjusts the selection, intensity, and different trigger activations (after a specific event detected by the system, or at a specific time). Over time, the technology makes long-term contact with the user, learning and adapting to the user's sleep patterns and habits. By measuring the effect of the therapy, the personalized therapy can be adjusted to be more impactful over time.
Sensing of the subject's sleep and sleep environment is performed by a variety of sensors, including, but not limited to, wireless sensors, light level sensors, motion sensors, acoustic sensors, microphones, and the like. In examples, sensing may be performed using any of the sensing techniques described herein or elsewhere. Further details of the present technology may be described with reference to the following figures.
FIG. 28 is a simplified diagram of a process of deep interaction with a human user that senses signals associated with sleep and provides active feedback, according to an example of the invention. As shown, the process includes devices that can be used to guide/motivate, measure user engagement, measure impact, and gather information for learning. The method includes an engine that processes the information, creates signatures of historical information and a plurality of sleep-related states (e.g., whether the user is sleeping), and provides tasks or reactions to the person. In an example, the process uses active feedback to adjust actions and reactions to help optimize the sleep process.
FIG. 29 is a more detailed illustration of a deep interaction process according to an example of the invention. As shown, the process includes, among other things, a sense block, a context state block, an interaction block, a user response block, a measurement block, and a feedback loop that includes "learn from errors", "feedback and adapt" and "calibration state", and "customize content according to a particular user".
In an example, the sensing block includes hardware and software for detecting various activity (intensity and frequency) and spatial features, including ambient light illumination, temperature, and other information, based on time or other frequency. In an example, the sensing block can also track time of day, day of week, calendar, weather, or other external information. In an example, the sensing block includes a context tracker, a bounding box tracker, a vital sign tracker, and the like.
In an example, the process includes a learning process including learning a context state through a context tracker. The learning process maintains a history pattern from the sensed information. The history pattern may be a spatial "micro-location". These patterns may also include schedules, sleep/pressure, and actions. The learning process also includes an interaction process and related blocks. Further details of the learning process are described below.
As shown, an engine that processes the sensed information is used to determine the context state. In an example, the context state may include "attempting to fall asleep", "asleep", etc. Other context states include "attempting to get out of bed", "abnormal", "bathing", "dressing", etc.
In an example, the process includes an interaction block. The block provides an output to the user. The output may include an automatic night light, personal insights, positive presence signals, alerts, instructions for breathing or other exercises, and instructions for starting a daily exercise routine, each of which may be output via audio and/or audio-visual means. Of course, there can be other variations, modifications, and alternatives.
Fig. 30 is a simplified diagram illustrating feedback with a breathing exercise as a deep interaction process, according to an example of the present invention. As shown, a horizontal line represents a timeline from an earlier time on the left side to a later time on the right side. In an example, the method includes sensing a target area using a plurality of sensors, processing information from the sensors, and determining a "context state" as shown. Once the context state is determined, the method includes an interaction process to output audio information, such as "Hi, you appear to have been sleeping…", when the user is going to bed. The purpose of the output is to provide feedback to the user based on the historical information, helping the user improve sleep.
FIG. 31 is a simplified illustration of details illustrating a deep interaction process according to an example of the present invention. As shown, the process includes stages of sensing, learning, determining context status, and interacting with a user. Details of this process will be further described below by way of example.
FIG. 32 is a detailed illustration of a process showing deep interaction using ambient lighting, according to an example of the invention. In an example, the process includes techniques to learn environmental levels, activity classification, learn night mode, and provide output or advice over time. Details of this process will be further described below by way of example.
Examples:
To demonstrate the principles and operation of the present technology, we provide examples of implementing the present technology in hardware and software. These examples are illustrative only and those skilled in the art will recognize other variations, modifications, and alternatives.
How does the present technique detect presence in bed?
During initial installation of the device in the user's bedroom, the RF sensor of the device registers the characteristics of the radar signal when someone is in the bed. Similarly, the device also records the characteristics of the radar signal when the person is out of bed. After this initial training, the device constantly compares the currently observed radar signal characteristics in real time with the characteristics of the pre-recorded "in-bed" and "out-of-bed" scenes. The device then compares the similarity of the signals using statistical methods and determines whether the most likely currently observed situation in the bedroom is "in bed" or "out of bed". With such information, the device can monitor in real time whether the user is entering or leaving the bed and create interventions based on the user's status/scenario.
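A minimal sketch of the comparison described above: the pre-recorded "in-bed" and "out-of-bed" scenes are averaged into reference signatures, and the current observation is assigned to whichever signature it is closer to. Reducing radar characteristics to numeric feature vectors and using Euclidean distance are simplifying assumptions; the specification does not fix the statistical method.

```python
import math

def mean_signature(recordings):
    """Average a list of feature vectors recorded during initial calibration."""
    n = len(recordings)
    return [sum(v[i] for v in recordings) / n for i in range(len(recordings[0]))]

def classify_presence(observed, in_bed_sig, out_bed_sig):
    """Return "in_bed" or "out_of_bed" for whichever signature is closer."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return "in_bed" if dist(observed, in_bed_sig) <= dist(observed, out_bed_sig) else "out_of_bed"
```

Running `classify_presence` on each newly observed feature vector yields the real-time bed-presence stream that triggers interventions.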
How does the present technique track sleep stages?
The present technology accumulates many recorded and monitored nights of sleep from many people using RF sensors. Each recorded night is also recorded with a third-party sleep monitoring device that provides estimated sleep stages (e.g., "REM", "deep sleep", "light sleep", "awake"). Using the sleep stage labels from the third-party devices, a statistical model is learned that establishes the necessary correlation between the signals observed by the RF sensor and the sleep stage labels. This correlation is used to generate, for each sleep stage, a distinctive set of signal features associated with that stage. For example, the REM stage differs from other sleep stages due to rapid body movements, increased heart rate, and increased respiration. These patterns all produce small vibrations that can be captured by the RF sensor and observed by the device. The trained model is then used to generate sleep stage estimates for new nights recorded in real time with the RF sensor.
In an example, the model used by the present technique associates the features of the currently observed signals with the features of the pre-trained sleep stages. The device then compares the similarity of the signatures using statistical methods and determines the sleep stage most likely being observed. With such information, the device can monitor in real time whether the user is awake or asleep and create interventions based on the user's sleep state, e.g., sound, lights, or text/recorded messages.
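One way to sketch the statistical inference over per-stage feature models is a small Gaussian scoring scheme: each stage stores a pre-trained (mean, standard deviation) per feature, and the observed features are scored under each stage's model. The feature names, the numbers, and the choice of an independent-Gaussian likelihood are all illustrative assumptions, not the specification's model.

```python
import math

STAGE_MODELS = {  # hypothetical pre-trained (mean, std) per feature
    "rem":   {"movement": (0.6, 0.2),  "heart_rate": (70, 5), "resp_rate": (16, 2)},
    "deep":  {"movement": (0.1, 0.05), "heart_rate": (55, 4), "resp_rate": (12, 1.5)},
    "awake": {"movement": (0.9, 0.3),  "heart_rate": (75, 8), "resp_rate": (17, 3)},
}

def log_likelihood(observed, stage_model):
    """Score observed features under one stage's independent Gaussian model."""
    ll = 0.0
    for feature, (mu, sigma) in stage_model.items():
        x = observed[feature]
        ll += -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
    return ll

def classify_stage(observed):
    """Return the sleep stage whose pre-trained model best explains the observation."""
    return max(STAGE_MODELS, key=lambda stage: log_likelihood(observed, STAGE_MODELS[stage]))
```

An observation with low movement, low heart rate, and slow respiration scores highest under the "deep" model, matching the intuition in the preceding paragraphs.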
In an example, the present technology provides real-time stimulation, triggered by sensing the bedroom, the user's location, and user activity at a precise time. In an example, the present technology provides the ability to measure the impact of a particular stimulus (including its intensity and options) on the user, given a particular situation (environmental conditions, user history), quantifying the stimulus-environment-user-state combination. In an example, the present technology provides the ability to learn and adjust the stimulus based on previous interactions with the user.
In an example, the present method and system may also provide a predictive model to implement the following functions:
Emotion prediction:
It is possible to predict the emotional state (e.g., anxiety, valence) of the user and use it to provide personalized cognitive therapies. The emotional state is estimated based on several signals. The research literature shows a strong correlation between emotional stress and heart rate, heart rate variability, and movement during sleep. All of these signals are monitored by the RF sensor, and when they rise, an indication is created that reflects the stress level.
Intent prediction:
Intent prediction is the ability to predict an action of a user before that action actually occurs. In particular, a machine learning model is first trained on RF signals that precede a particular user behavior of interest. The model focuses on the preceding user observations and creates an estimator of the impending action. For example, a user leaving the bed at night is a notable act. Within a few minutes before this, the user shows restless movement and changes in vital signs and sleep stages. The device can then estimate the user's intent to get out of bed before the user actually does.
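A minimal sketch of such a bed-exit estimator: score the preceding minutes of observations (restless movement, heart-rate change, stage shifts) and predict an imminent bed exit when the score crosses a threshold. The window layout, weights, and threshold are illustrative assumptions standing in for the trained model.

```python
def predict_bed_exit(window, threshold=0.6):
    """window: list of per-minute dicts with "movement" in [0, 1], "hr_delta"
    (beats/min above baseline), and a "stage_changed" flag, from the minutes
    immediately preceding the current moment."""
    if not window:
        return False
    restlessness = sum(m["movement"] for m in window) / len(window)
    hr_trend = sum(m["hr_delta"] for m in window) / len(window)
    stage_shifts = sum(1 for m in window if m["stage_changed"]) / len(window)
    # Weighted combination of the three precursor cues named in the text.
    score = 0.5 * restlessness + 0.3 * min(hr_trend / 10.0, 1.0) + 0.2 * stage_shifts
    return score >= threshold
```

A restless window with rising heart rate and frequent stage changes crosses the threshold minutes before the user leaves the bed; a calm window does not.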
Personalized sleep tracking:
Sleep tracking is the estimation of a user's sleep stages based on a pre-trained machine learning model. The model is generic and applicable across user profiles (based on age, gender, BMI, etc.). Sleep tracking becomes personalized when the sleep stage estimates also take into account the history of the particular user's previous few monitored nights. The information contained in the user's previously monitored nights can be used to self-correct and adjust the model to better reflect the individual's unique sleep patterns. For example, the generic sleep stage model estimates that someone has fallen asleep at the beginning of the night, and learns from user feedback that this estimate is erroneous. The model takes this feedback and adjusts itself to be better prepared for the next observed night; it thus becomes more sensitive to signals associated with the awake state and is more likely to detect an awake phase at the beginning of the night.
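The personalization step in the example above can be sketched as a single feedback rule: when the model scored the user as asleep but the user reports having been awake, raise the model's wake sensitivity for the next night. The update rule, step size, and cap are illustrative assumptions.

```python
def update_wake_sensitivity(sensitivity, model_said_asleep, user_said_awake,
                            step=0.1, max_sensitivity=1.0):
    """Nudge the wake-detection sensitivity upward after a confirmed miss.

    sensitivity: current wake sensitivity in [0, 1]; higher means wake-like
    signals are more readily classified as awake on the next observed night.
    """
    if model_said_asleep and user_said_awake:   # model missed an awake phase
        return min(sensitivity + step, max_sensitivity)
    return sensitivity                          # estimate agreed with feedback
```

Repeating this after each night's user survey gradually tailors the generic model to the individual's sleep patterns.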
Personalized therapy:
The therapy may be automated or partially automated. The automated part of the therapy takes into account the user's situation and an analysis of the previous few nights of sleep. However, once a therapy is provided, user feedback should gauge whether the proposed therapy was successful. Personalization of therapy is achieved by adjusting the therapy based on the effects of previously suggested therapy steps and intensities on the individual, and it primarily concerns the factors that affect the user. For example, the user is instructed to adjust their bedtime by 2 hours. If no improvement is found by monitoring the sleep quality index after one week, other therapies may be recommended, or the adjustment may be increased from 2 hours to 4 or 5 hours.
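The adjustment loop in that example can be sketched as a small rule: if the monitored sleep quality index does not improve after a week on the current therapy step, escalate the step's intensity, and once escalation is exhausted, recommend a different therapy. The improvement threshold, step sizes, and therapy names are illustrative assumptions.

```python
def adjust_therapy(current, baseline_se, weekly_se, improvement_pct=2.0):
    """current: dict with "name" and "intensity_hours". Returns next week's plan.

    baseline_se / weekly_se: sleep efficiency (%) before and after this week.
    """
    if weekly_se - baseline_se >= improvement_pct:
        return dict(current)  # the therapy is working; keep it unchanged
    if current["intensity_hours"] < 5:
        # escalate, e.g. from a 2-hour bedtime adjustment toward 4 or 5 hours
        return {"name": current["name"],
                "intensity_hours": min(current["intensity_hours"] + 2, 5)}
    return {"name": "alternate_therapy", "intensity_hours": 2}  # try another therapy
```

Called weekly with the monitored sleep quality index, this yields the escalate-or-switch behavior the paragraph describes.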
Detecting climacteric hectic fever:
Hot flashes occur with rapid changes in the user's heart rate and respiratory signals. The device monitors for the occurrence of such rapid changes using the signal from the RF sensor and indicates detection of a hot flash.
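A hedged sketch of this detection: flag a possible hot flash when consecutive samples of both monitored signals change faster than a threshold. The per-sample jump thresholds and the requirement that both signals change are illustrative assumptions.

```python
def rapid_change_events(samples, threshold):
    """Return indices where consecutive samples differ by more than threshold."""
    return [i for i in range(1, len(samples))
            if abs(samples[i] - samples[i - 1]) > threshold]

def detect_hot_flash(heart_rate, resp_rate, hr_jump=8.0, rr_jump=3.0):
    """Indicate a detection when both signal streams show a rapid change."""
    return bool(rapid_change_events(heart_rate, hr_jump)) and \
           bool(rapid_change_events(resp_rate, rr_jump))
```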
Having described various embodiments, examples and implementations, it should be apparent to those skilled in the relevant art that the foregoing is merely illustrative and not limiting. Many other arrangements of distributing functionality among the various functional elements of the illustrated embodiments or examples are possible. In alternative embodiments or examples, the functionality of any element may be implemented in various ways.
Moreover, in alternative embodiments or examples, the functions of multiple elements may be performed by fewer elements or a single element. Also, in some embodiments, any of the functional elements may perform fewer or different operations than those described in the illustrated embodiments or examples. Moreover, functional elements that are shown as distinct for ease of illustration may be incorporated into other functional elements in a particular implementation. In addition, the ordering of functions or portions of functions may generally be changed. Certain functional elements, files, data structures, etc. may be described in the illustrated embodiment as being located in a particular or central system memory. In other embodiments they may be located on, or distributed across, systems or other platforms that are shared and/or remote. For example, any one or more of the data files or data structures described as being co-located with and "local" to a server or other computer may be located in one or more computer systems remote from the server. Furthermore, it will be appreciated by those skilled in the relevant art that the control and data flows between the functional elements and the various data structures may differ in many ways from those described above or in the documents cited herein. More specifically, intermediate functional elements may direct control or data flows, and the functions of the various elements may be combined, divided, or otherwise rearranged for parallel processing, or for other reasons. In addition, intermediate data structures or files may be used, and various of the described file data structures may be combined or otherwise arranged.
In other examples, the invention disclosed above may be advantageously combined or sub-combined. The architecture block diagrams and flowcharts are grouped for ease of understanding. However, it should be understood that in alternative embodiments of the invention, combinations of blocks, addition of new blocks, rearrangement of blocks, etc. may be considered.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the claims that follow.

Claims (54)

1. A method for improving sleep of a human user, the method comprising:
Sensing human activity associated with sleep of the human user;
processing information from the sensing with a processor;
Determining and outputting a cognitive behavioral therapy for the human user based on the processed information, wherein the cognitive behavioral therapy is configured to improve sleep of the human user;
Monitoring a response of the human user after performing the cognitive behavioral therapy;
adjusting the cognitive behavioral therapy based on the monitored response; and
Improving sleep of the human user.
2. The method of claim 1, wherein the cognitive behavioral therapy comprises an external stimulus or intervention.
3. The method of claim 1, further comprising: identifying the human user and sensing the human activity only from the human user.
4. The method of claim 1, wherein the sensing comprises directing an RF signal at the human user.
5. The method of claim 1, wherein the sensing comprises sensing vital signs from the human user.
6. The method of claim 1, further comprising sensing an environmental condition.
7. The method of claim 1, wherein the sensing further comprises: sensing sleep with a wireless sensor, light sensor, motion sensor, acoustic sensor, or microphone.
8. A system for improving sleep of a human user, the system comprising:
a transmitter configured to transmit a wireless signal to the human user;
A receiver configured to receive at least a portion of the wireless signal reflected back from the human user;
At least one processor; and
At least one non-transitory computer readable medium comprising software configured to cause the at least one processor to:
a) Assessing sleep of the human user based on the reflected signals; and
B) Outputting a cognitive behavioral therapy to the human user, the cognitive behavioral therapy when executed causing sleep of the human user to be improved.
9. The system of claim 8, wherein the wireless signal comprises an RF signal.
10. The system of claim 8, wherein the human activity comprises movement of the human user or vital signs of the human user.
11. The system of claim 8, wherein the sleep is assessed based on a kinematic movement of the human user.
12. The system of claim 8, further comprising a sensor configured to sense an environmental condition.
13. The system of claim 8, wherein the software is configured to cause the at least one processor to determine device settings of a device that reduce the effect of the environmental condition on the human user's sleep, and the at least one processor is configured to send a command to the device to adjust the device according to the determined device settings.
14. A method for processing a signal from a human user related to sleep state, the method comprising:
Detecting a plurality of signals associated with an event using a plurality of sensing devices configured within proximity of the human user, the plurality of signals being associated with sleep stages of the human user at a predetermined time;
receiving the plurality of signals into an input device, the input device coupled to an engine device;
Processing by parsing information associated with the plurality of signals using the engine device;
determining, using the engine device, a classification associated with the event;
Storing the classification associated with the event at the predetermined time;
continuing the steps of detecting, receiving, processing and storing for a plurality of other predetermined times from a first time corresponding to a beginning of a first process to a second time corresponding to an end of a second process to create sleep history data of the human user;
Processing the historical data using an interaction engine to identify a task to be output to the human user, the task being one of a plurality of tasks stored in a memory of a computing device; and
A logic therapy block is used to generate an output based on the task.
15. The method of claim 14, wherein the task is associated with content configured to be transmitted to the human user through one of a plurality of transmission events selected from a text message, a voice message, a light notification, or a mechanical vibration.
16. The method of claim 14, further comprising inputting data from the human user into the memory of the computing device, the data associated with a total sleep time, a time to fall asleep, a wake time, and nighttime awakenings between the first time and the second time; and transmitting the data to the engine device to update sleep history data.
17. The method of claim 14, wherein the plurality of sensing devices comprises an RF sensor, a light sensor, one or more microphones, a mechanical motion sensor, a temperature sensor, a humidity sensor, an image sensor, a pressure sensor, a depth sensor, or an optical sensor.
18. The method of claim 14, wherein the engine device comprises: a pre-trained model consisting of a plurality of statistical features, each of the plurality of statistical features associated with a different sleep stage, each of the plurality of statistical features associated with a selected set of sensors and with the classification; and a detector module configured to receive the incoming information streams of the plurality of signals from the selected sensor group and to perform statistical inference based on a plurality of currently observed signals and the pre-trained model provided for the sleep history.
19. The method of claim 14, wherein the interaction engine comprises a plurality of pre-trained therapies configured according to user age, gender, BMI, and one or more sleep quality indicators, the pre-trained therapies configured to provide the tasks to the human user and configured to be adjusted in frequency and intensity based on the feedback and one or more objective sleep indicators being monitored.
20. The method of claim 19, wherein the interaction engine is configured to perform statistical inference to adjust the task by associating the sleep metrics with the feedback.
21. The method of claim 14, wherein the logic therapy block is configured to determine the task based on the historical data.
22. The method of claim 14, wherein the plurality of sensing devices comprises one or more RF sensors.
23. The method of claim 14, further comprising continuing to perform steps for third through fourth times and continuing to perform steps for nth through mth times to form a plurality of historical data corresponding to four week times.
24. The method of claim 14, wherein the output is selected from the group consisting of: activation of an automatic night light, output with personal insight, positive presence signal, alarm, guided breathing exercise, or guided to begin daily exercise.
25. The method of claim 14, wherein the output comprises a digital cognitive behavioral therapy output selected from the group consisting of: an audio message transmitted to the human user, mechanical vibration to the human user, light emitted on the human user, a screen indication to the user, or changing an environmental setting such as light or temperature.
26. The method of claim 14, wherein the output comprises an audio dialog between the engine and the human user.
27. The method of claim 14, wherein the output is provided interactively with the human user or at a specified time.
28. The method of claim 24, wherein the output is automatically generated using the logical therapy block.
29. The method of claim 14, wherein the plurality of sensing devices comprises at least one RF sensor and an audio sensor.
30. The method of claim 14, wherein the plurality of signals comprises a motion signal, a vital sign signal, a heart rate, a respiratory rate, a spatial location of the human user, or a spatial configuration of the human user.
31. The method of claim 14, further comprising initiating a relaxation routine for the human user based on processing the historical data using the interaction engine, the historical data including at least information regarding heart rate and respiratory rate.
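Claim 31's relaxation routine is triggered from heart-rate and respiratory-rate information in the historical data. One plausible trigger, sketched below, compares current readings against the user's historical baseline; the 15% elevation threshold and the function names are assumptions for illustration, not taken from the patent.

```python
# Hypothetical trigger: start the relaxation routine when both heart rate
# and respiratory rate are elevated relative to the stored baseline.
def baseline(history, key):
    """Mean of a vital-sign field over the historical records."""
    vals = [h[key] for h in history]
    return sum(vals) / len(vals)

def should_start_relaxation(history, current, ratio=1.15):
    """history: list of dicts with 'heart_rate' and 'respiratory_rate' keys."""
    hr_base = baseline(history, "heart_rate")
    rr_base = baseline(history, "respiratory_rate")
    return (current["heart_rate"] > ratio * hr_base
            and current["respiratory_rate"] > ratio * rr_base)
```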
32. The method of claim 14, wherein the output includes information about a sleep window of the human user.
33. The method of claim 14, wherein the output includes information related to stimulus control for the human user.
34. The method of claim 14, wherein the output relates to an emotional state of the human user.
35. The method of claim 14, wherein the plurality of sensing devices comprises an RF sensor, an image capture device, a microphone, a motion sensor, a temperature sensor, a humidity sensor, an image sensor, a pressure sensor, a depth sensor, or an optical sensor.
36. A method for processing a signal from a human user related to sleep state, the method using information in the signal for digital cognitive behavioral therapy, the method comprising:
detecting, using a plurality of sensing devices configured within proximity of the human user, a plurality of signals associated with an event, and associating the plurality of signals with sleep stages of the human user at a predetermined time;
receiving the plurality of signals into an input device, the input device coupled to an engine device;
parsing, using the engine device, information associated with the plurality of signals;
determining, using the engine device, a classification associated with the event;
storing the classification associated with the event at the predetermined time;
continuing the detecting, receiving, parsing, determining, and storing steps for a plurality of other predetermined times, from a first time corresponding to a beginning of a first process to a second time corresponding to an end of a second process, to create sleep history data of the human user;
capturing, using one or more of the plurality of sensing devices, a plurality of current signals associated with a current event at a current time; and
processing, using an interaction engine, the plurality of current signals together with the sleep history data to identify a current task to output to the human user, and generating an output based on the task using a logic therapy block.
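The claim-36 steps form a detect–classify–store loop that accumulates sleep history and then consults that history to pick a task. The sketch below is an illustrative assumption of how such a pipeline could be wired together; the classifier thresholds, labels, and function names are invented for the example and do not come from the patent.

```python
# Illustrative sketch of the claim-36 pipeline: detect signals, classify
# each event, store the classification per predetermined time, then use
# the accumulated history plus current signals to choose a task.
def classify_event(signals):
    """Toy stand-in for the engine device's classifier."""
    if signals.get("motion", 0) > 0.5:
        return "awake"
    return "asleep" if signals.get("respiratory_rate", 16) < 14 else "light_sleep"

def build_sleep_history(signal_stream):
    """signal_stream: iterable of (timestamp, signals-dict) pairs."""
    history = []
    for t, signals in signal_stream:        # detect + receive
        label = classify_event(signals)     # parse + determine classification
        history.append((t, label))          # store at the predetermined time
    return history

def pick_task(history, current_signals):
    """Identify a current task from current signals plus stored history."""
    label = classify_event(current_signals)
    awake_fraction = sum(1 for _, s in history if s == "awake") / max(1, len(history))
    if label == "awake" and awake_fraction > 0.3:
        return "guided_breathing"           # handed to the logic therapy block
    return "no_action"
```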
37. The method of claim 36, wherein the task is associated with content configured to be transmitted to the human user through one of a plurality of transmission events selected from a text message, a voice message, a light notification, or a mechanical vibration.
38. The method of claim 36, further comprising inputting data from the human user into the memory of the computing device, the data associated with a total sleep time, a fall-to-sleep time, a wake time, and a wake break between the first time and the second time; and transmitting the data into the engine device to update the sleep history data.
39. The method of claim 36, wherein the plurality of sensing devices comprises an RF sensor, a light sensor, a microphone, a mechanical motion sensor, a temperature sensor, and a humidity sensor.
40. The method of claim 36, wherein the engine device comprises: a pre-trained model comprising a plurality of statistical features, each of the plurality of statistical features associated with a different sleep stage, with a selected set of sensors, and with the classification; and a detector module configured to receive incoming information streams of the plurality of signals from the selected set of sensors and to perform statistical inference based on a plurality of currently observed signals and the pre-trained model provided for the sleep history data.
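Claim 40's pre-trained model holds one set of statistical features per sleep stage, and the detector scores currently observed sensor signals against each stage. A minimal sketch of that idea follows, using per-stage Gaussian parameters and maximum log-likelihood as the inference; the stage parameters are made-up illustrative values, not trained model weights from the patent.

```python
# Sketch of claim 40: per-stage statistical features (Gaussian mean/std per
# sensor) plus a detector that infers the most likely stage from current
# observations. All numbers below are invented for illustration.
import math

PRETRAINED = {  # stage -> {sensor: (mean, std)}
    "deep":  {"heart_rate": (52, 4), "respiratory_rate": (12, 1.5)},
    "light": {"heart_rate": (60, 5), "respiratory_rate": (14, 2.0)},
    "awake": {"heart_rate": (72, 8), "respiratory_rate": (17, 3.0)},
}

def log_likelihood(observed, params):
    """Sum of Gaussian log-densities over the observed sensors."""
    ll = 0.0
    for sensor, value in observed.items():
        mean, std = params[sensor]
        ll += (-math.log(std * math.sqrt(2 * math.pi))
               - ((value - mean) ** 2) / (2 * std ** 2))
    return ll

def infer_stage(observed):
    """Maximum-likelihood sleep stage for the currently observed signals."""
    return max(PRETRAINED, key=lambda s: log_likelihood(observed, PRETRAINED[s]))
```

A production detector would fuse more sensor streams (RF motion, audio) and a richer model, but the structure — stored per-stage statistics plus an inference step over current observations — matches what the claim recites.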
41. The method of claim 36, wherein the interaction engine comprises a plurality of pre-trained therapies configured according to user age, gender, BMI, and one or more sleep quality indicators, the pre-trained therapies configured to provide the tasks to the human user and configured to be adjusted in frequency and intensity based on the feedback and one or more objective sleep indicators being monitored.
42. The method of claim 41, wherein the interaction engine is configured to perform statistical inference to adjust the task by associating the sleep metrics with the feedback.
43. The method of claim 36, wherein the logic therapy block is configured to determine the task based on the historical data.
44. The method of claim 36, wherein the plurality of sensing devices comprise one or more RF sensors.
45. The method of claim 36, further comprising continuing to perform the steps for a third time through a fourth time, and for an Nth time through an Mth time, to form a plurality of historical data corresponding to a four-week period.
46. The method of claim 36, wherein the output is selected from the group consisting of: activation of an automatic night light, an output with a personal insight, a positive presence signal, an alarm, a guided breathing exercise, and guidance to begin a daily exercise.
47. The method of claim 36, wherein the output comprises a digital cognitive behavioral therapy output selected from the group consisting of: an audio message transmitted to the human user, a mechanical vibration delivered to the human user, light directed at the human user, a screen indication to the human user, and a change to an environmental setting such as light or temperature.
48. The method of claim 36, wherein the output is automatically generated using the logic therapy block.
49. The method of claim 36, wherein the plurality of sensing devices comprises at least one RF sensor and an audio sensor.
50. The method of claim 36, wherein the plurality of signals comprises a motion signal, a vital sign signal, a heart rate, a respiratory rate, a spatial location of the human user, or a spatial configuration of the human user.
51. The method of claim 36, further comprising initiating a relaxation routine for the human user based on processing the historical data using the interaction engine, the historical data including at least information regarding heart rate and respiratory rate.
52. The method of claim 36, wherein the output includes information about a sleep window of the human user.
53. The method of claim 36, wherein the output includes information related to stimulus control for the human user.
54. The method of claim 36, wherein the output relates to an emotional state of the human user.
CN202280066536.6A 2021-08-13 2022-08-09 System for improving sleep through feedback Pending CN118044225A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US17/401,737 2021-08-13
US17/401,737 US11997455B2 (en) 2019-02-11 2021-08-13 System and method for processing multi-directional signals and feedback to a user to improve sleep
PCT/US2022/039857 WO2023018731A1 (en) 2021-08-13 2022-08-09 System for improving sleep with feedback

Publications (1)

Publication Number Publication Date
CN118044225A true CN118044225A (en) 2024-05-14

Family

ID=85200317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280066536.6A Pending CN118044225A (en) 2021-08-13 2022-08-09 System for improving sleep through feedback

Country Status (2)

Country Link
CN (1) CN118044225A (en)
WO (1) WO2023018731A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11004567B2 (en) 2017-08-15 2021-05-11 Koko Home, Inc. System and method for processing wireless backscattered signal using artificial intelligence processing for activities of daily life
US11719804B2 (en) 2019-09-30 2023-08-08 Koko Home, Inc. System and method for determining user activities using artificial intelligence processing
US11184738B1 (en) 2020-04-10 2021-11-23 Koko Home, Inc. System and method for processing using multi core processors, signals, and AI processors from multiple sources to create a spatial heat map of selected region

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US7524279B2 (en) * 2003-12-31 2009-04-28 Raphael Auphan Sleep and environment control method and system
EP2020919B1 (en) * 2006-06-01 2019-07-31 ResMed Sensor Technologies Limited Apparatus, system, and method for monitoring physiological signs
US8348840B2 (en) * 2010-02-04 2013-01-08 Robert Bosch Gmbh Device and method to monitor, assess and improve quality of sleep

Also Published As

Publication number Publication date
WO2023018731A1 (en) 2023-02-16

Similar Documents

Publication Publication Date Title
US11218800B2 (en) System and method for processing multi-directional audio and RF backscattered signals
US11948441B2 (en) System and method for state identity of a user and initiating feedback using multiple sources
US11971503B2 (en) System and method for determining user activities using multiple sources
US11143743B2 (en) System and method for processing multi-directional ultra wide band wireless backscattered signals
US11163052B2 (en) System and method for processing multi-directional frequency modulated continuous wave wireless backscattered signals
US11719804B2 (en) System and method for determining user activities using artificial intelligence processing
US11997455B2 (en) System and method for processing multi-directional signals and feedback to a user to improve sleep
US11175393B2 (en) System and method for processing multi-directional ultra wide band and frequency modulated continuous wave wireless backscattered signals
US11071473B2 (en) System and method for processing using multi-core processors, signals and AI processors from multiple sources
US11558717B2 (en) System and method for processing using multi-core processors, signals, and AI processors from multiple sources to create a spatial heat map of selected region
US11240635B1 (en) System and method for processing using multi-core processors, signals, and AI processors from multiple sources to create a spatial map of selected region
CN118044225A (en) System for improving sleep through feedback
US11776696B2 (en) System and method for processing wireless backscattered signal using artificial intelligence processing for activities of daily life
WO2020102813A1 (en) System and method for processing multi-directional wireless backscattered signals
JP2022191191A (en) Method, device, and system for sound sensing and radio sensing
US20230329574A1 (en) Smart home device using a single radar transmission mode for activity recognition of active users and vital sign monitoring of inactive users
Joudeh Exploiting Wi-Fi Channel State Information for Artificial Intelligence-Based Human Activity Recognition of Similar Dynamic Motions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination