WO2021102158A1 - Infant monitoring and soothing - Google Patents

Infant monitoring and soothing

Info

Publication number
WO2021102158A1
Authority
WO
WIPO (PCT)
Prior art keywords
infant
monitoring device
distress
state
audio
Prior art date
Application number
PCT/US2020/061314
Other languages
French (fr)
Inventor
James Alex DE RAADT ST. JAMES
Christine Alexandra Capota
Rodrigo Alexei Vasquez
Sara Beth Ulius-Sabel
Original Assignee
Bose Corporation
Priority date
Filing date
Publication date
Application filed by Bose Corporation
Publication of WO2021102158A1

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • G08B21/0202 - Child monitoring systems using a transmitter-receiver system carried by the parent and the child
    • G08B21/0205 - Specific application combined with child monitoring using a transmitter-receiver system
    • G08B21/0208 - Combination with audio or video communication, e.g. combination with "baby phone" function

Definitions

  • This disclosure relates to infant monitoring and soothing, and, more particularly, to an infant monitoring device that can detect when an infant is experiencing distress and automatically begins playing (“rendering”) audio designed to soothe the infant, without any intervention from caretakers, and related systems and methods.
  • an infant monitoring device includes a microphone, a speaker, a processor, and memory.
  • the memory stores computer-readable instructions, which, when executed, cause the processor to: detect a state of distress of an infant based on signals received from the microphone, select audio content for rendering based, at least in part, on the detected state of distress of the infant, and cause the speaker to render the selected audio content, thereby to soothe the infant.
  • Implementations may include one of the following features, or any combination thereof.
  • the memory stores a machine learning algorithm that is configured to detect the state of distress by categorizing sounds of the infant picked up by the microphone.
  • the infant monitoring device also includes communication hardware coupled to the processor and the instructions further cause the processor to transmit a live stream of audio picked up by the microphone to another device via the communication hardware.
  • the processor is configured to transmit the live stream of the audio picked up by the microphone based on a determination that the state of distress meets one or more predetermined criteria.
  • the predetermined criteria correspond to preferences received from a user of the infant monitoring device.
  • the infant monitoring device includes communication hardware coupled to the processor and the instructions further cause the processor to transmit a notification to another device via the communication hardware, based on a determination that the state of distress meets one or more predetermined criteria.
  • the notification includes a text message.
  • the infant monitoring device includes one or more biometric sensors and/or environmental sensors.
  • the state of distress is determined based on readings from the one or more biometric sensors and/or environmental sensors.
  • the processor is configured to transmit a live stream of the audio picked up by the microphone or a notification to another device based on a determination that readings from the one or more biometric and/or environmental sensors satisfy one or more predetermined criteria.
  • the predetermined criteria correspond to preferences received from a user of the infant monitoring device.
  • the infant monitoring device is configured to control one or more Internet-of-Things (IoT) devices based on readings from the one or more biometric and/or environmental sensors.
  • the audio content is selected from the group consisting of broadband noise masking audio, recordings of a parent's voice, and/or algorithmically generated soothing content.
  • the audio content is stored locally in the memory, is retrieved from a remote location via a network connection or is generated via a content generation algorithm.
  • the audio content is generated via a content generation algorithm that is stored in the memory and is executed by the processor.
  • the audio content is generated via a content generation algorithm that is executed remotely and the audio content is transmitted to the infant monitoring device via a network connection.
  • the instructions cause the processor to adaptively update the detected state of distress of an infant based on changes in the signals received from the microphone, and change the selected audio content for rendering based, at least in part, on the update to the detected state of distress of the infant.
  • the infant monitoring device is configured to detect a user attending to the infant, and, in response, automatically stops the rendering of the audio content.
  • the infant monitoring device also includes a camera and the infant monitoring device is configured to detect the user attending to the infant via the camera using computer vision.
  • the infant monitoring device is configured to receive signals from an external sensor and detects the user attending to the infant based on the signals received from the external sensor.
  • the external sensor includes a pressure sensor.
  • a method of soothing an infant includes reading signals from a microphone in proximity to the infant, detecting a state of distress of an infant based on signals received from the microphone, selecting audio content for rendering based, at least in part, on the detected state of distress of the infant, and rendering the selected audio content, thereby to soothe the infant.
  • Implementations may include one of the above and/or below features, or any combination thereof.
  • the state of distress is detected via a machine learning algorithm that is configured to detect the state of distress by categorizing sounds of the infant picked up by the microphone.
  • the machine learning algorithm is executed on a device located proximal to the infant.
  • the machine learning algorithm is executed on a network device that is located remotely from the infant.
  • the audio content includes broadband noise masking audio, recordings of a parent's voice, and/or algorithmically generated soothing content.
  • the audio content is generated via a content generation algorithm that is executed on a device located proximal to the infant.
  • the audio content is generated via a content generation algorithm that is executed on a network device that is located remotely from the infant.
  • the content is selected via an audio content selection algorithm that is executed on a device located proximal to the infant.
  • the content is selected via an audio content selection algorithm that is executed on a network device that is located remotely from the infant.
  • the method includes reading one or more biometric sensors and/or environmental sensors.
  • the state of distress is determined based on the readings from the one or more biometric sensors and/or environmental sensors.
  • the method includes controlling one or more Internet-of-Things (IoT) devices based on the readings from the one or more biometric sensors and/or environmental sensors.
  • the method includes transmitting a live stream of audio picked up by the microphone to a parent's device.
  • the live stream of the audio picked up by the microphone is transmitted based on a determination that the state of distress meets one or more predetermined criteria.
  • the predetermined criteria correspond to preferences received from the parent.
  • the method includes transmitting a notification regarding the state of distress to a parent's device.
  • the notification is transmitted based on a determination that the state of distress meets one or more predetermined criteria.
  • the predetermined criteria correspond to preferences received from the parent.
  • the step of detecting a state of distress of an infant based on signals received from the microphone includes continuously monitoring the microphone signal, adaptively updating the detected state of distress of the infant based on changes in the signals received from the microphone over time, and changing the selected audio content for rendering based, at least in part, on the update to the detected state of distress of the infant.
  • the step of selecting audio content for rendering based, at least in part, on the detected state of distress of the infant includes changing the selected audio content for rendering based, at least in part, on updates to the detected state of distress of the infant over time.
  • the method includes detecting the presence of a parent attending to the infant, and, in response, automatically stopping the rendering of the audio content.
  • FIG. 1 is a front perspective view of an infant monitoring device.
  • FIG. 2 is a schematic of the components of an infant monitoring device in one example of the present disclosure.
  • FIG. 3 is an example system for monitoring and soothing a crying or fussy infant in one example of the present disclosure.
  • FIG. 4 is a flowchart showing a potential sequence of steps for monitoring and soothing a crying or fussy infant in one example of the present disclosure.
  • the proposed solution aims to soothe infants who are in or entering a state of distress by detecting their state and administering appropriate audio to soothe them. It may include a device near an infant's crib that senses when an infant is experiencing distress and smartly begins playing audio content to soothe the infant, without intervention from caretakers.
  • the solution may comprise hardware, software, and content components.
  • the hardware components may include a “monitor-like” device that is configured to sit near an infant and that is able to detect a state of the infant via various inputs. Inputs include metrics such as, but not limited to, heart rate (HR), breathing, movement, and audio analysis to determine the state of the infant. The hardware may also have features that a traditional baby monitor would have, such as live video and/or audio and the ability to transmit those to a parent unit.
  • the hardware also has an output: a speaker that administers audio such as broadband noise masking audio, algorithmically generated soothing content, or other audio (e.g., recordings of a parent's voice) that ultimately soothes the infant.
  • a parent may use a microphone provided on an infant monitoring device to record audio to be played back, e.g., via an integrated speaker, to soothe the infant.
  • the software component is where the intelligence of the system lies - it may leverage machine learning to determine a state of an infant and decide which audio should be played at what times. In particular, if an infant's cries and sounds are detected and analyzed, insight may be gained into the state of the infant for soothing.
  • the content component is what the infant hears.
  • the content may vary from broadband noise masking audio or recordings of a parent's voice, to algorithmically generated soothing content.
  • FIG. 1 is a front perspective view of an infant monitoring device 100.
  • the infant monitoring device 100 may include a housing 102, a microphone 104, a speaker 106, camera 107, sensors (generally “108"), a display screen 110, and buttons 112 or a touchscreen 114 for inputting information (e.g., parent settings) into the infant monitoring device 100.
  • a wide variety of forms may be utilized for the infant monitoring device, including a rectangular shape, an elongate cylindrical tower, or a flat square shape.
  • any suitable form factor may be utilized that may be suitable for being placed nearby a sleeping infant, such as on a nightstand or changing table.
  • the infant monitoring device 100 may be configured to be coupled to or supported by a crib, such as by mechanically coupling to, or hanging over and being supported by, the railing of the crib.
  • the infant monitoring device 100 may be configured to be wall mounted above or adjacent to a crib.
  • the housing 102 may be formed into a suitable shape from any rigid materials, including plastics, metals, wood, or composites.
  • the microphone may be any suitable microphone for detecting and sampling sounds within an infant's bedroom or sleep space.
  • the microphone 104 is configured to pick up sound in the local environment, in particular baby cries, which can then be used, with suitable software, to select appropriate audio to be output by the speaker 106 to help soothe a crying or fussy infant back to sleep. Audio picked up from the microphone 104 may also be streamed to a parental unit so that the parent can listen in on the infant.
  • the signal from the microphone (i.e., the microphone signal) may be passed through sound extraction circuitry, which may remove sounds originating from the infant monitoring device from the microphone signal before it is streamed to the parental unit, e.g., so that the parent does not have to listen to the soothing audio from the speaker 106 while he/she is monitoring the infant.
  • Suitable sound extraction circuitry for this purpose is described in U.S. Patent No. 7,525,440, titled Person Monitoring, which issued on April 28, 2009, the complete disclosure of which is incorporated herein by reference.
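
The referenced patent describes dedicated circuitry for this purpose; purely as an illustration of the general idea (not the cited implementation), the sketch below subtracts an estimate of the device's own playback from the microphone signal with a normalized LMS adaptive filter. The function name, filter length, and step size are hypothetical.

```python
import numpy as np

def remove_playback(mic, playback, taps=64, mu=0.1, eps=1e-8):
    """Subtract an adaptive estimate of the device's own playback (echo)
    from the microphone signal using a normalized LMS filter.
    mic, playback: 1-D float arrays of equal length and sample rate.
    Returns the residual, i.e. room sound with most of the device audio removed."""
    w = np.zeros(taps)                      # adaptive filter coefficients
    out = np.zeros_like(mic)
    for n in range(taps, len(mic)):
        x = playback[n - taps:n][::-1]      # most recent playback samples
        echo_est = w @ x                    # estimated echo at sample n
        e = mic[n] - echo_est               # residual = mic minus echo estimate
        w += (mu / (eps + x @ x)) * e * x   # NLMS coefficient update
        out[n] = e
    return out

# Toy example: the playback leaks into the mic with a small delay and gain.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    play = rng.standard_normal(16000)               # 1 s of "soothing audio"
    room = 0.05 * rng.standard_normal(16000)        # faint room sound (the infant)
    mic = 0.6 * np.roll(play, 8) + room             # mic picks up both
    cleaned = remove_playback(mic, play)
    print("echo power before:", float(np.mean((mic - room) ** 2)))
    print("echo power after: ", float(np.mean((cleaned - room) ** 2)))
```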
  • the speaker 106 may include any suitable speaker system for generating sounds, as may be familiar to one of ordinary skill in the art.
  • the speaker 106 may include an upward or downward firing driver along with an acoustic deflector, to provide an omni-directional acoustic experience.
  • Such configurations may be helpful for providing non-directional, room-filling sounds to help soothe a disturbed infant back to sleep.
  • Omni-directional sound systems may be particularly helpful to achieve soothing sounds and a consistent listening experience throughout the room for soothing a fussy infant in the room.
  • any acceptable sound system for the speaker 106 may be employed for producing room-filling sounds, however.
  • the sensors 108 may include one or more of a camera, biometric sensors, and an environmental sensor.
  • a camera may be used to provide streaming video to a parental unit so that a parent can visually monitor the infant.
  • a camera may also be used, in combination with suitable software, for computer vision, e.g., to detect presence of a parent/adult attending to a fussy infant.
  • a camera may provide input to a presence detection algorithm, e.g., provided via machine learning, for detecting the presence of a parent or an adult body and cause the infant monitoring device 100 to stop or pause playback of soothing audio while the parent/adult is present.
  • Suitable AI-based presence (body) detection algorithms are available from IntelliVision, San Jose, CA.
  • the camera may include night vision capability for nighttime use.
  • Biometric sensors may be used to measure an infant's biometrics including, e.g., heartrate (HR), breathing, and movement. Such sensors can include motion and radar sensors. In some cases, the camera and/or microphone may also be used to detect biometrics such as breathing and related motion.
  • Environmental sensors may include a temperature sensor, a humidity sensor, an ambient light sensor, a CO2 sensor, and/or a volatile organic compounds (VOC) sensor, e.g., for detecting a soiled diaper.
  • Environmental sensor signals may inform a cry classification algorithm to help classify a state of an infant, e.g., a state of distress.
  • the environmental sensor signals may also be used by the infant monitoring device to trigger a notification to a parent, e.g., that CO2 in the infant's sleep space exceeds a threshold (safe) level, or that the infant needs changing.
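
As a rough sketch of how such threshold-based notifications might be wired up, the Python snippet below checks environmental readings against limits. The sensor fields, threshold values, and notification text are hypothetical, not values from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentalReading:
    co2_ppm: float
    voc_index: float
    temperature_c: float
    humidity_pct: float

# Hypothetical thresholds; real limits would come from safety guidance
# and from parent preferences.
CO2_ALERT_PPM = 1000.0
VOC_DIAPER_INDEX = 250.0

def environmental_notifications(reading: EnvironmentalReading) -> list[str]:
    """Return parent-facing notifications triggered by environmental readings."""
    notes = []
    if reading.co2_ppm > CO2_ALERT_PPM:
        notes.append(f"CO2 in the sleep space is {reading.co2_ppm:.0f} ppm; consider ventilating.")
    if reading.voc_index > VOC_DIAPER_INDEX:
        notes.append("Elevated VOC reading; the infant may need a diaper change.")
    return notes

print(environmental_notifications(EnvironmentalReading(1250, 300, 22.5, 45)))
```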
  • a display screen 110 may be used to provide information gathered by the infant monitoring device 100 that may be of interest to a parent, such as how many times the infant awoke during a sleep period and/or how many times the infant was soothed back to sleep by the infant monitoring device 100.
  • the information may include biometric or sleep information about the infant, or environmental information about the infant's bedroom or sleep space.
  • the information may include recommendations that the parent might follow to help the infant to sleep through the night, such as suggestions to adjust thermostat or humidifier settings to make a sleep space more comfortable or conducive to sleep.
  • the touchscreen 114 or buttons 112 may include any suitable means for delivering inputs to the infant monitoring device 100, including a tactile sensor coupled to the housing 102 for detecting the presence of a user’s finger and for detecting pressure, such as when a virtual button on touchscreen 114 is being pressed by a user.
  • Virtual buttons may be displayed on the touchscreen 114 in a manner familiar to one of ordinary skill in the art to allow an operating system to accept input commands from a user, such as a parent of, or caregiver for, the infant being monitored.
  • the infant monitoring device 100 may be configured to accept input commands in a variety of ways and in a variety of contexts, by providing a programmable user interface that may present options and choices to a user via the touchscreen 114.
  • the touchscreen 114 may present a permanent display of fixed virtual buttons or include fixed physical buttons 112 for receiving inputs from a user.
  • the display screen 110 and a touchscreen 114 may not be necessary or may be reduced in function because a user’s smartphone or other external computing device may be used for linking with the infant monitoring device 100, displaying information from the infant monitoring device 100 and/or accepting inputs and delivering them to the infant monitoring device 100 to control its functions.
  • FIG. 2 provides an exemplary schematic of an infant monitoring device 100, showing its components.
  • the infant monitoring device 100 may include one or more main board(s) 200 including a processor 202, memory 204, and interconnects 206.
  • the main board 200 controls the operation of several other connected components, such as the microphone 104, an audio amplifier 208, the speaker 106, the display screen 110, and the buttons 112 or touchscreen 114 for inputting information into the infant monitoring device 100.
  • Communications hardware 210 may include any wired or wireless communications means suitable for use with the infant monitoring device 100, such as WiFi, Bluetooth, LTE, USB, micro USB, or any suitable wired or wireless communications technologies known to one of ordinary skill in the art.
  • the main board 200 also receives information from one or more biometric sensors 108a as well as any number of environmental sensors 108b-e, for detecting environmental conditions, such as ambient light (108b), temperature (108c), humidity (108d), and air quality (108e).
  • the main board 200 also receives inputs based on a user’s interactions with a user interface 212, which may include voice activated commands detected by the microphone 104; various audio, alarm, and sleep control inputs received from the buttons 112 or touchscreen 114; or inputs received from a companion application running on a user's (e.g., a parent’s) smart phone or other external computing device.
  • the communications hardware 210 may also provide communications with external data sources, such as connected home services providing access to such things as lights, thermostat, external sensors, and any of the sensors 108.
  • External sensors may include, for example, a biometric sensor that sits underneath a mattress pad in an infant's crib, or a frictionless proximity sensor, e.g., a pressure sensor that sits beneath or is integrated into a rug, to frictionlessly detect when a parent or caregiver is attending to an infant.
  • the microphone 104 may be any suitable microphone for detecting a crying or fussy infant within a bedroom or sleep space.
  • the microphone 104 may be an arrayed microphone that is suitable for distinguishing between sounds produced by the infant monitoring device 100 and sounds produced externally within the infant's bedroom or sleep space.
  • where the microphone 104 includes an arrayed microphone, it may include a plurality of omnidirectional microphones, directional microphones, or any mixture thereof, distributed about the infant monitoring device 100.
  • the microphone 104 may be coupled to the processor 202 for simultaneous processing of the signals from each individual microphone in a manner familiar to one of ordinary skill in the art in order to distinguish between sounds produced by the infant monitoring device 100 and other sounds within the room and to analyze any external noises for use with a state of distress classification algorithm and/or an audio content selection algorithm, as discussed below.
  • the microphone 104 may employ beamforming or other techniques to achieve directionality in a particular direction, for example, towards a sound to be analyzed, e.g., towards a sleeping infant.
  • the microphone 104 may be employed for monitoring the infant's sleep and for receiving spoken user interface commands.
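
The disclosure mentions beamforming without specifying a technique; one common approach, shown below only as an illustrative sketch, is delay-and-sum beamforming, which time-aligns the channels of a microphone array toward a target direction before summing. The array geometry, delays, and signal values here are made up for the example.

```python
import numpy as np

def delay_and_sum(channels: np.ndarray, delays_samples: np.ndarray) -> np.ndarray:
    """Steer a microphone array toward a source by delaying each channel so the
    source's wavefront lines up across mics, then averaging the channels.
    channels: shape (num_mics, num_samples); delays_samples: integer delay per mic."""
    num_mics, num_samples = channels.shape
    aligned = np.zeros_like(channels)
    for m in range(num_mics):
        d = int(delays_samples[m])
        aligned[m, d:] = channels[m, :num_samples - d]   # shift channel m by d samples
    return aligned.mean(axis=0)                          # coherent sum boosts the target

# Toy example: three mics hear the same cry with different arrival times.
rng = np.random.default_rng(1)
cry = rng.standard_normal(1000)
mics = np.stack([np.roll(cry, 0), np.roll(cry, 3), np.roll(cry, 6)]).astype(float)
mics += 0.5 * rng.standard_normal(mics.shape)            # independent noise per mic
steered = delay_and_sum(mics, np.array([6, 3, 0]))       # align all channels to mic 2's timing
print(np.corrcoef(np.roll(cry, 6), steered)[0, 1])       # steered output tracks the cry closely
```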
  • the biometric sensor 108a remotely detects information about a nearby infant, including motion and respiration (breathing) rate, among other biometric indicators.
  • the biometric sensor 108a may be a contactless biometric sensor which may use an internal RF sensor for directing RF signals toward an infant being monitored, measuring the strength of the backscattered signal, and analyzing the backscattered signal to determine the state of various vital signs of the infant over time.
  • Other contactless biometric sensor techniques may include lasers for measuring minor skin deflections caused by an infant's heart rate and blood pressure; or image-based monitoring systems, whereby skin deflections caused by heartbeats and blood pressure may be observed and analyzed over time through a camera (such as camera 107).
  • the biometric sensor 108a may be configured to report detected biometric information to the processor 202 for storage in the memory 204 and to be analyzed for use in the various subroutines described herein.
  • the infant monitoring device 100 may employ a direct biometric sensor as is known to one of ordinary skill in the art.
  • a direct biometric sensor may include probes or contact pads that may be disposed on or under the infant's body or within their mattress or sheets in order to mechanically detect biometric information, such as movement, respiration, heart rate, blood pressure, and body temperature, among others.
  • Such sensors may include accelerometers, other motion sensors, or mechanical sensors such as piezoelectric sensors or other vibration sensors.
  • a direct biometric sensor may include a blood oxygen sensor (or oximeter).
  • the oximeter may be a sensor that relies on transmissive pulse oximetry and/or reflectance pulse oximetry.
  • the oximeter is useful for detecting blood oxygen level in an infant and for detecting potential hypoxemia in an infant.
  • the biometric information detected by the direct biometric sensor may then be communicated to the infant monitoring device 100 using a wired or wireless connection in a manner known to one of ordinary skill in the art.
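
The disclosure does not specify how vital signs are extracted from these sensor signals; one simple, hedged illustration is to estimate respiration rate as the dominant spectral peak of a slowly varying motion signal (whether from RF backscatter amplitude, an under-mattress sensor, or camera-based motion). The band limits and sample rate below are assumptions for the sketch.

```python
import numpy as np

def breathing_rate_bpm(motion_signal: np.ndarray, sample_rate_hz: float) -> float:
    """Estimate respiration rate (breaths per minute) from a slowly varying
    motion signal by finding the dominant spectral peak in the 0.3-1.5 Hz band
    (roughly 18-90 breaths per minute)."""
    x = motion_signal - motion_signal.mean()          # remove DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate_hz)
    band = (freqs >= 0.3) & (freqs <= 1.5)            # plausible breathing band
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0

# Toy example: 40 breaths/min (~0.67 Hz) buried in noise, sampled at 10 Hz for 60 s.
fs = 10.0
t = np.arange(0, 60, 1 / fs)
signal = np.sin(2 * np.pi * (40 / 60) * t) + 0.3 * np.random.default_rng(2).standard_normal(t.size)
print(round(breathing_rate_bpm(signal, fs), 1))       # approximately 40.0
```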
  • the processor 202 detects when an infant is in or is entering a state of distress by detecting their state, e.g., based on a signal(s) provided from the microphone 104 and/or the sensor(s) 108 (e.g., biometric sensor 108a and/or environmental sensor(s) 108b-e).
  • the processor 202 may execute a machine learning algorithm, e.g., stored in the memory 204, for determining a state of the infant from the microphone and/or sensor signals.
  • the machine learning algorithm may be executed on a processor on another device that is connected to the infant monitoring device 100, e.g., via a network connection, such as a cloud-based processor or a processor on a parent's smart phone.
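
The disclosure does not name a particular model or feature set for the distress classifier. Purely as a hedged sketch of the shape of such a pipeline, the example below computes a few hand-crafted audio features and feeds them to a scikit-learn classifier; the distress labels, features, and training data are hypothetical placeholders, and a real system would train offline on labeled infant audio.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical distress categories; the disclosure does not enumerate them.
LABELS = ["calm", "fussing", "crying", "screaming"]

def frame_features(audio: np.ndarray, sample_rate: float) -> np.ndarray:
    """Simple features for a short audio frame: RMS energy, zero-crossing rate,
    and spectral centroid."""
    rms = float(np.sqrt(np.mean(audio ** 2)))
    zcr = float(np.mean(np.abs(np.diff(np.sign(audio)))) / 2.0)
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(audio.size, d=1.0 / sample_rate)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return np.array([rms, zcr, centroid])

# Placeholder training data, used here only to show how the pieces connect;
# the real model would be trained on labeled recordings and shipped with the device.
rng = np.random.default_rng(3)
X_train = rng.standard_normal((200, 3))
y_train = rng.integers(0, len(LABELS), size=200)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def classify_distress(audio_frame: np.ndarray, sample_rate: float = 16000.0) -> str:
    """Map one microphone frame to a distress category."""
    features = frame_features(audio_frame, sample_rate).reshape(1, -1)
    return LABELS[int(model.predict(features)[0])]

print(classify_distress(rng.standard_normal(16000)))   # classify one second of audio
```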
  • the processor 202 may use the determined state of the infant to select appropriate audio content to soothe the infant.
  • Audio content may be selected via a look-up table (LUT) that associates an infant state with certain audio content, which may be stored in the memory 204 or retrieved from a network (LAN or WAN) resource.
  • the audio content may vary from broadband noise masking audio or recordings of a parent's voice, to algorithmically generated soothing content.
  • Algorithmically generated soothing content may be generated via an algorithm executed by the processor 202 and/or by an algorithm executed by processor on another device that is connected to the infant monitoring device 100, e.g., via a network connection, such as a cloud-based processor or a processor on a parent's smart phone.
  • the audio content may include recordings of the parent's voice.
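
The look-up table association mentioned above could be as simple as a dictionary keyed by the detected state. The mapping below is a minimal sketch with hypothetical state names and content identifiers, not associations taken from the disclosure.

```python
import random

# Hypothetical look-up table mapping a detected distress category to candidate
# audio content; real associations would be tuned by designers or learned over time.
AUDIO_LUT = {
    "fussing":   ["broadband_noise_low.wav", "parent_voice_lullaby.wav"],
    "crying":    ["broadband_noise_medium.wav", "generated_soothing_01"],
    "screaming": ["parent_voice_calm.wav", "broadband_noise_high.wav"],
}

def select_audio(distress_state: str) -> str | None:
    """Return a content identifier for the detected state, or None if the
    infant is calm and no playback is needed."""
    candidates = AUDIO_LUT.get(distress_state)
    return random.choice(candidates) if candidates else None

print(select_audio("crying"))
```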
  • the processor 202 administers the audio, via the audio amplifier and the speaker 106, to soothe the infant without necessitating intervention from caretakers, such as parents or other caregivers.
  • the processor 202 may also send notifications, via the communications hardware 210, to a parental unit based on the determined state of the infant, as determined from the microphone signal, and/or based on signals from the biometric or environmental sensors.
  • the processor 202 may send notifications to the parental unit based on input received from the parent. For example, a parent may preselect, e.g., via a user interface on the infant monitoring unit or via the parent's smart phone, when he/she is to receive notifications.
  • the processor 202 may determine if one or more preselected conditions are met before a notification is sent.
  • the condition may include a state of the infant (e.g., a level of distress).
  • the condition may also be triggered if an environmental sensor indicates that the infant needs a diaper change.
  • the processor 202 may use changes in the determined state of the infant to adapt the audio content playback over time.
  • the system can provide parents the option to change the audio content features on demand, and/or to let the audio content playback adapt to changes in the infant without intervention, to maintain soothing benefit from the audio.
  • the processor 202 may selectively stream audio to the parental unit via the communications hardware 210.
  • the parent may elect to receive an audio feed from the infant monitoring device 100 only when the volume of noise exceeds a predetermined threshold, when the noise persists for a predetermined duration of time, and/or when the determined state of the infant meets a predetermined state.
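
A minimal sketch of such a streaming gate is shown below, combining a level threshold, a persistence timer, and a set of alert-worthy states. The class name, default threshold, and duration are assumptions, not values from the disclosure.

```python
import time
from dataclasses import dataclass

@dataclass
class StreamPreferences:
    volume_threshold_db: float = 65.0            # hypothetical default level
    persistence_seconds: float = 30.0            # noise must persist this long
    alert_states: tuple = ("crying", "screaming")

class StreamGate:
    """Decide when to start live-streaming audio to the parental unit, based on
    how long the level has stayed above the preferred threshold and on the
    classified distress state."""
    def __init__(self, prefs: StreamPreferences):
        self.prefs = prefs
        self._above_since = None

    def should_stream(self, level_db: float, distress_state: str, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        if level_db < self.prefs.volume_threshold_db:
            self._above_since = None             # level dropped; reset the timer
            return False
        if self._above_since is None:
            self._above_since = now
        loud_long_enough = (now - self._above_since) >= self.prefs.persistence_seconds
        return loud_long_enough and distress_state in self.prefs.alert_states

gate = StreamGate(StreamPreferences())
print(gate.should_stream(70.0, "crying", now=0.0))    # False: crying just started
print(gate.should_stream(70.0, "crying", now=45.0))   # True: loud for 45 seconds
```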
  • FIG. 3 illustrates an example system 300 for monitoring and soothing a crying or fussy infant.
  • the system 300 detects a state of distress of the infant and administers appropriate audio to soothe them.
  • the system 300 may also selectively output indications of sounds of importance of which a parent or caregiver would like to be notified.
  • the system 300 includes, inter alia, the infant monitoring device 100, an audio output device 302a, a bedside unit 304, and a smart device 306 (e.g., a smart phone).
  • the audio output device 302a outputs masking sounds and allows real-time audio to be piped through to the user.
  • the audio output device 302a is configured to simultaneously output masking sounds and real-time audio, a version of the real-time audio, or an alert.
  • although the audio output device 302a is illustrated as a pair of in-ear audio sleepbuds, the audio output device may be any personal audio output device. Examples include wearable or non-wearable audio output devices such as, for example, over-the-ear headphones, audio sleep masks, audio eyeglasses or frames, around-ear audio devices, open-ear audio devices (such as shoulder-worn or body-worn audio devices), audio wrist watches, speakers, portable bedside units, or the like.
  • the audio output device 302a includes at least one acoustic transducer (also known as driver or speaker) for outputting sound.
  • the acoustic transducer(s) may be configured to transmit audio through air and/or through bone (e.g., via bone conduction, such as through the bones of the skull).
  • the audio output device includes one or more microphones to detect sound/noise in the vicinity of the device to enable active noise reduction (ANR).
  • the audio output device includes hardware and circuitry, including processor(s)/processing system and memory, configured to implement one or more sound management capabilities or other capabilities including, but not limited to, noise cancelling circuitry and/or noise masking circuitry and other sound processing circuitry.
  • the noise cancelling circuitry is configured to reduce unwanted ambient sounds external to the audio output device by using active noise cancelling.
  • the sound masking circuitry is configured to reduce distractions by playing masking sounds via the speakers of the audio output device.
  • the audio output device 302a is an Internet-of-Things (IoT) device.
  • the audio output device 302a receives data, commands, and audio from a hub 308a.
  • the hub 308a sends and receives information from other devices in the system 300 and relays instructions to the audio output device 302a.
  • the hub 308a receives audio, commands, or data from the infant monitoring device 100, bedside unit 304, and/or software interface of a smart device (user device) 306 and transmits instructions to the audio output device 302a.
  • the audio output device 302a includes the processing circuitry of the hub 308a and directly communicates with one or more of the other devices in the system 300.
  • one or more of the infant monitoring device 100, the bedside unit 304, and/or the software interface of the smart device 306 perform features of the hub 308a.
  • the infant monitoring device 100 is a monitoring unit that collects information regarding at least one of audio, video, motion, or the environment from a location that is remote to the audio output device 302a.
  • Audio refers to raw data collected from the infant monitoring device 100, data filtered based on user-set thresholds such as volume, duration or classification, an alert, or an algorithmic analysis of noises in the environment of the infant monitoring device 100.
  • the infant monitoring device 100 collects video and other data from the location remote to the user.
  • the infant monitoring device 100 is configured to collect biometric information associated with the child such as, for example, a breathing rate, respiration rate, or the child's temperature.
  • the infant monitoring device 100 is configured to detect movement of the child or characteristics of the room such as temperature, humidity, or carbon monoxide level of the room.
  • the infant monitoring device 100 is configured to soothe infants who are entering a state of distress by detecting their state and administering appropriate audio to soothe them.
  • the infant monitoring device 100 may detect the state of distress of the infant based on an analysis of signals (microphone signals) received from the onboard microphone 104.
  • the infant monitoring device 100 may feed the microphone signal into a machine learning algorithm that is configured to determine a state of distress of the infant from the audio provided from the mic.
  • the machine learning algorithm may be executed locally on the infant monitoring device 100 or the machine learning algorithm may be executed in the cloud 310 and the microphone signals may be sent to the cloud 310 for processing.
  • the infant monitoring device 100 may communicate with one or more connected home / Internet-of-Things (IoT) devices 312, such as a thermostat, lights, a humidifier, air purifier, and/or an aromatherapy device.
  • the infant monitoring device 100 may send notifications to the user/parent regarding readings from the IoT devices 312, such as ambient temperature, light, or humidity conditions in the infant's sleep space.
  • the notification may include suggestions on how to make the sleep space more comfortable to the infant, such as by adjusting the temperature, lighting, or operation of a humidifier.
  • the infant monitoring device 100 may automatically adjust one or more environmental conditions via control of an IoT device 312.
  • the infant monitoring device 100 may automatically adjust a connected thermostat if the temperature in the infant's sleep space is determined to be too warm.
  • a connected IoT light could be controlled to adjust lighting in the infant's sleep space.
  • a connected IoT humidifier could be controlled to adjust humidity in the infant's sleep space.
  • a connected IoT air purifier could be controlled to improve air quality in the infant's sleep space.
  • the infant monitoring device 100 may activate a connected IoT aromatherapy device to help soothe a fussy infant.
  • decisions to send notifications to a parent and/or control a connected home device may be based on biometric measurements of the infant.
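
As a hedged sketch of this kind of connected-home control, the snippet below nudges hypothetical devices toward comfort targets. The device interface (`set_level`), comfort bounds, and device classes are assumptions for illustration only; a real system would go through a connected-home platform's own API.

```python
from typing import Protocol

class IoTDevice(Protocol):
    """Minimal interface assumed for a connected-home device."""
    def set_level(self, value: float) -> None: ...

def adjust_sleep_space(temperature_c: float, humidity_pct: float,
                       thermostat: IoTDevice, humidifier: IoTDevice) -> list[str]:
    """Nudge connected devices toward a comfortable sleep environment.
    The target ranges here are placeholders, not values from the disclosure."""
    actions = []
    if temperature_c > 22.0:                      # hypothetical upper comfort bound
        thermostat.set_level(21.0)
        actions.append("lowered thermostat to 21 C")
    if humidity_pct < 40.0:                       # hypothetical lower comfort bound
        humidifier.set_level(45.0)
        actions.append("raised humidifier target to 45%")
    return actions

class FakeDevice:
    def set_level(self, value: float) -> None:
        print(f"device set to {value}")

print(adjust_sleep_space(23.5, 35.0, FakeDevice(), FakeDevice()))
```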
  • the infant monitoring device 100 may receive signals from external sensors 314, such as a biometric sensor that sits beneath a mattress in the infant's crib or a pressure sensor that is integrated within or disposed beneath a rug, e.g., for detecting presence of a parent.
  • measurements from the external biometric sensor may be used to inform the selection of audio for soothing the infant.
  • the infant monitoring device 100 may be in communication with a remote service, such as a remote care specialist (generally “316”), e.g., a night nurse 316a or physician 316b, that may provide advice, e.g., via text notifications to the parent.
  • the infant monitoring device may deliver a live audio stream from the infant's bedroom or sleep space to a night nurse, who may then provide recommendations to the parent, e.g., via text to the parent's smart device 306, suggesting things that the parent might do to help soothe the infant, or to alleviate distress in the future.
  • the infant monitoring device 100 may alternatively, or additionally, send biometric or environmental sensor data to the remote service to help inform the advice received.
  • the infant monitoring device 100 is placed in an infant’s bedroom and the user of the audio output device 302a sleeps in a different room.
  • the infant monitoring device 100 engages in bidirectional communication with the bedside unit 304, the cloud 310, and the software interface running on a smart device 306.
  • the bedside unit 304 is a portable unit that is configured to receive audio from the infant monitoring device 100.
  • the bedside unit 304 is configured to receive video and/or other data from the infant monitoring device 100, the cloud 310, and/or software interface running on the smart device 306.
  • the bedside unit engages in bidirectional communication with the infant monitoring device 100, the cloud 310, software interface running on the smart device 306, and hub 308a.
  • the system 300 does not include a hub 308a for the audio output device 302a. Instead, the bedside unit 304 performs the functions of the hub 308a.
  • the bedside unit 304 includes a screen 318 for outputting a video stream transmitted from another device in the system, such as the infant monitoring device 100.
  • the screen 318 provides a user interface for the user to enter preferences regarding when to receive an audio stream or notifications from the infant monitoring device 100.
  • the smart device 306 uses a software interface to provide a user interface (via an application).
  • the smart device 306 engages in bidirectional communication with the bedside unit 304, infant monitoring device 100, the cloud 310, and hub 308a.
  • the user interface enables the user (e.g., a parent or caregiver) to input user preferences regarding when to receive an audio stream (e.g., live audio feed from the infant's room delivered to the audio output device 302a and/or bedside unit 304) or notifications (e.g., text notifications delivered to the bedside unit 304 or the smart device 306) from the infant monitoring device 100.
  • the user may elect to receive an audio stream only when the sound (e.g., the infant crying) exceeds a predetermined decibel value or exhibits certain tonal qualities for a configurable amount of time.
  • the user may elect to receive an audio stream from the infant monitoring device 100 only if the infant distress persists for a predetermined duration of time after the automated soothing audio playback starts; e.g., the parent only receives the audio stream of the distressed infant if the soothing audio does not effectively soothe the infant.
  • the user may elect to receive an audio stream or notification from the infant monitoring device 100 only if the state of infant distress meets or exceeds some predetermined threshold value or classification.
  • the user may elect to receive an audio stream or notification when the onboard environmental sensors indicate the infant needs a diaper change, and/or when the biometric sensors indicate an irregular heart rate or breathing rate, or a body temperature above or below a predetermined threshold.
  • the user may not have nighttime caregiving responsibilities on certain days of the week or may have a partner assume caregiving responsibilities at some point during the user’s sleep period.
  • the user may input a schedule including days of the week or hours during a sleep period for each day that he or she would like to be notified and/or receive an audio stream from the infant monitoring device 100.
  • the triggers (settings) that determine when the user will receive an audio stream or notification from the infant monitoring device 100 vary based on the day of the week or time of day.
  • the application may allow the user to enter preferences for the combination of day, time, and sounds of importance.
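
A simple way to represent such day/time preferences is a schedule object that the notification logic consults before alerting a given caregiver. The days, hours, and field names below are purely illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime, time

@dataclass
class NotificationSchedule:
    """Days of the week and overnight hours during which this caregiver wants
    to receive streams or notifications. All values here are illustrative."""
    active_days: set = field(default_factory=lambda: {"Mon", "Tue", "Wed", "Thu"})
    start: time = time(22, 0)     # on duty from 10 pm...
    end: time = time(3, 0)        # ...until 3 am, when a partner takes over

    def is_on_duty(self, when: datetime) -> bool:
        day = when.strftime("%a")
        t = when.time()
        if self.start <= self.end:
            in_window = self.start <= t <= self.end
        else:                     # window wraps past midnight
            in_window = t >= self.start or t <= self.end
        return day in self.active_days and in_window

schedule = NotificationSchedule()
print(schedule.is_on_duty(datetime(2020, 11, 17, 23, 30)))   # Tuesday 11:30 pm -> True
print(schedule.is_on_duty(datetime(2020, 11, 17, 4, 30)))    # Tuesday 4:30 am -> False
```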
  • the user may set a mode for both the user's personal audio output device and also for the personal audio output device of another user in the system.
  • the ability for a user to set modes for both personal audio output devices 302a and 302b facilitates handoff during a sleep period. For example, when a second user assumes night-time caregiving responsibilities, the second user may set the first user’s mode to an “off” or “sleep” state and set the second user's mode to an “aware” state. This may help ensure that at least one of multiple personal audio output devices in the system 300 is set to an “aware” state, especially during transitions from one primary caregiver to another.
  • the application allows the user to enter preferences regarding any combination of day, time, and sounds of importance for not only the user but also for another user of the system 300.
  • the application enables the user to enter preferences for the user of personal audio output devices 302a and 302b.
  • the software interface is run on the bedside unit 304, thereby eliminating the need for the smart device 306.
  • a user interface is provided on both the smart device 306 and the bedside unit 304.
  • the user interface running on the bedside unit 304 may offer fewer customization features as compared to the user interface running on the smart device 306.
  • the user has the option to set the application to an “aware" mode or a “sleep” or “off” mode on both the smart device 306 and the bedside unit 304.
  • Providing, at least, some user input options on the bedside unit 304 helps to reduce the need for a user to look at a bright screen of the smart device 306 before going to sleep.
  • the cloud 310 refers to a cloud/remote/network server where applications and data are maintained and made available using the Internet.
  • the user's preferences input using the software interface can be stored on the cloud 310.
  • the user input received via the software interface is transmitted by the smart device 306 to the cloud 310.
  • the cloud 310 maintains the user's preferences.
  • the smart device 306 provides one of the infant monitoring device 100 or bedside unit 304 with the preferences input by the user, and the infant monitoring device 100 or bedside unit 304 provides the cloud 310 with the user's preferences.
  • the cloud 310 determines the state of infant distress, selects audio for playback by the infant monitoring device 100 to soothe the infant, and determines when to output indications (an audio stream of the infant's sleep space or a notification, e.g., an audible or text notification) to alert the user/parent of the state of distress.
  • as the devices in the system 300 communicate with each other and, in turn, with the cloud 310, the user’s preferences and any instructions to output an indication of a state of infant distress may be relayed between devices and the cloud before arriving at the intended location or device in the system 300.
  • the cloud 310 supports artificial intelligence (AI)-based capabilities.
  • the AI-based capabilities help classify sounds in the infant's bedroom or sleep space that may be indicative of infant distress.
  • the AI-based capabilities help determine which of multiple users to alert in response to a detected state of infant distress based on historical data and/or the type of detected sound.
  • the system 300 does not include the cloud 310.
  • the operations are performed off-line or on a local network.
  • One or more of the bedside unit 304 or infant monitoring device 100 maintain user preferences and process data to identify and classify sounds indicative of infant distress.
  • the bedside unit 304 or the infant monitoring device 100 have a computer chip programmed to cause the bedside unit 304, the infant monitoring device 100, or the combination of the bedside unit 304 and the infant monitoring device 100 to perform the actions for protecting a user's sleep by outputting masking sounds and selectively notifying the user when the infant is in a state of distress that necessitates the user's attention, as described herein.
  • FIG. 4 shows an example of a method 400 for assessing an infant's state of distress and soothing the infant.
  • an infant monitoring device is provided near an infant.
  • the infant monitoring device includes a microphone for detecting sounds within a bedroom or sleep space; a processor; and memory, which may store, among other things, a machine learning algorithm for detecting an infant's state of distress and an algorithm for selecting audio content (content selection algorithm) for playback based on the detected state of distress for execution by the processor, as described above; and optionally, one or more biometric sensors and/or environmental sensors.
  • the infant monitoring device is initialized.
  • Parent or caregiver (generally “user”) input may also be received (step 406).
  • the input may correspond to settings. For example, the user may select when they want to receive a live audio stream or notification from the infant monitoring device. The user may elect to receive a live audio stream and/or notification only under prescribed circumstances, thus allowing the user to avoid being disrupted when their attention is not necessary.
  • the input may be received via a user's bedside unit, via the user's smart phone, or via an interface on the infant monitoring unit.
  • the infant monitoring device may begin reading signals from the microphone (step 408) in order to determine sounds produced by the infant. Any detected sound information from the infant may be fed into a machine learning algorithm in order to categorize a state of distress of the infant (step 410). The state of distress is then fed into an audio selection algorithm that selects appropriate audio for soothing the infant based on the state of distress (step 412), and the selected audio content is rendered via the speaker of the infant monitoring device to soothe the infant (step 414).
  • the state of distress can also be compared to the user's settings to determine whether the current state of distress satisfies the criteria for alerting the user (step 416), and, if so, the audio from the microphone can be live streamed to a user's device (e.g., a smart phone, audio output device, or bedside unit) or a notification (e.g., text message) can be sent to a user's device (step 418).
  • a user's device e.g., a smart phone, audio output device, or bedside unit
  • a notification e.g., text message
  • the infant monitor may continuously monitor the signals from the microphone in order to adaptively update the detected state of distress and update the selected audio content accordingly, i.e., based on changes in the detected state of distress.
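
A high-level sketch of one pass of this loop (steps 408-418 of FIG. 4) is shown below. All callables are placeholders standing in for the classifier, content selection, playback, preference, and notification components described above; none of the names come from the disclosure.

```python
import random

def monitor_once(mic_frame, classifier, select_audio, speaker, prefs, notify, current_content):
    """One pass of the FIG. 4 loop: classify the microphone frame, adapt the
    soothing playback, and alert the caregiver if their criteria are met.
    Returns the content now playing."""
    state = classifier(mic_frame)                 # step 410: categorize distress
    content = select_audio(state)                 # step 412: pick soothing audio
    if content != current_content:                # adapt playback as the state changes
        speaker(content)                          # step 414 (None stops playback)
        current_content = content
    if prefs(state):                              # step 416: check caregiver criteria
        notify(f"infant state: {state}")          # step 418: notification and/or live stream
    return current_content

# Demo with toy stand-ins for each component, over five microphone frames.
playing = None
for frame in range(5):
    playing = monitor_once(
        mic_frame=frame,
        classifier=lambda f: random.choice(["calm", "fussing", "crying"]),
        select_audio=lambda s: None if s == "calm" else f"soothing_for_{s}",
        speaker=lambda c: print("now playing:", c),
        prefs=lambda s: s == "crying",
        notify=print,
        current_content=playing,
    )
```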
  • the infant monitoring device may also begin taking biometric and/or environmental readings relevant to the infant and/or the infant's sleep space (step 420).
  • the biometric and/or environmental readings may be utilized to inform the machine learning algorithm to assist in assessing the state of distress of the infant, and/or to inform the content selection algorithm to aid in the selection of audio content for playback to soothe the infant.
  • the biometric and/or environmental readings may be compared to the user's settings to determine whether the current state of distress satisfies the criteria for alerting the user (step 416).
  • the infant monitoring device may determine whether a parent or caregiver is attending to the infant, and, if so, the infant monitoring device may be programmed to halt playback of the soothing audio (step 424). Presence detection may be performed by way of computer vision, e.g., using a camera provided on the infant monitoring device, and/or via an external pressure sensor that may be incorporated within or disposed beneath a rug near the infant in the infant’s bedroom or sleep space.
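
The pause decision itself can be a simple gate over whichever presence signals are available; the sketch below assumes a computer-vision flag and an under-rug pressure reading, with a placeholder threshold.

```python
def playback_should_pause(camera_sees_adult: bool, rug_pressure_kg: float,
                          pressure_threshold_kg: float = 20.0) -> bool:
    """Pause soothing audio while a caregiver is attending to the infant
    (FIG. 4, step 424). Presence may be inferred from a computer-vision body
    detection and/or an under-rug pressure sensor; the threshold is a placeholder."""
    return camera_sees_adult or rug_pressure_kg > pressure_threshold_kg

print(playback_should_pause(camera_sees_adult=False, rug_pressure_kg=62.0))  # True
print(playback_should_pause(camera_sees_adult=False, rug_pressure_kg=0.0))   # False
```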
  • the state of distress can also be compared to device settings to determine whether the current state of distress satisfies the criteria for notifying a remote care specialist, such as a night nurse or physician (step 426), and, if so, the audio from the microphone, and/or readings from the biometric and/or environmental sensors may be transmitted to the remote care specialist (step 428).
  • the remote care specialist may send (e.g., via text message) suggestions to the parent for soothing the infant, or for making the infant's sleep space more conducive to sleep in the future.
  • the state of distress and/or the selected audio content could be passed to experts to train a model, and, at a future date, diagnose conditions based on the information provided.
  • the microphone readings, selected audio content, biometric readings, and/or environmental readings may be used to learn about infants in aggregate over time and/or about an individual infant over time. These learnings may be used to update the infant monitoring device, e.g., update the state of distress classification algorithm and/or the audio content selection algorithm.
  • aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “component,” “circuit,” “module” or “system.”
  • aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • each block in the flowchart or block diagrams may represent a module, segment or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
  • Implementations of the systems and methods described above comprise computer components and computer-implemented steps that will be apparent to those skilled in the art.
  • the computer-implemented steps may be stored as computer-executable instructions on a computer-readable medium such as, for example, floppy disks, hard disks, optical disks, Flash ROMS, nonvolatile ROM, and RAM.
  • the computer-executable instructions may be executed on a variety of processors such as, for example, microprocessors, digital signal processors, gate arrays, etc.
  • systems, methods and apparatuses outlined above may include various hardware and operating software, familiar to those of skill in the art, for running software programs as well as communicating with and operating any devices, including, for example, a biometric sensor, environmental sensors, a user interface, a computer network, a sound system, and any other internal or external devices.
  • Such computerized systems may also include memory and storage media, and other internal and external components which may be used for carrying out the operations of this disclosure.
  • such computer systems may include one or more processors for processing and controlling the operation of the computer system, thus, embodying the processes of this disclosure. To that end, the processor, associated hardware and communications systems may carry out the various examples presented herein.

Abstract

An infant monitoring device includes a microphone, a speaker, a processor, and memory. The memory stores computer-readable instructions, which, when executed, cause the processor to: detect a state of distress of an infant based on signals received from the microphone, select audio content for rendering based, at least in part, on the detected state of distress of the infant, and cause the speaker to render the selected audio content, thereby to soothe the infant.

Description

INFANT MONITORING AND SOOTHING
BACKGROUND
[0001] This disclosure relates to infant monitoring and soothing, and, more particularly, to an infant monitoring device that can detect when an infant is experiencing distress and automatically begins playing (“rendering”) audio designed to soothe the infant, without any intervention from caretakers, and related systems and methods.
SUMMARY
[0002] All examples and features mentioned below can be combined in any technically possible way.
[0003] In one aspect, an infant monitoring device includes a microphone, a speaker, a processor, and memory. The memory stores computer-readable instructions, which, when executed, cause the processor to: detect a state of distress of an infant based on signals received from the microphone, select audio content for rendering based, at least in part, on the detected state of distress of the infant, and cause the speaker to render the selected audio content, thereby to soothe the infant.
[0004] Implementations may include one of the following features, or any combination thereof.
[0005] In some implementations, the memory stores a machine learning algorithm that is configured to detect the state of distress by categorizing sounds of the infant picked up by the microphone.
[0006] In certain implementations, the infant monitoring device also includes communication hardware coupled to the processor and the instructions further cause the processor to transmit a live stream of audio picked up by the microphone to another device via the communication hardware.
[0007] In some cases, the processor is configured to transmit the live stream of the audio picked up by the microphone based on a determination that the state of distress meets one or more predetermined criteria.
[0008] In certain cases, the predetermined criteria correspond to preferences received from a user of the infant monitoring device.
[0009] In some examples, the infant monitoring device includes communication hardware coupled to the processor and the instructions further cause the processor to transmit a notification to another device via the communication hardware, based on a determination that the state of distress meets one or more predetermined criteria.
[0010] In certain examples, the notification includes a text message.
[0011] In some implementations, the infant monitoring device includes one or more biometric sensors and/or environmental sensors.
[0012] In certain implementations, the state of distress is determined based on readings from the one or more biometric sensors and/or environmental sensors.
[0013] In some cases, the processor is configured to transmit a live stream of the audio picked up by the microphone or a notification to another device based on a determination that readings from the one or more biometric and/or environmental sensors satisfy one or more predetermined criteria.
[0014] In certain cases, the predetermined criteria correspond to preferences received from a user of the infant monitoring device.
[0015] In some examples, the infant monitoring device is configured to control one or more Internet-of-Things (IoT) devices based on readings from the one or more biometric and/or environmental sensors.
[0016] In certain examples, the audio content is selected from the group consisting of broadband noise masking audio, recordings of a parent's voice, and/or algorithmically generated soothing content.
[0017] In some implementations, the audio content is stored locally in the memory, is retrieved from a remote location via a network connection or is generated via a content generation algorithm.
[0018] In certain implementations, the audio content is generated via a content generation algorithm that is stored in the memory and is executed by the processor.
[0019] In some cases, the audio content is generated via a content generation algorithm that is executed remotely and the audio content is transmitted to the infant monitoring device via a network connection.
[0020] In certain cases, the instructions cause the processor to adaptively update the detected state of distress of an infant based on changes in the signals received from the microphone, and change the selected audio content for rendering based, at least in part, on the update to the detected state of distress of the infant.
[0021] In some examples, the infant monitoring device is configured to detect a user attending to the infant, and, in response, automatically stops the rendering of the audio content.
[0022] In certain examples, the infant monitoring device also includes a camera and the infant monitoring device is configured to detect the user attending to the infant via the camera using computer vision.
[0023] In some implementations, the infant monitoring device is configured to receive signals from an external sensor and detects the user attending to the infant based on the signals received from the external sensor.
[0024] In certain implementations, the external sensor includes a pressure sensor.
[0025] In another aspect, a method of soothing an infant includes reading signals from a microphone in proximity to the infant, detecting a state of distress of an infant based on signals received from the microphone, selecting audio content for rendering based, at least in part, on the detected state of distress of the infant, and rendering the selected audio content, thereby to soothe the infant.

[0026] Implementations may include one of the above and/or below features, or any combination thereof.
[0027] In some implementations, the state of distress is detected via a machine learning algorithm that is configured to detect the state of distress by categorizing sounds of the infant picked up by the microphone.
[0028] In certain implementations, the machine learning algorithm is executed on a device located proximal to the infant.
[0029] In some cases, the machine learning algorithm is executed on a network device that is located remotely from the infant.
[0030] In certain cases, the audio content includes broadband noise masking audio, recordings of a parent's voice, and/or algorithmically generated soothing content.
[0031] In some examples, the audio content is generated via a content generation algorithm that is executed on a device located proximal to the infant.
[0032] In certain examples, the audio content is generated via a content generation algorithm that is executed on a network device that is located remotely from the infant.
[0033] In some implementations, the content is selected via an audio content selection algorithm that is executed on a device located proximal to the infant.
[0034] In certain implementations, the content is selected via an audio content selection algorithm that is executed on a network device that is located remotely from the infant.
[0035] In some cases, the method includes reading one or more biometric sensors and/or environmental sensors.
[0036] In certain cases, the state of distress is determined based on the readings from the one or more biometric sensors and/or environmental sensors.

[0037] In some examples, the method includes controlling one or more Internet-of-Things (IoT) devices based on the readings from the one or more biometric sensors and/or environmental sensors.
[0038] In certain examples, the method includes transmitting a live stream of audio picked up by the microphone to a parent's device.
[0039] In some implementations, the live stream of the audio picked up by the microphone is transmitted based on a determination that the state of distress meets one or more predetermined criteria.
[0040] In certain implementations, the predetermined criteria correspond to preferences received from the parent.
[0041] In some cases, the method includes transmitting a notification regarding the state of distress to a parent's device.
[0042] In certain cases, the notification is transmitted based on a determination that the state of distress meets one or more predetermined criteria.
[0043] In some examples, the predetermined criteria correspond to preferences received from the parent.
[0044] In certain examples, the step of detecting a state of distress of an infant based on signals received from the microphone includes continuously monitoring the microphone signal and adaptively updating the detected state of distress of the infant based on changes in the signals received from the microphone over time. And the step of selecting audio content for rendering based, at least in part, on the detected state of distress of the infant includes changing the selected audio content for rendering based, at least in part, on updates to the detected state of distress of the infant over time.
[0045] In some implementations, the method includes detecting the presence of a parent attending to the infant, and, in response, automatically stopping the rendering of the audio content.

BRIEF DESCRIPTION OF THE DRAWINGS
[0046] FIG. 1 is a front perspective view of an infant monitoring device.
[0047] FIG. 2 is a schematic of the components of an infant monitoring device in one example of the present disclosure.
[0048] FIG. 3 is an example system for monitoring and soothing a crying or fussy infant in one example of the present disclosure.
[0049] FIG. 4 is a flowchart showing a potential sequence of steps for monitoring and soothing a crying or fussy infant in one example of the present disclosure.
DETAILED DESCRIPTION
[0050] When an infant is crying or fussy, parents, caretakers, and infants alike experience stress. A broad range of solutions for soothing an infant are known, many of which require significant involvement, time, and trial-and-error from parents and caretakers. Current approaches to soothing infants have weaknesses in that they are inefficient and largely reactive rather than proactive. Regarding efficiency, it can take considerable time and effort to soothe an infant: when an infant is in distress, caretakers often embark on a lengthy trial-and-error period before finding something that works. The middle of the night, in particular, is a time when soothing an infant takes time away from a parent's rest. Regarding lack of proactivity, in some circumstances an infant may experience an "escalation period" in which they go from discomfort to distress. During this time, there are missed opportunities to soothe the baby and de-escalate the discomfort before it becomes acute.
[0051] The proposed solution aims to soothe infants who are in or entering a state of distress by detecting their state and administering appropriate audio to soothe them. It may include a device near an infant's crib that senses when an infant is experiencing distress and smartly begins playing audio content to soothe the infant, without intervention from caretakers.
[0052] The solution may comprise hardware, software, and content components. The hardware components may include a "monitor-like" device that is configured to sit near an infant and that is able to detect a state of the infant via various inputs. Inputs include metrics such as, but not limited to, heart rate (HR), breathing, movement, and audio analysis to determine the state of the infant. It is possible that the hardware also has features that a traditional baby monitor would, such as live video and/or audio and the ability to transmit those to a parent unit. The hardware also has an output: a speaker that administers audio such as broadband noise masking audio, algorithmically generated soothing content, or other audio (e.g., recordings of a parent's voice) that ultimately soothes the infant. For example, in some cases, a parent may use a microphone provided on an infant monitoring device to record audio to be played back, e.g., via an integrated speaker, to soothe the infant.

[0053] The software component is where the intelligence of the system lies: it may leverage machine learning to determine a state of an infant and decide which audio should be played at what times. In particular, if an infant's cries and sounds are detected and analyzed, insight may be gained into the state of the infant for soothing.
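By way of illustration only, the following Python sketch shows one possible shape for such a software layer: a few audio features are extracted from the microphone signal, a placeholder classifier maps the features to an infant state, and the state is mapped to an audio choice. The feature set, thresholds, state labels, and content names are assumptions made for this example and are not drawn from this disclosure.

```python
# Minimal sketch (not the patented implementation) of mapping microphone audio
# to an inferred infant state and then to a soothing-audio choice.
import numpy as np

def extract_features(samples: np.ndarray, sample_rate: int) -> np.ndarray:
    """Compute a few simple features from a frame of raw audio samples."""
    rms = np.sqrt(np.mean(samples ** 2))                      # loudness proxy
    zcr = np.mean(np.abs(np.diff(np.sign(samples)))) / 2.0    # zero-crossing rate
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / sample_rate)
    centroid = np.sum(spectrum * freqs) / (np.sum(spectrum) + 1e-9)
    return np.array([rms, zcr, centroid])

def classify_state(features: np.ndarray) -> str:
    """Placeholder for a trained classifier; thresholds here are made up."""
    loudness, _, centroid = features
    if loudness < 0.01:
        return "calm"
    return "distressed" if centroid > 1000.0 else "fussy"

def choose_audio(state: str) -> str:
    """Map an inferred state to a soothing-audio category (illustrative labels)."""
    return {"fussy": "broadband_noise",
            "distressed": "parent_voice_recording"}.get(state, "none")
```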
[0054] The content component is what the infant hears. The content may vary from broadband noise masking audio or recordings of a parent's voice, to algorithmically generated soothing content.
[0055] FIG. 1 is a front perspective view of an infant monitoring device 100. As shown, the infant monitoring device 100 may include a housing 102, a microphone 104, a speaker 106, a camera 107, sensors (generally "108"), a display screen 110, and buttons 112 or a touchscreen 114 for inputting information (e.g., parent settings) into the infant monitoring device 100. A wide variety of forms may be utilized for the infant monitoring device, including a rectangular shape, an elongate cylindrical tower, or a flat square shape. However, as one of ordinary skill in the art will appreciate, any form factor suitable for being placed near a sleeping infant, such as on a nightstand or changing table, may be utilized. In some cases, the infant monitoring device 100 may be configured to be coupled to or supported by a crib, such as by mechanically coupling to, or hanging over and being supported by, the railing of the crib. Alternatively, or additionally, the infant monitoring device 100 may be configured to be wall mounted above or adjacent to a crib. The housing 102 may be formed into a suitable shape from any rigid materials, including plastics, metals, wood, or composites.
[0056] The microphone may be any suitable microphone for detecting and sampling sounds within an infant's bedroom or sleep space. The microphone 104 is configured to pick up sound in the local environment, in particular baby cries, which can then be used, with suitable software, to select appropriate audio to be output by the speaker 106 to help soothe a crying or fussy infant back to sleep. Audio picked up from the microphone 104 may also be streamed to a parental unit so that the parent can listen in on the infant. The signal from the microphone (i.e., the microphone signal) may also be used as input to monitored sound extraction circuitry that attenuates, e.g., cancels, noise from sources in the infant's bedroom or sleep space other than noises coming from the infant. For example, the sound extraction circuitry may remove sounds originating from the infant monitoring device from the microphone signal before it is streamed to the parental unit, e.g., so that the parent does not have to listen to the soothing audio from the speaker 106 while he/she is monitoring the infant. Suitable sound extraction circuitry for this purpose is described in U.S. Patent No. 7,525,440, titled Person Monitoring, which issued on April 28, 2009, the complete disclosure of which is incorporated herein by reference.
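The referenced patent describes suitable circuitry; purely as a generic illustration of the underlying idea, and not as the circuitry of that patent or of this disclosure, the sketch below uses a normalized least-mean-squares (NLMS) adaptive filter to estimate and subtract the device's own known playback signal from the microphone signal before it is streamed. The filter length and step size are arbitrary example values.

```python
# Generic adaptive-cancellation sketch: remove the device's own soothing audio
# (a known reference signal) from the monitored microphone feed.
import numpy as np

def cancel_playback(mic, reference, taps: int = 64, mu: float = 0.5) -> np.ndarray:
    """Normalized LMS filter: estimate the playback echo and subtract it."""
    mic = np.asarray(mic, dtype=float)
    reference = np.asarray(reference, dtype=float)
    w = np.zeros(taps)                        # adaptive filter weights
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = reference[n - taps:n][::-1]       # most recent reference samples
        echo_estimate = w @ x
        e = mic[n] - echo_estimate            # residual, ideally only infant sounds
        w += mu * e * x / (x @ x + 1e-9)      # NLMS weight update
        out[n] = e
    return out
```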
[0057] The speaker 106 may include any suitable speaker system for generating sounds, as may be familiar to one of ordinary skill in the art. In some examples, the speaker 106 may include an upward or downward firing driver along with an acoustic deflector, to provide an omni-directional acoustic experience. Such configurations may be helpful for providing non-directional, room-filling sounds to help soothe a disturbed infant back to sleep. Omni-directional sound systems may be particularly helpful to achieve soothing sounds and a consistent listening experience throughout the room for soothing a fussy infant in the room. As one of ordinary skill in the art will appreciate, any acceptable sound system for the speaker 106 may be employed for producing room-filling sounds, however.
[0058] The sensors 108 may include one or more of a camera, biometric sensors, and an environmental sensor. A camera may be used to provide streaming video to a parental unit so that a parent can visually monitor the infant. A camera may also be used, in combination with suitable software, for computer vision, e.g., to detect the presence of a parent/adult attending to a fussy infant. For example, a camera may provide input to a presence detection algorithm, e.g., provided via machine learning, for detecting the presence of a parent or an adult body, causing the infant monitoring device 100 to stop or pause playback of soothing audio while the parent/adult is present. Suitable AI-based presence (body) detection algorithms are available from IntelliVision, San Jose, CA. The camera may include night vision for nighttime use.
[0059] Biometric sensors may be used to measure an infant's biometrics including, e.g., heart rate (HR), breathing, and movement. Such sensors can include motion and radar sensors. In some cases, the camera and/or microphone may also be used to detect biometrics such as breathing and related motion.

[0060] Environmental sensors may include a temperature sensor, a humidity sensor, an ambient light sensor, a CO2 sensor, and a volatile organic compounds (VOC) sensor, e.g., for detecting a soiled diaper. Environmental sensor signals may inform a cry classification algorithm to help classify a state of an infant, e.g., a state of distress. The environmental sensor signals may also be used by the infant monitoring device to trigger a notification to a parent, e.g., that CO2 in the infant's sleep space exceeds a threshold (safe) level, or that the infant needs changing.
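As a non-limiting illustration of how such environmental triggers might be expressed, the sketch below checks a few sensor readings against thresholds. The sensor names, units, and limit values are assumptions for this example only and are not taken from this disclosure.

```python
# Illustrative-only environmental alert checks; thresholds are made-up examples.
def environmental_alerts(readings: dict) -> list[str]:
    alerts = []
    if readings.get("co2_ppm", 0) > 1000:            # assumed "safe" ceiling
        alerts.append("CO2 in the sleep space exceeds the configured level")
    if readings.get("voc_index", 0) > 200:           # assumed diaper-change proxy
        alerts.append("VOC reading suggests the infant may need changing")
    if not 18.0 <= readings.get("temp_c", 21.0) <= 22.0:
        alerts.append("Sleep-space temperature is outside the comfort range")
    return alerts

# Example usage:
# environmental_alerts({"co2_ppm": 1200, "voc_index": 50, "temp_c": 23.5})
```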
[0061] In some examples, a display screen 110 may be used to provide information gathered by the infant monitoring device 100 that may be of interest to a parent, such as how many times the infant awoke during a sleep period and/or how many times the infant was soothed back to sleep by the infant monitoring device 100. The information may include biometric or sleep information about the infant, or environmental information about the infant's bedroom or sleep space. In some examples, the information may include recommendations that the parent might follow to help the infant to sleep through the night, such as suggestions to adjust thermostat or humidifier settings to make a sleep space more comfortable or conducive to sleep.
[0062] The touchscreen 114 or buttons 112 may include any suitable means for delivering inputs to the infant monitoring device 100, including a tactile sensor coupled to the housing 102 for detecting the presence of a user's finger and for detecting pressure, such as when a virtual button on the touchscreen 114 is being pressed by a user. Virtual buttons may be displayed on the touchscreen 114 in a manner familiar to one of ordinary skill in the art to allow an operating system to accept input commands from a user, such as a parent of, or caregiver for, the infant being monitored. In this manner, the infant monitoring device 100 may be configured to accept input commands in a variety of ways and in a variety of contexts, by providing a programmable user interface that may present options and choices to a user via the touchscreen 114. In other examples, the touchscreen 114 may present a permanent display of fixed virtual buttons or include fixed physical buttons 112 for receiving inputs from a user.
[0063] In some examples, the display screen 110 and a touchscreen 114 may not be necessary or may be reduced in function because a user’s smartphone or other external computing device may be used for linking with the infant monitoring device 100, displaying information from the infant monitoring device 100 and/or accepting inputs and delivering them to the infant monitoring device 100 to control its functions.
[0064] FIG. 2 provides an exemplary schematic of an infant monitoring device 100, showing its components. As shown, the infant monitoring device 100 may include one or more main board(s) 200 including a processor 202, memory 204, and interconnects 206. The main board 200 controls the operation of several other connected components, such as the microphone 104, an audio amplifier 208, the speaker 106, the display screen 110, and the buttons 112 or touchscreen 114 for inputting information into the infant monitoring device 100. Communications hardware 210 may include any wired or wireless communications means suitable for use with the infant monitoring device 100, such as WiFi, Bluetooth, LTE, USB, micro USB, or any suitable wired or wireless communications technologies known to one of ordinary skill in the art. The main board 200 also receives information from one or more biometric sensors 108a as well as any number of environmental sensors 108b-e, for detecting environmental conditions, such as ambient light (108b), temperature (108c), humidity (108d), and air quality (108e). The main board 200 also receives inputs based on a user's interactions with a user interface 212, which may include voice activated commands detected by the microphone 104; various audio, alarm, and sleep control inputs received from the buttons 112 or touchscreen 114; or inputs received from a companion application running on a user's (e.g., a parent's) smart phone or other external computing device. The communications hardware 210 may also provide communications with external data sources, such as connected home services providing access to such things as lights, thermostat, external sensors, and any of the sensors 108. External sensors may include, for example, a biometric sensor that sits underneath a mattress pad in an infant's crib, or a frictionless proximity sensor, e.g., via a pressure system, such as a sensor that sits beneath or is integrated into a rug, to frictionlessly detect when a parent or caregiver is attending to an infant.
[0065] The microphone 104 may be any suitable microphone for detecting a crying or fussy infant within a bedroom or sleep space. In some examples, the microphone 104 may be an arrayed microphone that is suitable for distinguishing between sounds produced by the infant monitoring device 100 and sounds produced externally within the infant's bedroom or sleep space. In examples where the microphone 104 includes an arrayed microphone, it may include a plurality of omnidirectional microphones, directional microphones, or any mixture thereof, distributed about the infant monitoring device 100. The microphone 104 may be coupled to the processor 202 for simultaneous processing of the signals from each individual microphone in a manner familiar to one of ordinary skill in the art in order to distinguish between sounds produced by the infant monitoring device 100 and other sounds within the room and to analyze any external noises for use with a state of distress classification algorithm and/or an audio content selection algorithm, as discussed below. The microphone 104 may employ beamforming or other techniques to achieve directionality in a particular direction, for example, towards a sound to be analyzed, e.g., towards a sleeping infant. The microphone 104 may be employed for monitoring the infant's sleep and for receiving spoken user interface commands.
[0066] The biometric sensor 108a remotely detects information about a nearby infant, including motion and respiration (breathing) rate, among other biometric indicators.
In some examples, the biometric sensor 108a may be a contactless biometric sensor which may use an internal RF sensor for directing RF signals toward an infant being monitored, measuring the strength of the backscattered signal, and analyzing the backscattered signal to determine the state of various vital signs of the infant over time. Other contactless biometric sensor techniques may include lasers for measuring minor skin deflections caused by an infant's heart rate and blood pressure; or image-based monitoring systems, whereby skin deflections caused by heartbeats and blood pressure may be observed and analyzed over time through a camera (such as camera 107). The biometric sensor 108a may be configured to report detected biometric information to the processor 202 for storage in the memory 204 and to be analyzed for use in the various subroutines described herein.
[0067] Alternatively, or additionally, the infant monitoring device 100 may employ a direct biometric sensor as is known to one of ordinary skill in the art. A direct biometric sensor may include probes or contact pads that may be disposed on or under the infant's body or within their mattress or sheets in order to mechanically detect biometric information, such as movement, respiration, heart rate, blood pressure, and body temperature, among others. Such sensors may include accelerometers, other motion sensors, or mechanical sensors such as piezoelectric sensors or other vibration sensors. In other examples, a direct biometric sensor may include a blood oxygen sensor (or oximeter). The oximeter may be a sensor that relies on transmissive pulse oximetry and/or reflectance pulse oximetry. The oximeter is useful for detecting blood oxygen level in an infant and for detecting potential hypoxemia in an infant. The biometric information detected by the direct biometric sensor may then be communicated to the infant monitoring device 100 using a wired or wireless connection in a manner known to one of ordinary skill in the art.
[0068] In some examples, the processor 202 detects when an infant is in or is entering a state of distress by detecting their state, e.g., based on a signal(s) provided from the microphone 104 and/or the sensor(s) 108 (e.g., biometric sensor 108a and/or environmental sensor(s) 108b-e). In that regard, the processor 202 may execute a machine learning algorithm, e.g., stored in the memory 204, for determining a state of the infant from the microphone and/or sensor signals. Alternatively, or additionally, the machine learning algorithm may be executed on a processor on another device that is connected to the infant monitoring device 100, e.g., via a network connection, such as a cloud-based processor or a processor on a parent's smart phone.
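The choice between on-device and network-hosted inference described above might be expressed along the lines of the sketch below. The endpoint URL, payload format, and response field are placeholders for illustration only, not an actual service or API of this disclosure.

```python
# Sketch of local-versus-remote inference for the distress classifier.
import json
import urllib.request

def infer_state(features, local_model=None,
                remote_url="https://example.invalid/infer"):
    """Run the classifier on-device if a local model is available, else remotely."""
    if local_model is not None:
        return local_model(features)                 # e.g., an embedded model
    payload = json.dumps({"features": list(features)}).encode("utf-8")
    req = urllib.request.Request(remote_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:        # cloud-hosted classifier
        return json.loads(resp.read())["state"]
```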
[0069] The processor 202 may use the determined state of the infant to select appropriate audio content to soothe the infant. Audio content may be selected via a look-up table (LUT) that associates an infant state with certain audio content, which may be stored in the memory 204 or retrieved from a network (LAN or WAN) resource. The audio content may vary from broadband noise masking audio or recordings of a parent's voice, to algorithmically generated soothing content. Algorithmically generated soothing content may be generated via an algorithm executed by the processor 202 and/or by an algorithm executed by a processor on another device that is connected to the infant monitoring device 100, e.g., via a network connection, such as a cloud-based processor or a processor on a parent's smart phone. In some cases, the audio content may include a recording of the parent's voice.
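A look-up-table selection of this kind might look like the following minimal sketch, in which hypothetical state labels map to lists of candidate content identifiers. The labels and identifiers are assumptions for this example.

```python
# Minimal LUT-based audio selection sketch; names are illustrative only.
AUDIO_LUT = {
    "fussy": ["broadband_noise_low", "parent_voice_recording"],
    "distressed": ["parent_voice_recording", "generated_soothing_track"],
}

def select_audio(state: str, lut: dict = AUDIO_LUT) -> str:
    """Return the first-choice content identifier for the given infant state."""
    options = lut.get(state, ["broadband_noise_low"])   # fall back to masking audio
    return options[0]
```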
[0070] Once the audio content is selected, the processor 202 administers the audio, via the audio amplifier and the speaker 106, to soothe the infant without necessitating intervention from caretakers, such as parents or other caregivers. The processor 202 may also send notifications, via the communications hardware 210, to a parental unit based on the determined state of the infant, as determined from the microphone signal, and/or based on signals from the biometric or environmental sensors. In some cases, the processor 202 may send notifications to the parental unit based on input received from the parent. For example, a parent may preselect, e.g., via a user interface on the infant monitoring unit or via the parent's smart phone, when he/she is to receive notifications. The processor 202 may determine if one or more preselected conditions are met before a notification is sent. The condition may include a state of the infant (e.g., a level of distress). The condition may also be triggered if an environmental sensor indicates that the infant needs a diaper change.
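One way such preselected notification conditions might be evaluated is sketched below. The ordering of distress levels, the preference keys, and the send_text helper are hypothetical stand-ins, not elements defined by this disclosure.

```python
# Sketch of a notification gate driven by parent-configured conditions.
DISTRESS_ORDER = ["calm", "fussy", "distressed"]

def maybe_notify(state: str, diaper_needed: bool, prefs: dict,
                 send_text=print) -> None:
    """Send a notification only when the parent's preselected conditions are met."""
    threshold = prefs.get("notify_at", "distressed")
    if DISTRESS_ORDER.index(state) >= DISTRESS_ORDER.index(threshold):
        send_text(f"Infant appears {state}; soothing audio has started.")
    if diaper_needed and prefs.get("notify_on_diaper", True):
        send_text("Sensor readings suggest a diaper change may be needed.")
```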
[0071] The processor 202 may use changes in the determined state of the infant to adapt the audio content playback over time. The system can provide parents the option to change the audio content features on demand, and/or to let the audio content playback adapt to changes in the infant without intervention, to maintain soothing benefit from the audio.
[0072] In some cases, the processor 202 may selectively stream audio to the parental unit via the communications hardware 210. For example, the parent may elect to receive an audio feed from the infant monitoring device 100 only when the volume of noise exceeds a predetermined threshold, when the noise persists for a predetermined duration of time, and/or when the determined state of the infant meets a predetermined state.
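A simple gate implementing this kind of selective streaming might track how long the triggering condition has persisted, as in the sketch below; the loudness threshold, state label, and hold time are arbitrary example values.

```python
# Sketch of a selective live-stream gate based on persistence of loud sound
# or a distressed state; thresholds are illustrative.
import time

class StreamGate:
    def __init__(self, level_threshold: float = 0.05, hold_seconds: float = 20.0):
        self.level_threshold = level_threshold
        self.hold_seconds = hold_seconds
        self._active_since = None

    def should_stream(self, rms_level: float, state: str, now: float = None) -> bool:
        """Open the stream only after the condition has held long enough."""
        now = time.monotonic() if now is None else now
        if rms_level < self.level_threshold and state != "distressed":
            self._active_since = None          # condition cleared; reset the timer
            return False
        if self._active_since is None:
            self._active_since = now
        return (now - self._active_since) >= self.hold_seconds
```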
[0073] FIG. 3 illustrates an example system 300 for monitoring and soothing a crying or fussy infant. The system 300 detects a state of distress of the infant and administers appropriate audio to soothe them. The system 300 may also selectively output indications of sounds of importance of which a parent or caregiver would like to be notified.
[0074] The system 300 includes, inter alia, the infant monitoring device 100, an audio output device 302a, a bedside unit 304, and a smart device 306 (e.g., a smart phone). The audio output device 302a outputs masking sounds and allows real-time audio to be piped through to the user. In aspects, the audio output device 302a is configured to simultaneously output masking sounds and real-time audio, a version of the real-time audio, or an alert.
[0075] While the audio output device 302a is illustrated as a pair of in-ear audio sleepbuds, the audio output device may be any personal audio output device. Examples include wearable or non-wearable audio output devices such as, for example, over-the-ear headphones, an audio sleep mask, audio eyeglasses or frames, around-ear audio devices, open-ear audio devices (such as shoulder-worn or body-worn audio devices), audio wrist watches, a speaker, a portable bedside unit, or the like.
[0076] The audio output device 302a includes at least one acoustic transducer (also known as driver or speaker) for outputting sound. The acoustic transducer(s) may be configured to transmit audio through air and/or through bone (e.g., via bone conduction, such as through the bones of the skull). In an aspect, the audio output device includes one or more microphones to detect sound/noise in the vicinity of the device to enable active noise reduction (ANR). In aspects, the audio output device includes hardware and circuitry including processor(s)/processing system and memory configured to implement one or more sound management capabilities or other capabilities including, but not limited to, noise cancelling circuitry and/or noise masking circuitry and other sound processing circuitry. The noise cancelling circuitry is configured to reduce unwanted ambient sounds external to the audio output device by using active noise cancelling.
The sound masking circuitry is configured to reduce distractions by playing masking sounds via the speakers of the audio output device.
[0077] In an aspect, the audio output device 302a is an Internet-of-Things (IoT) device. The audio output device 302a receives data, commands, and audio from a hub 308a. The hub 308a sends and receives information from other devices in the system 300 and relays instructions to the audio output device 302a. As described below, the hub 308a receives audio, commands, or data from the infant monitoring device 100, bedside unit 304, and/or software interface of a smart device (user device) 306 and transmits instructions to the audio output device 302a. In aspects, the audio output device 302a includes the processing circuitry of the hub 308a and directly communicates with one or more of the other devices in the system 300. In yet other aspects, one or more of the infant monitoring device 100, the bedside unit 304, and/or the software interface of the smart device 306 perform features of the hub 308a.
[0078] The infant monitoring device 100 is a monitoring unit that collects information regarding at least one of audio, video, motion, or the environment from a location that is remote to the audio output device 302a. Audio refers to raw data collected from the infant monitoring device 100, data filtered based on user-set thresholds such as volume, duration or classification, an alert, or an algorithmic analysis of noises in the environment of the infant monitoring device 100.
[0079] In addition to collecting audio, in an example, the infant monitoring device 100 collects video and other data from the location remote to the user. In aspects, the infant monitoring device 100 is configured to collect biometric information associated with the child such as, for example, a breathing (respiration) rate or the child's temperature. In aspects, the infant monitoring device 100 is configured to detect movement of the child or characteristics of the room such as temperature, humidity, or carbon monoxide level of the room.
[0080] As mentioned above, the infant monitoring device 100 is configured to soothe infants who are entering a state of distress by detecting their state and administering appropriate audio to soothe them. The infant monitoring device 100 may detect the state of distress of the infant based on an analysis of signals (microphone signals) received from the onboard microphone 104. In that regard, the infant monitoring device 100 may feed the microphone signal into a machine learning algorithm that is configured to determine a state of distress of the infant from the audio provided from the mic. The machine learning algorithm may be executed locally on the infant monitoring device 100 or the machine learning algorithm may be executed in the cloud 310 and the microphone signals may be sent to the cloud 310 for processing.
[0081] The infant monitoring device 100 may communicate with one or more connected home/Internet-of-Things (IoT) devices 312, such as a thermostat, lights, a humidifier, air purifier, and/or an aromatherapy device. In some cases, the infant monitoring device 100 may send notifications to the user/parent regarding readings from the IoT devices 312, such as ambient temperature, light, or humidity conditions in the infant's sleep space. The notification may include suggestions on how to make the sleep space more comfortable to the infant, such as by adjusting the temperature, lighting, or operation of a humidifier. In some cases, the infant monitoring device 100 may automatically adjust one or more environmental conditions via control of an IoT device 312. For example, the infant monitoring device 100 may automatically adjust a connected thermostat if the temperature in the infant's sleep space is determined to be too warm. Likewise, a connected IoT light could be controlled to adjust lighting in the infant's sleep space, a connected IoT humidifier could be controlled to adjust humidity in the infant's sleep space, or a connected IoT air purifier could be controlled to improve air quality in the infant's sleep space. The infant monitoring device 100 may activate a connected IoT aromatherapy device to help soothe a fussy infant. In some cases, decisions to send notifications to a parent and/or control a connected home device may be based on biometric measurements of the infant.
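As an illustration only, an automatic adjustment rule of this kind might be written as follows. The IoTClient interface and the temperature and light targets are assumptions for this example, since the actual connected-home API would depend on the integration used.

```python
# Sketch of automatic environment adjustment via a hypothetical IoT client.
class IoTClient:
    def set_thermostat(self, celsius: float) -> None:
        print(f"thermostat -> {celsius:.1f} C")
    def set_light_level(self, percent: int) -> None:
        print(f"light -> {percent}%")

def adjust_environment(readings: dict, client: IoTClient,
                       target_temp_c: float = 20.5) -> None:
    """Nudge connected devices toward a sleep-friendly environment."""
    if readings.get("temp_c", target_temp_c) > target_temp_c + 1.0:
        client.set_thermostat(target_temp_c)        # sleep space too warm
    if readings.get("lux", 0) > 10:
        client.set_light_level(0)                   # dim lights for sleep
```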
[0082] The infant monitoring device 100 may receive signals from external sensors 314, such as a biometric sensor that sits beneath a mattress in the infant's crib or a pressure sensor that is integrated within or disposed beneath a rug, e.g., for detecting presence of a parent. As mentioned above, measurements from the external biometric sensor may be used to inform the selection of audio for soothing the infant.
[0083] The infant monitoring device 100 may be in communication with a remote service, such as a remote care specialist (generally "316"), e.g., a night nurse 316a or physician 316b, that may provide advice, e.g., via text notifications to the parent. For example, the infant monitoring device may deliver a live audio stream from the infant's bedroom or sleep space to a night nurse, who may then provide recommendations to the parent, e.g., via text to the parent's smart device 306, suggesting things that the parent might do to help soothe the infant, or to alleviate distress in the future. In some cases, the infant monitoring device 100 may alternatively, or additionally, send biometric or environmental sensor data to the remote service to help inform the advice received.
[0084] In an example, the infant monitoring device 100 is placed in an infant's bedroom and the user of the audio output device 302a sleeps in a different room. The infant monitoring device 100 engages in bidirectional communication with the bedside unit 304, the cloud 310, and the software interface running on a smart device 306.

[0085] The bedside unit 304 is a portable unit that is configured to receive audio from the infant monitoring device 100. In addition to receiving audio, in an example, the bedside unit 304 is configured to receive video and/or other data from the infant monitoring device 100, the cloud 310, and/or software interface running on the smart device 306. The bedside unit engages in bidirectional communication with the infant monitoring device 100, the cloud 310, software interface running on the smart device 306, and hub 308a. In aspects, the system 300 does not include a hub 308a for the audio output device 302a. Instead, the bedside unit 304 performs the functions of the hub 308a. In aspects, the bedside unit 304 includes a screen 318 for outputting a video stream transmitted from another device in the system, such as the infant monitoring device 100. In aspects, as described below, the screen 318 provides a user interface for the user to enter preferences regarding when to receive an audio stream or notifications from the infant monitoring device 100.
[0086] The smart device 306 uses a software interface to provide a user interface (via an application). The smart device 306 engages in bidirectional communication with the bedside unit 304, infant monitoring device 100, the cloud 310, and hub 308a.
[0087] The user interface enables the user (e.g., a parent or caregiver) to input user preferences regarding when to receive an audio stream (e.g., live audio feed from the infant's room delivered to the audio output device 302a and/or bedside unit 304) or notifications (e.g., text notifications delivered to the bedside unit 304 or the smart device 306) from the infant monitoring device 100. The user may elect to receive an audio stream only when the sound (e.g., the infant crying) exceeds a predetermined decibel value or exhibits certain tonal qualities for a configurable amount of time. In an example, the user may elect to receive an audio stream from the infant monitoring device 100 only if the infant distress persists for a predetermined duration of time after the automated soothing audio playback starts; e.g., the parent only receives the audio stream of the distressed infant if the soothing audio does not effectively soothe the infant. In an example, the user may elect to receive an audio stream or notification from the infant monitoring device 100 only if the state of infant distress meets or exceeds some predetermined threshold value or classification. The user may elect to receive an audio stream or notification when the onboard environmental sensors indicate the infant needs a diaper change, and/or when the biometric sensors indicate an irregular heart rate or breathing rate, or a body temperature above or below a predetermined threshold.
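One of these preference rules, alerting the parent only if distress persists for a set time after automated soothing begins, might be modeled as in the sketch below. The field names and default values are assumptions for illustration only.

```python
# Sketch of a "persists after soothing starts" preference rule.
from dataclasses import dataclass

@dataclass
class AlertPreferences:
    min_decibels: float = 65.0               # only alert above this level (assumed)
    persist_after_soothing_s: float = 120.0  # grace period after soothing begins

def should_alert(level_db: float, seconds_since_soothing: float,
                 still_distressed: bool, prefs: AlertPreferences) -> bool:
    """True only if the infant remains distressed after the grace period."""
    if level_db < prefs.min_decibels:
        return False
    return still_distressed and seconds_since_soothing >= prefs.persist_after_soothing_s
```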
[0088] In another example, the user may not have nighttime caregiving responsibilities on certain days of the week or may have a partner assume caregiving responsibilities at some point during the user’s sleep period. The user may input a schedule including days of the week or hours during a sleep period for each day that he or she would like to be notified and/or receive an audio stream from the infant monitoring device 100. In aspects, the triggers (settings) that determine when the user will receive an audio stream or notification from the infant monitoring device 100 vary based on the day of the week or time of day. In that regard, the application may allow the user to enter preferences for the combination of day, time, and sounds of importance.
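A schedule of this kind might be represented as a mapping from weekday to hour ranges, as in the following sketch; the schedule format and the example values are assumptions made for illustration.

```python
# Sketch of schedule-based gating of notifications by day of week and hour.
from datetime import datetime
from typing import Optional

def on_duty(schedule: dict, when: Optional[datetime] = None) -> bool:
    """Return True if the user has asked to be notified at this day and hour."""
    when = when or datetime.now()
    for start_hour, end_hour in schedule.get(when.strftime("%A"), []):
        if start_hour <= when.hour < end_hour:
            return True
    return False

# Example schedule: on duty weeknights from 9 p.m. to midnight.
example_schedule = {day: [(21, 24)] for day in
                    ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]}
```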
[0089] In an example, the user may set a mode for both the user's personal audio output device and also for the personal audio output device of another user in the system. The ability for a user to set modes for both personal audio output devices 302a and 302b facilitates handoff during a sleep period. For example, when a second user assumes night-time care giving responsibilities, the second user may set the first user’s mode to an “off” or “sleep” state and set the second user's mode to an “aware" state. This may help ensure that at least one of multiple personal audio output devices in the system 300 are set to an “aware" state, especially during transitions from one primary caregiver to another. In aspects, the application allows the user to enter preferences regarding any combination of day, time, and sounds of importance for not only the user but also for another user of the system 300. For example, the application enables the user to enter preferences for the user of personal audio output devices 302a and 302b.
[0090] In aspects, the software interface is run on the bedside unit 304, thereby eliminating the need for the smart device 306. In an aspect, a user interface is provided on both the smart device 306 and the bedside unit 304. The user interface running on the bedside unit 304 may offer fewer customization features as compared to the user interface running on the smart device 306. In an aspect, the user has the option to set the application to an “aware" mode or a “sleep" or “off" mode on both the smart device 306 and the bedside unit 304. Providing, at least, some user input options on the bedside unit 304 helps to reduce the need for a user to look at a bright screen of the smart device 306 before going to sleep.
[0091] The cloud 310 refers to a cloud/remote/network server where applications and data are maintained and made available using the Internet. The user's preferences input using the software interface can be stored on the cloud 310.
[0092] In an example, the user input received via the software interface is transmitted by the smart device 306 to the cloud 310. The cloud 310 maintains the user's preferences. In another example, the smart device 306 provides one of the infant monitoring device 100 or bedside unit 304 with the preferences input by the user, and the infant monitoring device 100 or bedside unit 304 provides the cloud 310 with the user's preferences. Based on the user’s preferences and real-time sounds collected by the infant monitoring device 100, the cloud 310 determines the state of infant distress, selects audio for playback by the infant monitoring device 100 to soothe the infant, and when to output indications (an audio stream of the infant's sleep space or notification, e.g., an audible or text notification) to alert the user/parent of the state of distress. Because the devices in the system 300 communicate with each other and in turn the cloud 310, the user’s preferences and any instructions to output an indication of a state of infant distress may be relayed between devices and the cloud before arriving at the intended location or device in the system 300.
[0093] In an example, the cloud 310 supports artificial intelligence (AI)-based capabilities. The AI-based capabilities help classify sounds in the infant's bedroom or sleep space that may be indicative of infant distress. In aspects, the AI-based capabilities help determine which of multiple users to alert in response to a detected state of infant distress based on historical data and/or the type of detected sound.
[0094] In an aspect, the system 300 does not include the cloud 310. When the cloud 310 is not accessed, the operations are performed off-line or on a local network. One or more of the bedside unit 304 or infant monitoring device 100 maintain user preferences and process data to identify and classify sounds indicative of infant distress. In an example, the bedside unit 304 or the infant monitoring device 100 have a computer chip programmed to cause the bedside unit 304, infant monitoring device 100, or combination of bedside unit 304 and infant monitoring device 100 to perform the actions for protecting a user's sleep by outputting masking sounds and selectively notifying the user when the infant is in a state of distress that necessitates the user's attention as described herein.
[0095] FIG. 4 shows an example of a method 400 for assessing an infant's state of distress and soothing. At step 402, an infant monitoring device is provided near an infant. The infant monitoring device includes a microphone for detecting sounds within a bedroom or sleep space; a processor; and memory, which may store, among other things, a machine learning algorithm for detecting an infant's state of distress and an algorithm for selecting audio content (content selection algorithm) for playback based on the detected state of distress for execution by the processor, as described above; and optionally, one or more biometric sensors and/or environmental sensors. In step 404, the infant monitoring device is initialized.
[0096] Parent or caregiver (generally “user") input may also be received (step 406). The input may correspond to settings. For example, the user may select when they want to receive a live audio stream or notification from the infant monitoring device. The user may elect to receive a live audio stream and/or notification only under prescribed circumstances, thus allowing the user to avoid being disrupted when their attention is not necessary. The input may be received via a user's bedside unit, via the user's smart phone, or via an interface on the infant monitoring unit.
[0097] Once initialized, the infant monitoring device may begin reading signals from the microphone (step 408) in order to determine sounds produced by the infant. Any detected sound information from the infant may be fed into a machine learning algorithm in order to categorize a state of distress of the infant (step 410). The state of distress is then fed into an audio selection algorithm that selects appropriate audio for soothing the infant based on the state of distress (step 412), and the selected audio content is rendered via the speaker of the infant monitoring device to soothe the infant (step 414). The state of distress can also be compared to the user's settings to determine whether the current state of distress satisfies the criteria for alerting the user (step 416), and, if so, the audio from the microphone can be live streamed to a user's device (e.g., a smart phone, audio output device, or bedside unit) or a notification (e.g., text message) can be sent to a user's device (step 418). As indicated by loop 430, the infant monitoring device may continuously monitor the signals from the microphone in order to adaptively update the detected state of distress and update the selected audio content accordingly, i.e., based on changes in the detected state of distress.
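Condensed into a short Python sketch, the loop of steps 408 through 418 (and loop 430) might look as follows. All helper objects here are hypothetical stand-ins for the algorithms described in this disclosure, injected as parameters so the loop itself stays generic.

```python
# Condensed sketch of the monitoring loop of FIG. 4; helpers are hypothetical.
def monitoring_loop(mic, speaker, classifier, select_audio, alert_user,
                    user_prefs, running=lambda: True):
    current_content = None
    while running():
        samples = mic.read()                          # step 408: read microphone
        state = classifier(samples)                   # step 410: classify distress
        content = select_audio(state)                 # step 412: pick soothing audio
        if content != current_content:                # loop 430: adapt over time
            speaker.play(content)                     # step 414: render audio
            current_content = content
        if user_prefs.matches(state, samples):        # step 416: check user criteria
            alert_user(state, samples)                # step 418: stream or notify
```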
[0098] In some cases, the infant monitoring device may also begin taking biometric and/or environmental readings relevant to the infant and/or the infant's sleep space (step 420). The biometric and/or environmental readings may be utilized to inform the machine learning algorithm to assist in assessing the state of distress of the infant, and/or to inform the content selection algorithm to aid in the selection of audio content for playback to soothe the infant. Alternatively, or additionally, the biometric and/or environmental readings may be compared to the user's settings to determine whether the current state of distress satisfies the criteria for alerting the user (step 416).
[0099] In step 422, the infant monitoring device may determine whether a parent or caregiver is attending to the infant, and, if so, the infant monitoring device may be programmed to halt playback of the soothing audio (step 424). Presence detection may be performed by way of computer vision, e.g., using a camera provided on the infant monitoring device, and/or via an external pressure sensor that may be incorporated within or disposed beneath a rug near the infant in the infant’s bedroom or sleep space.
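A minimal sketch of this pause-on-presence behavior follows; the camera and pressure-sensor inputs, the weight threshold, and the speaker interface are illustrative assumptions rather than elements of this disclosure.

```python
# Sketch of step 422/424: pause soothing audio while a caregiver is present.
def update_playback_for_presence(camera_sees_adult: bool,
                                 rug_pressure_kg: float,
                                 speaker, threshold_kg: float = 20.0) -> bool:
    """Return True if playback was paused because a caregiver was detected."""
    caregiver_present = camera_sees_adult or rug_pressure_kg > threshold_kg
    if caregiver_present:
        speaker.pause()          # step 424: halt soothing audio
    return caregiver_present
```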
[00100] In some cases, the state of distress can also be compared to device settings to determine whether the current state of distress satisfies the criteria for notifying a remote care specialist, such as a night nurse or physician (step 426), and, if so, the audio from the microphone, and/or readings from the biometric and/or environmental sensors may be transmitted to the remote care specialist (step 428). In some cases, the remote care specialist may send (e.g., via text message) suggestions to the parent for soothing the infant, or for making the infant's sleep space more conducive to sleep in the future.
[00101] In some implementations, the state of distress and/or the selected audio content could be passed to experts to train a model, and, at a future date, diagnose conditions based on the information provided.

[00102] In some cases, the microphone readings, selected audio content, biometric readings, and/or environmental readings may be used to learn about infants in aggregate over time and/or about an individual infant over time. These learnings may be used to update the infant monitoring device, e.g., update the state of distress classification algorithm and/or the audio content selection algorithm.
[00103] In the preceding, reference is made to aspects presented in this disclosure. However, the scope of the present disclosure is not limited to specific described aspects. Aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “component," “circuit,” “module" or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
[00104] The flowchart and block diagrams in the Figures illustrate the architecture, functionality and operation of possible implementations of systems, methods and computer program products according to various aspects. In this regard, each block in the flowchart or block diagrams may represent a module, segment or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
[00105] Implementations of the systems and methods described above comprise computer components and computer-implemented steps that will be apparent to those skilled in the art. For example, it should be understood by one of skill in the art that the computer-implemented steps may be stored as computer-executable instructions on a computer-readable medium such as, for example, floppy disks, hard disks, optical disks, Flash ROMS, nonvolatile ROM, and RAM. Furthermore, it should be understood by one of skill in the art that the computer-executable instructions may be executed on a variety of processors such as, for example, microprocessors, digital signal processors, gate arrays, etc. For ease of exposition, not every step or element of the systems and methods described above is described herein as part of a computer system, but those skilled in the art will recognize that each step or element may have a corresponding computer system or software component. Such computer system and/or software components are therefore enabled by describing their corresponding steps or elements (that is, their functionality), and are within the scope of the disclosure.
[00106] One of skill in the art will appreciate that the systems, methods and apparatuses outlined above may include various hardware and operating software, familiar to those of skill in the art, for running software programs as well as communicating with and operating any devices, including, for example, a biometric sensor, environmental sensors, a user interface, a computer network, a sound system, and any other internal or external devices. Such computerized systems may also include memory and storage media, and other internal and external components which may be used for carrying out the operations of this disclosure. Moreover, such computer systems may include one or more processors for processing and controlling the operation of the computer system, thus, embodying the processes of this disclosure. To that end, the processor, associated hardware and communications systems may carry out the various examples presented herein.
[00107] While the disclosed subject matter is described herein in terms of certain exemplary implementations, those skilled in the art will recognize that various modifications and improvements can be made to the disclosed subject matter without departing from the scope thereof. As such, the particular features claimed below and disclosed above can be combined with each other in other manners within the scope of the disclosed subject matter such that the disclosed subject matter should be recognized as also specifically directed to other implementations having any other possible permutations and combinations. It will be apparent to those skilled in the art that various modifications and variations can be made in the systems and methods of the disclosed subject matter without departing from the spirit or scope of the disclosed subject matter. Thus, it is intended that the disclosed subject matter include modifications and variations that are within the scope of the appended claims and their equivalents. Accordingly, other implementations are within the scope of the following claims.

Claims

What is claimed is:
1. An infant monitoring device comprising: a microphone; a speaker; a processor; and memory storing computer-readable instructions, which, when executed, cause the processor to: detect a state of distress of an infant based on signals received from the microphone, select audio content for rendering based, at least in part, on the detected state of distress of the infant, and cause the speaker to render the selected audio content, thereby to soothe the infant.
2. The infant monitoring device of claim 1, wherein the memory stores a machine learning algorithm that is configured to detect the state of distress by categorizing sounds of the infant picked up by the microphone.
3. The infant monitoring device of claim 1, further comprising communication hardware coupled to the processor, wherein the instructions further cause the processor to transmit a live stream of audio picked up by the microphone to an other device via the communication hardware.
4. The infant monitoring device of claim 3, wherein the processor is configured to transmit the live stream of the audio picked up by the microphone based on a determination that the state of distress meets one or more predetermined criteria.
5. The infant monitoring device of claim 4, wherein the predetermined criteria correspond to preferences received from a user of the infant monitoring device.
6. The infant monitoring device of claim 1, further comprising communication hardware coupled to the processor, wherein the instructions further cause the processor to transmit a notification to another device via the communication hardware, based on a determination that the state of distress meets one or more predetermined criteria.
7. The infant monitoring device of claim 6, wherein the notification comprises a text message.
8. The infant monitoring device of claim 1, further comprising one or more biometric sensors and/or environmental sensors.
9. The infant monitoring device of claim 8, wherein the state of distress is determined based on readings from the one or more biometric sensors and/or environmental sensors.
10. The infant monitoring device of claim 8, wherein the processor is configured to transmit a live stream of the audio picked up by the microphone or a notification to an other device based on a determination that readings from the one or more biometric and/or environmental sensors satisfy one or more predetermined criteria.
11. The infant monitoring device of claim 10, wherein the predetermined criteria correspond to preferences received from a user of the infant monitoring device.
12. The infant monitoring device of claim 8, wherein the infant monitoring device is configured to control one or more Internet-of-Things (IoT) devices based on readings from the one or more biometric and/or environmental sensors.
13. The infant monitoring device of claim 1, wherein the audio content is selected from the group consisting of broadband noise masking audio, recordings of a parent's voice, and/or algorithmically generated soothing content.
14. The infant monitoring device of claim 1, wherein the audio content is stored locally in the memory, is retrieved from a remote location via a network connection or is generated via a content generation algorithm.
15. The infant monitoring device of claim 1, wherein the audio content is generated via a content generation algorithm that is stored in the memory and is executed by the processor.
16. The infant monitoring device of claim 1, wherein the audio content is generated via a content generation algorithm that is executed remotely and the audio content is transmitted to the infant monitoring device via a network connection.
17. The infant monitoring device of claim 1, wherein the instructions cause the processor to adaptively update the detected state of distress of an infant based on changes in the signals received from the microphone, and change the selected audio content for rendering based, at least in part, on the update to the detected state of distress of the infant.
18. The infant monitoring device of claim 1, wherein the infant monitoring device is configured to detect a user attending to the infant, and, in response, automatically stops the rendering of the audio content.
19. The infant monitoring device of claim 18, further comprising a camera, wherein the infant monitoring device is configured to detect the user attending to the infant via the camera using computer vision.
20. The infant monitoring device of claim 18, wherein the infant monitoring device is configured to receive signals from an external sensor and detects the user attending to the infant based on the signals received from the external sensor.
21. The infant monitoring device of claim 20, wherein the external sensor comprises a pressure sensor.
22. A method of soothing an infant comprising: reading signals from a microphone in proximity to the infant, detecting a state of distress of an infant based on signals received from the microphone, selecting audio content for rendering based, at least in part, on the detected state of distress of the infant, and rendering the selected audio content, thereby to soothe the infant.
23. The method of claim 22, wherein the state of distress is detected via a machine learning algorithm that is configured to detect the state of distress by categorizing sounds of the infant picked up by the microphone.
24. The method of claim 23, wherein the machine learning algorithm is executed on a device located proximal to the infant.
25. The method of claim 23, wherein the machine learning algorithm is executed on a network device that is located remotely from the infant.
26. The method of claim 22, wherein the audio content comprises broadband noise masking audio, recordings of a parent's voice, and/or algorithmically generated soothing content.
27. The method of claim 26, wherein the audio content is generated via a content generation algorithm that is executed on a device located proximal to the infant.
28. The method of claim 26, wherein the audio content is generated via a content generation algorithm that is executed on a network device that is located remotely from the infant.
29. The method of claim 22, wherein the content is selected via an audio content selection algorithm that is executed on a device located proximal to the infant.
30. The method of claim 22, wherein the content is selected via an audio content selection algorithm that is executed on a network device that is located remotely from the infant.
31. The method of claim 22, further comprising reading one or more biometric sensors and/or environmental sensors.
32. The method of claim 31, wherein the state of distress is determined based on the readings from the one or more biometric sensors and/or environmental sensors.
33. The method of claim 31, further comprising controlling one or more Internet-of-Things (IoT) devices based on the readings from the one or more biometric sensors and/or environmental sensors.
34. The method of claim 22, further comprising transmitting a live stream of audio picked up by the microphone to a parent's device.
35. The method of claim 34, wherein the live stream of the audio picked up by the microphone is transmitted based on a determination that the state of distress meets one or more predetermined criteria.
36. The method of claim 35, wherein the predetermined criteria correspond to preferences received from the parent.
37. The method of claim 22, further comprising transmitting a notification regarding the state of distress to a parent's device.
38. The method of claim 37, wherein the notification is transmitted based on a determination that the state of distress meets one or more predetermined criteria.
39. The method of claim 38, wherein the predetermined criteria correspond to preferences received from the parent.
40. The method of claim 22, wherein the step of detecting a state of distress of an infant based on signals received from the microphone comprises continuously monitoring the microphone signal and adaptively updating the detected state of distress of the infant based on changes in the signals received from the microphone over time, and wherein the step of selecting audio content for rendering based, at least in part, on the detected state of distress of the infant comprises changing the selected audio content for rendering based, at least in part, on updates to the detected state of distress of the infant over time.
41. The method of claim 22, further comprising detecting the presence of a parent attending to the infant, and, in response, automatically stopping the rendering of the audio content.
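
A minimal sketch, in Python, of the adaptive behavior recited in claims 17 and 40: the microphone is re-checked on a fixed cadence and the rendered content is swapped whenever the detected state of distress changes. The helper names read_mic, classify_distress, select_audio, and render are placeholders invented for this illustration and are not taken from the application.

import time

def adaptive_soothing_loop(read_mic, classify_distress, select_audio, render, poll_s=2.0):
    # Continuously re-evaluate the infant's state and change the rendered
    # audio whenever the detected state of distress changes.
    current_state = None
    while True:
        state = classify_distress(read_mic())   # latest microphone frame -> distress label
        if state != current_state:              # state changed, so adapt the content
            current_state = state
            render(select_audio(state))
        time.sleep(poll_s)                      # polling interval in seconds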
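
For the attend-and-stop behavior of claims 18-21 and 41, one plausible check combines a computer-vision result with an external pressure sensor; the threshold value and argument names below are assumptions made for the sketch, not figures from the application.

def caretaker_present(camera_sees_person=False, pressure_reading=None, pressure_threshold=5.0):
    # Either a person detected by the camera or sufficient weight on an
    # external pressure sensor (e.g. a crib-side mat) counts as a caretaker
    # attending to the infant; rendering would then be stopped.
    if camera_sees_person:
        return True
    if pressure_reading is not None and pressure_reading > pressure_threshold:
        return True
    return False

In the adaptive loop sketched above, this check could simply gate the call to render.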
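
Claim 23 does not specify the machine learning model, so the sketch below uses two toy features (RMS energy and zero-crossing rate) and, in the absence of a supplied model, a crude threshold heuristic as a stand-in for a trained classifier; the model argument assumes an sklearn-style predict() interface.

import math

def frame_features(samples):
    # Tiny feature vector for one audio frame: RMS energy and zero-crossing rate.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / len(samples)
    return rms, zcr

def categorize_cry(samples, model=None):
    # Map a frame of infant sounds to a coarse distress category.
    rms, zcr = frame_features(samples)
    if model is not None:
        return model.predict([[rms, zcr]])[0]    # assumed sklearn-style interface
    if rms < 0.05:
        return "calm"                            # quiet frame
    return "crying" if zcr > 0.1 else "fussing"  # noisier frames split by zero-crossing rate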
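
Of the content types named in claim 26, broadband masking noise is the simplest to illustrate; this sketch generates white noise smoothed by a one-pole low-pass filter. The sample rate and smoothing constant are arbitrary choices for the example.

import random

def broadband_noise(duration_s, sample_rate=16000, smoothing=0.2, seed=None):
    # White noise passed through a one-pole low-pass filter for a softer,
    # darker masking sound; output samples lie roughly in [-1, 1].
    rng = random.Random(seed)
    out, prev = [], 0.0
    for _ in range(int(duration_s * sample_rate)):
        white = rng.uniform(-1.0, 1.0)
        prev = prev + smoothing * (white - prev)
        out.append(prev)
    return out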
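
Claims 31-33 tie sensor readings to Internet-of-Things control without naming any protocol, so this sketch only maps a readings dictionary to (device, command) pairs for a hypothetical home hub; the keys, device names, and comfort ranges are assumptions.

def iot_adjustments(readings, comfort_temp_c=(20.0, 22.5), max_lux=10.0):
    # Translate environmental readings into commands for nursery IoT devices.
    commands = []
    temp = readings.get("temp_c")
    if temp is not None:
        if temp > comfort_temp_c[1]:
            commands.append(("thermostat", "cool"))
        elif temp < comfort_temp_c[0]:
            commands.append(("thermostat", "heat"))
    if readings.get("lux", 0.0) > max_lux:
        commands.append(("nursery_light", "dim"))
    return commands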
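
Claims 35, 36, 38, and 39 condition live streaming and notifications on parent-configured criteria; one simple reading of those criteria is a severity ranking plus a minimum duration, as sketched below with invented preference keys.

def should_notify(state, duration_s, prefs):
    # True when the detected distress is at least as severe as the parent's
    # configured minimum and has persisted for the configured duration.
    severity = {"calm": 0, "fussing": 1, "crying": 2, "wailing": 3}
    min_state = prefs.get("min_state", "crying")
    min_duration = prefs.get("min_duration_s", 60)
    return severity.get(state, 0) >= severity.get(min_state, 2) and duration_s >= min_duration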
PCT/US2020/061314 2019-11-20 2020-11-19 Infant monitoring and soothing WO2021102158A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962938188P 2019-11-20 2019-11-20
US62/938,188 2019-11-20

Publications (1)

Publication Number Publication Date
WO2021102158A1 true WO2021102158A1 (en) 2021-05-27

Family

ID=73835806

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/061314 WO2021102158A1 (en) 2019-11-20 2020-11-19 Infant monitoring and soothing

Country Status (1)

Country Link
WO (1) WO2021102158A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7525440B2 (en) 2005-06-01 2009-04-28 Bose Corporation Person monitoring
EP1872818A1 (en) * 2006-06-20 2008-01-02 Future Acoustic LLP Electronic baby-soothing device
US20170072162A1 (en) * 2015-09-11 2017-03-16 International Business Machines Corporation Comforting system with active learning
GB2571125A (en) * 2018-02-19 2019-08-21 Chestnut Mobile Ltd Infant monitor apparatus

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230102445A1 (en) * 2021-09-28 2023-03-30 Sadie Griffith Toy-Shaped Wireless Baby Monitor
CN114061072A (en) * 2021-10-21 2022-02-18 青岛海尔空调器有限总公司 Air conditioner infant auxiliary nursing control method and device and air conditioner

Similar Documents

Publication Publication Date Title
US9538959B2 (en) System and method for human monitoring
US10517527B2 (en) Sleep quality scoring and improvement
CN109936999B (en) Sleep assessment using a home sleep system
KR102423752B1 (en) Apparatus and Method for assisting sound sleep
EP2120712B1 (en) Arrangement and method to wake up a sleeping subject at an advantageous time instant associated with natural arousal
US20170258398A1 (en) Child monitoring system
CN105807674B (en) Intelligent wearable device capable of controlling audio terminal and control method thereof
US20120299732A1 (en) Baby monitor for use by the deaf
US10617364B2 (en) System and method for snoring detection using low power motion sensor
US20190231256A1 (en) Apparatus and associated methods for adjusting a user's sleep
US11141556B2 (en) Apparatus and associated methods for adjusting a group of users' sleep
KR20180083188A (en) Environment control method using wearable device with bio signal measurement and IoT(Internet of Things)
WO2021102158A1 (en) Infant monitoring and soothing
CN110575139A (en) Sleep monitoring method and equipment
JP2019512331A (en) Timely triggering of measurement of physiological parameters using visual context
WO2016195805A1 (en) Predicting infant sleep patterns and deriving infant models based on observations associated with infant
WO2021064557A1 (en) Systems and methods for adjusting electronic devices
CN112401566A (en) Intelligent infant care system and method
GB2549099A (en) Monitor and system for monitoring
US20210190351A1 (en) System and method for alerting a caregiver based on the state of a person in need
WO2019207570A1 (en) An autonomous intelligent mattress for an infant
US10098594B2 (en) Portable monitoring device, system and method for monitoring an individual
JP7296626B2 (en) Information processing device and program
JP7254345B2 (en) Information processing device and program
CN116997973A (en) Intelligent infant monitoring system and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20824831

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20824831

Country of ref document: EP

Kind code of ref document: A1