WO2019165271A1 - Adapting media content to a sensed state of a user - Google Patents


Info

Publication number
WO2019165271A1
Authority
WO
WIPO (PCT)
Prior art keywords
predefined
detecting
motion data
identifying
heart rate
Prior art date
Application number
PCT/US2019/019241
Other languages
French (fr)
Inventor
Andreja Djokovic
Zachary Norman
Nanea Reeves
Joseph ULDRIKS
Walter John Greenleaf
Original Assignee
TRIPP, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TRIPP, Inc. filed Critical TRIPP, Inc.
Priority to EP19757015.3A priority Critical patent/EP3755210A4/en
Publication of WO2019165271A1 publication Critical patent/WO2019165271A1/en

Links

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1102 Ballistocardiography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024 Detecting, measuring or recording pulse rate or heart rate
    • A61B5/02444 Details of sensor
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/0816 Measuring devices for examining respiratory frequency
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/113 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb occurring during breathing
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • A61B5/7207 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal of noise induced by motion artifacts
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • A61B5/7207 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal of noise induced by motion artifacts
    • A61B5/721 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal of noise induced by motion artifacts using a separate sensor to detect motion or using motion information derived from signals other than the physiological signal to be measured
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14 Handling requests for interconnection or transfer
    • G06F13/16 Handling requests for interconnection or transfer for access to memory bus
    • G06F13/1605 Handling requests for interconnection or transfer for access to memory bus based on arbitration
    • G06F13/161 Handling requests for interconnection or transfer for access to memory bus based on arbitration with latency improvement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2562/00 Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
    • A61B2562/02 Details of sensors specially adapted for in-vivo measurements
    • A61B2562/0219 Inertial sensors, e.g. accelerometers, gyroscopes, tilt switches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00 Aspects of interface with display user

Definitions

  • This disclosure relates generally to a media content system, and more specifically, to a media device that provides content based on a detected state of a user.
  • a media processing device adapts content based on a breathing rate detected based on captured audio.
  • the media processing device presents first content on a display device.
  • a microphone captures ambient audio.
  • a frequency domain transformation is performed on a current block of the ambient audio to generate a frequency spectrum of the current block.
  • the frequency spectrum is filtered to generate a filtered frequency spectrum limited to a predefined frequency range associated with breathing noise.
  • One or more peak frequencies in the filtered frequency spectrum are identified.
  • a breath is detected by identifying a pattern of peak frequencies across a range of blocks that meets predefined criteria.
  • the breathing rate is determined based on the detected breath and a history of previously detected breaths.
  • Second content is presented on the display device based on the detected breathing rate falling within a predefined range.
  • a media processing device adapts content based on a breathing rate detected based on motion data.
  • the media processing device presents first content on a display device.
  • Motion data is obtained from an inertial measurement device.
  • the motion data is filtered by applying a smoothing function to the motion data to generate smoothed motion data.
  • a breath is detected based on identifying that the smoothed motion data includes movement constrained to one or more predefined amplitude ranges associated with breathing movement over a predefined time window.
  • a breathing rate is identified based on the detected breath.
  • Second content is presented on the display device responsive to the detected breathing rate falling within a predefined breathing rate range.
  • a media processing device adapts content based on a heart rate detected based on motion data.
  • the media processing device presents first content on a display device.
  • Motion data is obtained from an inertial measurement device.
  • the motion data is filtered by applying a smoothing function to the motion data to generate smoothed motion data.
  • a heart beat is detected based on identifying that the smoothed motion data includes movement within a predefined amplitude range.
  • a heart rate is identified based on the detected heart beat.
  • the second content is presented on the display device responsive to the detected heart rate falling within a predefined heart rate range.
  • Embodiments may include a method, a non-transitory computer-readable storage medium, and a computer device for performing the above-described processes.
  • Figure (or “FIG.”) 1 illustrates an example embodiment of a media system.
  • FIG. 2 illustrates an example embodiment of a media processing device.
  • FIG. 3 illustrates an example embodiment of a process for detecting a breathing rate based on audio.
  • FIG. 4 illustrates an example embodiment of a process for detecting a breathing rate based on motion data.
  • FIG. 5 illustrates an example embodiment of a process for detecting a heart rate based on motion data.
  • FIG. 1 is a block diagram of a media system 100, according to one embodiment.
  • the media system 100 includes a network 120, a media server 130, and a plurality of media processing devices 110.
  • different and/or additional components may be included in the media system 100.
  • the media processing device 110 comprises a computer device for processing and presenting media content such as audio, images, video, or a combination thereof.
  • the media processing device 110 may furthermore detect various inputs including voluntary user inputs (e.g., input via a controller, voice command, body movement, or other conventional control mechanism) and various biometric inputs (e.g., breathing patterns, heart rate, etc.).
  • the media processing device 110 may control the presentation of the media content in response to the inputs.
  • the media processing device 110 may comprise, for example, a mobile device, a tablet, a laptop computer, or a desktop computer.
  • the media processing device 110 may include a head-mounted display such as a virtual reality headset or an augmented reality headset. An embodiment of a media processing device 110 is described in further detail below with respect to FIG. 2.
  • the media server 130 comprises one or more computing devices for delivering media content to the media processing devices 110 via the network 120.
  • the media server 130 may stream media content to the media processing devices 110 to enable the media processing devices 110 to present the media content in real-time or near real-time.
  • the media server 130 may enable the media processing devices 110 to download media content to be stored on the media processing devices 110 and played back locally at a later time.
  • the network 120 may include any combination of local area and/or wide area networks, using both wired and/or wireless communication systems.
  • the network 120 uses standard communications technologies and/or protocols. In some embodiments, all or some of the communication links of the network 120 may be encrypted using any suitable technique.
  • Various components of the media system 100 of FIG. 1 such as the media server 130 and the media processing devices 110 can each include one or more processors and a non-transitory computer- readable storage medium storing instructions therein that when executed cause the one or more processors to carry out the functions attributed to the respective devices described herein.
  • FIG. 2 is a block diagram illustrating an embodiment of a media processing device 110.
  • the media processing device 110 comprises a processor 250, a storage medium 260, input/output devices 270, and sensors 280.
  • Alternative embodiments may include additional or different components.
  • the input/output devices 270 include various input and output devices for receiving inputs to the media processing device 110 and providing outputs from the media processing device 110.
  • the input/output devices 270 may include a display 272, an audio output device 274, a user input device 276, and a communication device 278.
  • the display 272 comprises an electronic device for presenting images or video content such as an LED display panel, an LCD display panel, or other type of display.
  • the audio output device 274 may include one or more integrated speakers or a port for connecting one or more external speakers to play audio associated with the presented media content.
  • the user input device 276 can comprise any device for receiving user inputs such as a touchscreen interface, a game controller, a keyboard, a mouse, a joystick, a voice command controller, a gesture recognition controller, or other input device.
  • the communication device 278 comprises an interface for receiving and transmitting wired or wireless communications with external devices (e.g., via the network 120 or via a direct connection).
  • the communication device 278 may comprise one or more wired ports such as a USB port, an HDMI port, an Ethernet port, etc. or one or more wireless ports for communicating according to a wireless protocol such as Bluetooth, Wireless USB, or Near Field Communication (NFC).
  • the sensors 280 capture various sensor data that can be provided as additional inputs to the media processing device 110.
  • the sensors 280 may include a microphone 282 and an inertial measurement unit (IMU) 284.
  • the microphone 282 captures ambient audio by converting sound into an electrical signal that can be stored or processed by the media processing device 110.
  • the IMU 284 comprises an electronic device for sensing movement and orientation.
  • the IMU 284 may comprise a gyroscope for sensing orientation or angular velocity and an accelerometer for sensing acceleration.
  • the IMU 284 may furthermore process data obtained by direct sensing to convert the measurements into other useful data, such as computing a velocity or position from acceleration data.
  • the IMU 284 may be integrated with the media processing device 110.
  • the IMU 284 may be communicatively coupled to the media processing device 110 but physically separate from it so that the IMU 284 could be mounted in a desired position on the user’s body (e.g., on the head or wrist).
  • the storage medium 260 (e.g., a non-transitory computer-readable storage medium) stores instructions executable by the processor 250 for carrying out functions attributed to the media processing device 110 described herein.
  • the storage medium 260 includes a content presentation module 262, an input processing module 264, and a biometric sensing module 266.
  • the content presentation module 262 presents media content via the display 272 and the audio output device 274.
  • the content presentation module 262 may adapt its content based on information received from the input processing module 264 and the biometric sensing module 266.
  • the input processing module 264 processes inputs received via the user input device 276 and provides processed input data that may control the output of the content presentation module 262.
  • the biometric sensing module 266 obtains sensor data from the sensors 280 such as audio data and IMU data (e.g., accelerometer data, gyro data, or other inertial measurements).
  • the biometric sensing module 266 processes the sensor data to derive biometric information such as a breathing rate and heart rate.
  • breathing rate may be detected from audio data, from IMU data, or from a combination thereof as will be described in further detail below.
  • heart rate may be detected based on IMU data as will be described in further detail below.
  • biometric data can be utilized to determine a mind and body state of the user that may not be apparent from voluntary inputs alone. For example, a measure of relaxation may be determined in an automated way based on the detected heart rate and breathing pattern. This state information may be utilized to automatically adapt the presentation of content based on the user’s detected mind and body state.
  • an interactive media application may guide a user through a meditation experience.
  • a user may be taken through a first exercise to guide the user towards a target breathing rate and heart rate indicative of a particular state of relaxation.
  • the user’s breathing and heart rate can be detected throughout the experience and the content may be updated to move on to a subsequent exercise once the target breathing rate and heart rate are achieved.
  • the content may be updated to provide the user with an alternative exercise (e.g., a simpler exercise).
  • the breathing rate and heart rate may be provided to the user as real-time feedback via a graphical user interface, or the information may be logged to provide to the user as feedback after completing the experience.
  • heart rate and breathing rate can be used in other types of lifestyle applications.
  • avatars in a virtual room may display emotions consistent with the user’s detected mind and body state derived from the detected breathing rate and heart rate. Similarly, other characters in the game may interact with the user’s character based on the detected state.
  • the detected breathing rate and heart rate may be used as a diagnostic tool to identify medical conditions such as shortness of breath or symptoms of a heart condition.
  • real-time emoticons may be generated that reflect the user’s state.
  • the user may be given a test that measures emotional response (derived from the heart rate and breathing rate) to particular inputs that may be useful to identify potential dating matches.
  • the user’s state may be utilized to provide feedback to music or movie creators to gauge how users react when listening to music or watching a movie.
  • FIG. 3 illustrates an embodiment of a process for detecting a breathing rate based on audio data.
  • the media processing device 110 captures 302 a block of ambient audio data (e.g., using the microphone 282).
  • an audio block represents a sequence of audio samples within a limited time range.
  • audio may be captured at a rate of approximately 48,000 samples per second and may be grouped into blocks at a rate of approximately 60-90 blocks per second.
  • a frequency domain transformation is performed 304 on the block to generate a frequency spectrum for the block.
  • a Fast Fourier Transform is performed on the block to generate the frequency spectrum.
  • the frequency spectrum for a given block may comprise a plurality of discrete frequency bins (e.g., 1024 bins) covering the range of, for example, 0 - 24 kHz such that each frequency bin corresponds to a different sub-range and each frequency bin represents an amplitude of the frequency components within the sub-range.
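The block-to-spectrum step above can be sketched in a few lines of numpy. This is a minimal illustration rather than the patent's implementation: the block rate, FFT length, and zero-padding are assumptions chosen to be consistent with the figures in the text (48,000 samples/s, 60-90 blocks/s, roughly 1024 bins over 0-24 kHz).

```python
import numpy as np

SAMPLE_RATE = 48_000                     # samples per second, per the example
BLOCK_RATE = 75                          # blocks per second (within 60-90)
BLOCK_SIZE = SAMPLE_RATE // BLOCK_RATE   # 640 samples per block
FFT_SIZE = 2048                          # assumed zero-padded FFT length

def block_spectrum(block: np.ndarray) -> np.ndarray:
    """Amplitude spectrum of one audio block (FFT_SIZE // 2 + 1 bins)."""
    padded = np.zeros(FFT_SIZE)
    padded[: len(block)] = block          # zero-pad the block to the FFT length
    return np.abs(np.fft.rfft(padded))    # one amplitude per frequency bin

bin_hz = SAMPLE_RATE / FFT_SIZE           # width of each frequency bin in Hz
```

Band-limiting to the breathing range then amounts to keeping only the bins whose center frequency falls between 4 kHz and 5 kHz.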
  • the media processing device 110 may filter 306 the frequency spectrum.
  • the filtering may include applying a band-limiting filter to limit the frequency spectrum to the typical frequency ranges found in an audio capture of breathing.
  • the media processing device 110 may filter the frequency spectrum to limit it to a range of approximately 4kHz - 5kHz (e.g., by discarding bins outside of this frequency range).
  • the filtering may include applying a noise filter to the block.
  • an adaptive noise spectrum is generated that is updated at each block as a weighted combination of a cumulative noise spectrum and the frequency spectrum for the current block.
  • the noise spectrum Noise_i for a block i may be represented as: Noise_i = α · FFT'_i + (1 − α) · Noise_(i−1)
  • where FFT'_i is the pre-noise-filtered frequency spectrum for the block i, Noise_(i−1) is the noise spectrum for the prior block, and α is a weighting parameter
  • the noise spectrum may be subtracted from the frequency spectrum to remove the noise: FFT_i = FFT'_i − Noise_i
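The adaptive noise estimate described above can be sketched as an exponentially weighted running spectrum. The weight α (`ALPHA` below) is an assumed value, since the disclosure does not specify one.

```python
import numpy as np

ALPHA = 0.05  # assumed weight between the current block and the running noise

class AdaptiveNoiseFilter:
    """Tracks Noise_i = alpha * FFT'_i + (1 - alpha) * Noise_(i-1) per block."""

    def __init__(self, n_bins: int):
        self.noise = np.zeros(n_bins)  # cumulative noise spectrum

    def apply(self, spectrum: np.ndarray) -> np.ndarray:
        # Update the running noise estimate with the current block's spectrum
        self.noise = ALPHA * spectrum + (1 - ALPHA) * self.noise
        # Subtract the noise estimate, clamping negative amplitudes to zero
        return np.maximum(spectrum - self.noise, 0.0)
```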
  • the media processing device 110 may average the amplitudes of the frequency peaks within the block to generate an overall frequency peak amplitude. Furthermore, an additional smoothing filter may be applied by averaging the current overall frequency peak amplitude with overall frequency peak amplitudes for prior blocks. This processing results in a sequence of smoothed frequency peak amplitudes over a sequence of blocks. The media processing device 110 identifies 310 a peak frequency pattern across multiple blocks that substantially matches an expected pattern corresponding to inhales and exhales.
  • the media processing device 110 scans audio using a sliding time window of predefined length (e.g., a window of 0.2 seconds) to identify windows in which at least a threshold percentage (e.g., 80%) of the smoothed frequency peak amplitudes are within one or more predefined amplitude ranges (e.g., amplitude ranges consistent with human inhales or exhales).
  • Time windows meeting the above criteria and occurring within a predefined time range of each other (e.g., a time range consistent with a normal human breathing rate) are associated with a detected breath.
  • a pattern of time windows are identified in which the amplitude ranges of the smoothed frequency peaks alternately correspond to the ranges associated with inhales and exhales.
  • the media processing device 110 identifies first time windows in which the amplitudes of the smoothed frequency peaks correspond to a first amplitude range associated with an inhale, identifies second time windows in which the amplitudes of the smoothed frequency peaks correspond to a second amplitude range associated with an exhale, and detects an alternating pattern of inhales and exhales.
  • a pair of time windows meeting the above criteria for an inhale and exhale are detected as a breath.
  • the breathing rate may then be determined 312 based on a rate of the detected breaths that meet the above criteria.
  • the breathing rate may be based on an average time or a median time between breaths meeting the above criteria and that is within a predefined reasonable range of expected breathing rates.
  • FIG. 4 illustrates an embodiment of a process for determining a breathing rate from IMU data.
  • the media processing device 110 obtains 402 IMU data (e.g., from the IMU 284 in a head-mounted sensor) that represents the change in angle and change in position relative to a prior time block (e.g., at a rate of 60-90 blocks per second).
  • the position may comprise a three-dimensional position vector representing a change in (x, y, z) coordinates of the position.
  • the angle may comprise a three- dimensional angle vector representing a change in orientation in three-dimensional space.
  • the media processing device 110 applies 404 filters to the obtained IMU data. For example, a smoothing filter may be applied by combining the current angle and position values with prior values. For example:
  • Δx_i = γ · Δx'_i + (1 − γ) · Δx_(i−1)
  • Δθ_i = γ · Δθ'_i + (1 − γ) · Δθ_(i−1)
  • where Δx'_i is the pre-filtered change in position for the block i
  • where Δθ'_i is the pre-filtered change in angle for the block i
  • where Δx_i and Δθ_i are the smoothed changes in position and angle for the block i, and γ is a weighting parameter
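The smoothing filter applied to the IMU deltas is an exponential moving average; a one-step sketch (with an assumed weight γ, which the disclosure does not specify) might look like:

```python
GAMMA = 0.3  # assumed weighting gamma

def smooth(raw: tuple[float, float, float],
           prev: tuple[float, float, float]) -> tuple[float, float, float]:
    """One smoothing step for a per-block change in position or angle:
    combines the current raw delta with the previous smoothed delta."""
    return tuple(GAMMA * r + (1 - GAMMA) * p for r, p in zip(raw, prev))
```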
  • the media processing device 110 identifies 406 a window of smoothed IMU data meeting predefined criteria for a detected breath.
  • the media processing device identifies a window in which the smoothed change in position values and/or the smoothed change in angle values is within a predefined expected range over the time window to identify breaths. For example, movements outside of respective predefined ranges for the smoothed change in position and smoothed change in angle values may be filtered out (e.g., discarded).
  • the predefined range associated with the smoothed change in position may correspond to the expected vertical movement amplitude indicative of a breath (e.g., 0.8 - 10 millimeters in the vertical direction).
  • the predefined range associated with the smoothed change in angle may correspond to an expected change in pitch or rotation about the x-axis (i.e., a left-right axis parallel with a width of the human body) indicative of a breath. Breaths may be detected based on the filtered and smoothed IMU data. For example, for each window of a sliding time window (e.g., 0.2 second windows), the filtered and smoothed IMU data is analyzed to detect a window in which the IMU data is within the respective ranges for both position and angle. In an embodiment, the lower bound of the range may decrease from the beginning of the time window to the end of the time window to correspond to an expected slowing of the head movement towards the end of an inhale.
  • the predefined amplitude range for vertical position may be set to 0.8 - 10 millimeters at the beginning of the time window and decrease to 0.4 - 10 millimeters at the end of the time window.
  • a breath is detected when both the position and angle data meet the above criteria for a given time window.
  • a breath may be detected when either one of the position or angle data meets the above criteria for a given time window.
  • the IMU 284 may obtain only one of the position or angle data without necessarily obtaining both.
  • the media processing device 110 may separately detect inhales and exhales and only detect a breath when both are detected in relative time proximity.
  • the respective ranges for comparing the IMU data may be different than the ranges for detecting an inhale.
  • the predefined range for the smoothed change in position may comprise a positive value in the vertical direction to represent upward movement of the head.
  • the predefined range for the smoothed change in position may comprise a negative value in the vertical direction to represent downward movement of the head.
  • the predefined range for the smoothed change in angle may comprise a positive value about the x-axis to represent front-to-back rotation of the head.
  • the predefined range for the smoothed change in angle may comprise a negative value about the x-axis to represent back-to-front rotation of the head.
  • the lower bound of the predefined range may increase from the beginning of the time window to the end of the time window. The breathing rate may then be determined 408 based on the detected breaths over a range of time windows (e.g., based on an average or median time period between detected breaths).
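The time-varying amplitude bound described above can be illustrated with a small helper. The 0.8-10 mm range and the 0.4 mm end-of-window bound come from the examples in the text; interpolating the lower bound linearly across the window is an assumption.

```python
WINDOW_S = 0.2                       # sliding-window length from the example
START_LOW_MM, END_LOW_MM = 0.8, 0.4  # lower bound at window start / end
HIGH_MM = 10.0                       # upper amplitude bound

def lower_bound_mm(t_frac: float) -> float:
    """Lower amplitude bound at fraction t_frac (0..1) through the window,
    interpolated linearly from 0.8 mm down to 0.4 mm."""
    return START_LOW_MM + (END_LOW_MM - START_LOW_MM) * t_frac

def in_breath_range(dz_mm: float, t_frac: float) -> bool:
    """Whether a vertical movement sample falls in the breath amplitude range."""
    return lower_bound_mm(t_frac) <= abs(dz_mm) <= HIGH_MM
```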
  • a combination of IMU data and audio data may be used to identify the breathing patterns.
  • the media processing device 110 may concurrently run the processes of FIG. 3 and FIG. 4 to determine when either one of the processes detects a breath. If a breath is detected with the IMU data but not the audio data, the audio amplitude range may be extended by a predefined percentage (e.g., 15%). If a breath is then detected based on the audio data, the breath detection is confirmed. Otherwise, no breath is detected.
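The cross-check between the two detectors might be structured as below. The `audio_detect` callback and the range-widening arithmetic are illustrative assumptions; only the 15% figure comes from the text.

```python
from typing import Callable

EXTEND = 0.15  # widen the audio amplitude range by 15%, per the text

def confirm_breath(audio_detect: Callable[[float, float], bool],
                   amp_range: tuple[float, float],
                   imu_detected: bool) -> bool:
    """Confirm an IMU-detected breath against audio. If audio misses it,
    retry once with the amplitude range widened by EXTEND."""
    if not imu_detected:
        return False
    low, high = amp_range
    if audio_detect(low, high):
        return True
    return audio_detect(low * (1 - EXTEND), high * (1 + EXTEND))
```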
  • FIG. 5 illustrates an example embodiment of a process for detecting a heart rate using IMU data.
  • the media processing device 110 obtains 502 IMU data similar to detecting the breathing rate described above.
  • the media processing device 110 applies 504 filtering to the IMU data.
  • the filtering may include the same smoothing filter used when identifying breathing as described above, to obtain smoothed change in position Ax t and smoothed change in angle Aqi for each time block i.
  • the obtained values may be additionally filtered to discard movements determined to be related to breathing.
  • the media processing device 110 identifies 506 a window of smoothed IMU data meeting predefined criteria for a detected heart beat.
  • the media processing device 110 identifies a window in which the smoothed change in position and/or the smoothed change in angle values are within a predefined expected range that is indicative of a heart beat.
  • the media processing device 110 may filter the smoothed position data to filter out changes in position outside the range of 0.1 - 1 millimeter in the vertical direction.
  • the media processing device 110 may then detect heart beats from the filtered data.
  • the media processing device 110 may detect a heart beat when both the smoothed change in position and the smoothed change in angle are within their respective predefined ranges.
  • a heart beat may be dedicated when either one of the position or angle data meet the above criteria.
  • the IMU 284 may obtain only one of the position or angle data without necessarily obtaining both.
  • the heart rate is determined 508 based on the timing of the detected heart beats.
  • the media processing device 110 may enforce a smoothing function on the detected heart rate to reduce erroneous detections.
  • the media processing device 110 may identify a sequence of time differences between consecutive detected heart beats. The values may be compared against a baseline heart rate (e.g., a previously determined heart rate for a preceding time period) to determine if the time differences are within a predefined threshold percentage (e.g., 10%) of the baseline heart rate. For example, if the time differences are determined to be (0.85 seconds, 0.84 seconds, 0.87 seconds,
  • the media processing device 110 determines that the data likely does not correspond to heart rate because heart rate generally does not change that quickly. However, if a heart rate within the threshold range of the previously determined value is not detected within an allowed time window and the data is self-consistent with a different heart rate, it may be determined that the previously detected heart rate was erroneous and the heart rate may be reset based on the current data. Generally, the media processing device 110 may identify a heart rate corresponding to beats occurring at a rate of approximately 45-120 cycles per minute, which is the typical range of heart rates.
  • the breathing rate and heart rate may be combined (optionally with other inputs or biometric data) to generate an overall metric representing a state of the user.
  • the metric may represent a measure of the user’s relaxation state.
  • the overall metric may be generated, for example, as a weighted combination of the heart rate and breathing rate.
  • all or parts of the processes of FIGs. 3-5 may be performed on the media server 130 instead of on the media processing devices 110.
  • the media processing device 110 may transmit the sensed audio data and IMU data to the media server 130, and the media server 130 may calculate the breathing rate and heart rate based on the received data.
  • Coupled along with its derivatives.
  • the term“coupled” as used herein is not necessarily limited to two or more elements being in direct physical or electrical contact. Rather, the term“coupled” may also encompass two or more elements that are not in direct contact with each other, but yet still co-operate or interact with each other.
  • any reference to“one embodiment” or“an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment.
  • the appearances of the phrase“in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.


Abstract

A media system controls presentation of media content based on a detected state of a user. The user state may be detected based on biometric inputs that may be derived from IMU and/or microphone data. The biometric inputs may include a heart rate detected from IMU data representing motion of a head-mounted media processing device, a breathing rate detected from the same IMU data, a breathing rate detected based on microphone data, or a combination thereof.

Description

ADAPTING MEDIA CONTENT TO A SENSED STATE OF A USER
INVENTORS:
ANDREJA DJOKOVIC ZACHARY NORMAN NANEA REEVES JOSEPH UILDRIKS WALTER JOHN GREENLEAF
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Patent Application No. 62/634,120 filed on February 22, 2018, the contents of which are incorporated by reference herein.
BACKGROUND
TECHNICAL FIELD
[0002] This disclosure relates generally to a media content system, and more specifically, to a media device that provides content based on a detected state of a user.
DESCRIPTION OF THE RELATED ART
[0003] In media content systems, it is often desirable to adapt the content served to a particular user in order to provide the user with an interactive experience. Conventionally, interactive content adapts to the user based on deliberate inputs provided by the user. For example, a user may provide inputs through a controller, voice commands, head movements, or gestures. However, such conventional media content systems cannot always adapt to the user’s mind and body state because this state may not necessarily be reflected in the user’s deliberate inputs. Furthermore, requiring the user to actively provide too many inputs may degrade the user’s overall experience.
SUMMARY
[0004] In a first embodiment, a media processing device adapts content based on a breathing rate detected based on captured audio. The media processing device presents first content on a display device. A microphone captures ambient audio. A frequency domain transformation is performed on a current block of the ambient audio to generate a frequency spectrum of the current block. The frequency spectrum is filtered to generate a filtered frequency spectrum limited to a predefined frequency range associated with breathing noise. One or more peak frequencies in the filtered frequency spectrum are identified. A breath is detected by identifying a pattern of peak frequencies across a range of blocks that meets predefined criteria. The breathing rate is determined based on the detected breath and a history of previously detected breaths. Second content is presented on the display device based on the detected breathing rate falling within a predefined range.
[0005] In a second embodiment, a media processing device adapts content based on a breathing rate detected based on motion data. The media processing device presents first content on a display device. Motion data is obtained from an inertial measurement device. The motion data is filtered by applying a smoothing function to the motion data to generate smoothed motion data. A breath is detected based on identifying that the smoothed motion data includes movement constrained to one or more predefined amplitude ranges associated with breathing movement over a predefined time window. A breathing rate is identified based on the detected breath. Second content is presented on the display device responsive to the detected breathing rate falling within a predefined breathing rate range.
[0006] In a third embodiment, a media processing device adapts content based on a heart rate detected based on motion data. The media processing device presents first content on a display device. Motion data is obtained from an inertial measurement device. The motion data is filtered by applying a smoothing function to the motion data to generate smoothed motion data. A heart beat is detected based on identifying that the smoothed motion data includes movement within a predefined amplitude range.
A heart rate is identified based on the detected heart beat. The second content is presented on the display device responsive to the detected heart rate falling within a predefined heart rate range.
[0007] Embodiments may include a method, a non-transitory computer-readable storage medium, and a computer device for performing the above-described processes.
BRIEF DESCRIPTIONS OF THE DRAWINGS
[0008] The disclosed embodiments have other advantages and features which will be more readily apparent from the following detailed description of the invention and the appended claims, when taken in conjunction with the accompanying drawings, in which:
[0009] Figure (or "FIG.") 1 illustrates an example embodiment of a media system.
[0010] FIG. 2 illustrates an example embodiment of a media processing device.
[0011] FIG. 3 illustrates an example embodiment of a process for detecting a breathing rate based on audio.
[0012] FIG. 4 illustrates an example embodiment of a process for detecting a breathing rate based on motion data.
[0013] FIG. 5 illustrates an example embodiment of a process for detecting a heart rate based on motion data.
DETAILED DESCRIPTION
[0014] The figures and the following description relate to preferred embodiments by way of illustration only. It should be noted that from the following discussion, alternative embodiments of the structures and methods disclosed herein will be readily recognized as viable alternatives that may be employed without departing from the principles of what is claimed.
[0015] Reference will now be made in detail to several embodiments, examples of which are illustrated in the accompanying figures. It is noted that wherever practicable similar or like reference numbers may be used in the figures and may indicate similar or like functionality. The figures depict
embodiments of the disclosed system (or method) for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
[0016] FIG. 1 is a block diagram of a media system 100, according to one embodiment. The media system 100 includes a network 120, a media server 130, and a plurality of media processing devices 110. In alternative configurations, different and/or additional components may be included in the media content system 100.
[0017] The media processing device 110 comprises a computer device for processing and presenting media content such as audio, images, video, or a combination thereof. The media processing device 110 may furthermore detect various inputs including voluntary user inputs (e.g., input via a controller, voice command, body movement, or other convention control mechanism) and various biometric inputs (e.g., breathing patterns, heart rate, etc.). The media processing device 110 may control the presentation of the media content in response to the inputs. The media processing device 110 may comprise, for example, a mobile device, a tablet, a laptop computer, or a desktop computer. In an embodiment, the media processing device 110 may include a head-mounted display such as a virtual reality headset or an augmented reality headset. An embodiment of a media processing device 110 is described in further detail below with respect to FIG. 2.
[0018] The media server 130 comprises one or more computing devices for delivering media content to the media processing devices 110 via the network 120. For example, the media server 130 may stream media content to the media processing devices 110 to enable the media processing devices 110 to present the media content in real-time or near real-time. Alternatively, the media server 130 may enable the media processing devices 110 to download media content to be stored on the media processing devices 110 and played back locally at a later time.
[0019] The network 120 may include any combination of local area and/or wide area networks, using both wired and/or wireless communication systems. In one embodiment, the network 120 uses standard communications technologies and/or protocols. In some embodiments, all or some of the
communication links of the network 120 may be encrypted using any suitable technique.
[0020] Various components of the media system 100 of FIG. 1 such as the media server 130 and the media processing devices 110 can each include one or more processors and a non-transitory computer- readable storage medium storing instructions therein that when executed cause the one or more processors to carry out the functions attributed to the respective devices described herein.
[0021] FIG. 2 is a block diagram illustrating an embodiment of a media processing device 110. In the illustrated embodiment, the media processing device 110 comprises a processor 250, a storage medium 260, input/output devices 270, and sensors 280. Alternative embodiments may include additional or different components.
[0022] The input/output devices 270 include various input and output devices for receiving inputs to the media processing device 110 and providing outputs from the media processing device 110. In an embodiment, the input/output devices 270 may include a display 272, an audio output device 274, a user input device 276, and a communication device 278. The display 272 comprises an electronic device for presenting images or video content such as an LED display panel, an LCD display panel, or other type of display. The audio output device 274 may include one or more integrated speakers or a port for connecting one or more external speakers to play audio associated with the presented media content.
The user input device can comprise any device for receiving user inputs such as a touchscreen interface, a game controller, a keyboard, a mouse, a joystick, a voice command controller, a gesture recognition controller, or other input device. The communication device 278 comprises an interface for receiving and transmitting wired or wireless communications with external devices (e.g., via the network 120 or via a direct connection). For example, the communication device 278 may comprise one or more wired ports such as a USB port, an HDMI port, an Ethernet port, etc. or one or more wireless ports for communicating according to a wireless protocol such as Bluetooth, Wireless USB, Near Field
Communication (NFC), etc.
[0023] The sensors 280 capture various sensor data that can be provided as additional inputs to the media processing device 110. For example, the sensors 280 may include a microphone 282 and an inertial measurement unit (IMU) 284. The microphone 282 captures ambient audio by converting sound into an electrical signal that can be stored or processed by the media processing device 110. The IMU 284 comprises an electronic device for sensing movement and orientation. For example, the IMU 284 may comprise a gyroscope for sensing orientation or angular velocity and an accelerometer for sensing acceleration. The IMU 284 may furthermore process data obtained by direct sensing to convert the measurements into other useful data, such as computing a velocity or position from acceleration data. In an embodiment, the IMU 284 may be integrated with the media processing device 110.
Alternatively, the IMU 284 may be communicatively coupled to the media processing device 110 but physically separate from it so that the IMU 284 could be mounted in a desired position on the user’s body (e.g., on the head or wrist).
[0024] The storage medium 260 (e.g., a non-transitory computer-readable storage medium) stores instructions executable by the processor 250 for carrying out functions attributed to the media processing device 110 described herein. In an embodiment, the storage medium 260 includes a content presentation module 262, an input processing module 264, and a biometric sensing module 266. The content presentation module 262 presents media content via the display 272 and the audio output device 274. The content presentation module 262 may adapt its content based on information received from the input processing module 264 and the biometric sensing module 266. The input processing module 264 processes inputs received via the user input device 276 and provides processed input data that may control the output of the content presentation module 262. The biometric sensing module 266 obtains sensor data from the sensors 280 such as audio data and IMU data (e.g., accelerometer data, gyro data, or other inertial measurements). The biometric sensing module 266 processes the sensor data to derive biometric information such as a breathing rate and heart rate. For example, breathing rate may be detected from audio data, from IMU data, or from a combination thereof as will be described in further detail below. Furthermore, heart rate may be detected based on IMU data as will be described in further detail below. These types of biometric data can be utilized to determine a mind and body state of the user that may not be apparent from voluntary inputs alone. For example, a measure of relaxation may be determined in an automated way based on the detected heart rate and breathing pattern. This state information may be utilized to automatically adapt the presentation of content based on the user's detected mind and body state.
[0025] In an example application, an interactive media application may guide a user through a meditation experience. In this example experience, a user may be taken through a first exercise to guide the user towards a target breathing rate and heart rate indicative of a particular state of relaxation. The user’s breathing and heart rate can be detected throughout the experience and the content may be updated to move on to a subsequent exercise once the target breathing rate and heart rate are achieved. Alternatively, if the user is having difficulty achieving the target breathing rate and heart rate, the content may be updated to provide the user with an alternative exercise (e.g., a simpler exercise).
Furthermore, the breathing rate and heart rate (or an overall relaxation state metric) may be provided to the user as real-time feedback via a graphical user interface, or the information may be logged to provide to the user as feedback after completing the experience.
[0026] In other examples, heart rate and breathing rate can be used in other types of lifestyle
applications that provide feedback to the user relating to a detected rest state, level of focus, or other information. In gaming applications, for example, avatars in a virtual room may display emotions consistent with the user’s detected mind and body state derived from the detected breathing rate and heart rate. Similarly, other characters in the game may interact with the user’s character based on the detected state. In medical applications, the detected breathing rate and heart rate may be used as a diagnostic tool to identify medical conditions such as shortness of breath or symptoms of a heart condition. In dating or social applications, real-time emoticons may be generated that reflect the user’s state. Furthermore, the user may be given a test that measures emotional response (derived from the heart rate and breathing rate) to particular inputs that may be useful to identify potential dating matches. In other applications, the user’s state may be utilized to provide feedback to music or movie creators to gauge how users react when listening to music or watching a movie.
[0027] FIG. 3 illustrates an embodiment of a process for detecting a breathing rate based on audio data. The media processing device 110 captures 302 a block of ambient audio data (e.g., using the
microphone 282). Here, an audio block represents a sequence of audio samples within a limited time range. For example, audio may be captured at a rate of approximately 48,000 samples per second and may be grouped into blocks at a rate of approximately 60-90 blocks per second. A frequency domain transformation is performed 304 on the block to generate a frequency spectrum for the block. For example, in one embodiment, a Fast Fourier Transform (FFT) is performed on the block to generate the frequency spectrum. The frequency spectrum for a given block may comprise a plurality of discrete frequency bins (e.g., 1024 bins) covering the range of, for example, 0 - 24 kHz such that each frequency bin corresponds to a different sub-range and each frequency bin represents an amplitude of the frequency components within the sub-range. The media processing device 110 may filter 306 the frequency spectrum. Here, the filtering may include applying a band-limiting filter to limit the frequency spectrum to the typical frequency ranges found in an audio capture of breathing. For example, in one embodiment, the media processing device 110 may filter the frequency spectrum to limit it to a range of approximately 4kHz - 5kHz (e.g., by discarding bins outside of this frequency range). Additionally, the filtering may include applying a noise filter to the block. For example, in one embodiment, an adaptive noise spectrum is generated that is updated at each block as a weighted combination of a cumulative noise spectrum and the frequency spectrum for the current block. For example, the noise spectrum Noise for a block i may be represented as:
Noise_i = α · FFT′_i + (1 − α) · Noise_(i−1)

where FFT′_i is the pre-noise-filtered frequency spectrum for the block i, and α is a filtering parameter such that 0 < α < 1 (e.g., α = 0.05). The noise spectrum may be subtracted from the frequency spectrum to remove the noise:

FFT_i = FFT′_i − Noise_i

[0028] where FFT_i is the noise-filtered frequency spectrum.
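The adaptive noise estimate and subtraction above can be sketched in Python. This is a minimal illustration only: the function names are hypothetical, and the clamp at zero amplitude is a practical tweak not stated in the text.

```python
def update_noise_spectrum(noise_prev, fft_block, alpha=0.05):
    """Exponentially weighted noise estimate per bin:
    Noise_i = alpha * FFT'_i + (1 - alpha) * Noise_(i-1)."""
    return [alpha * f + (1 - alpha) * n for f, n in zip(fft_block, noise_prev)]

def denoise_block(fft_block, noise):
    """Subtract the running noise estimate from each bin, clamping negative
    amplitudes to zero (an assumption; the text subtracts directly)."""
    return [max(f - n, 0.0) for f, n in zip(fft_block, noise)]
```

In use, the noise spectrum would be updated once per audio block before the subtraction, e.g. `noise = update_noise_spectrum(noise, fft)` followed by `clean = denoise_block(fft, noise)`.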
[0029] The media processing device 110 identifies 308 one or more peak frequencies in the noise filtered frequency spectrum of the block. For example, in one embodiment, a single peak frequency is identified for each block that corresponds to the frequency having the largest amplitude within the block. Alternatively, a set of peak frequencies may be identified for each block corresponding to the N frequencies having the largest amplitudes for each block (e.g., N = 3). In another embodiment, the peak frequencies may correspond to any frequencies having amplitudes exceeding a predefined threshold amplitude. In this case, the number of peak frequencies in each block may be variable, and some blocks may not include any peak frequencies. In embodiments where multiple frequency peaks are identified in a block, the media processing device 110 may average the amplitudes of the frequency peaks within the block to generate an overall frequency peak amplitude. Furthermore, an additional smoothing filter may be applied by averaging the current overall frequency peak amplitude with overall frequency peak amplitudes for prior blocks. This processing results in a sequence of smoothed frequency peak amplitudes over a sequence of blocks. The media processing device 110 identifies 310 a peak frequency pattern across multiple blocks that substantially matches an expected pattern corresponding to inhales and exhales. For example, in one embodiment, the media processing device 110 scans audio using a sliding time window of predefined length (e.g., a window of 0.2 seconds) to identify windows in which at least a threshold percentage (e.g., 80%) of the smoothed frequency peak amplitudes are within one or more predefined amplitude ranges (e.g., amplitude ranges consistent with human inhales or exhales). Time windows meeting the above criteria and occurring within a predefined time range of each other (e.g., a time range consistent with normal human breathing rate) are then identified.
In an embodiment, a pattern of time windows is identified in which the amplitude ranges of the smoothed frequency peaks alternately correspond to the ranges associated with inhales and exhales. For example, the media processing device 110 identifies first time windows in which the amplitudes of the smoothed frequency peaks correspond to a first amplitude range associated with an inhale, identifies second time windows in which the amplitudes of the smoothed frequency peaks correspond to a second amplitude range associated with an exhale, and detects an alternating pattern of inhales and exhales. In this case, a pair of time windows meeting the above criteria for an inhale and exhale is detected as a breath. The breathing rate may then be determined 312 based on a rate of the detected breaths that meet the above criteria. For example, the breathing rate may be based on an average time or a median time between breaths meeting the above criteria and that is within a predefined reasonable range of expected breathing rates.
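The sliding-window test described above can be sketched as follows. The window length of 15 blocks (about 0.2 seconds at roughly 75 blocks per second) and the 80% threshold follow the examples in the text; the predicate `in_range`, which would encode the inhale or exhale amplitude band, is a hypothetical caller-supplied function.

```python
def detect_breath_windows(peak_amps, in_range, window=15, frac=0.8):
    """Return start indices of sliding windows in which at least `frac` of the
    smoothed frequency peak amplitudes satisfy the amplitude-range predicate."""
    hits = []
    for start in range(len(peak_amps) - window + 1):
        block = peak_amps[start:start + window]
        if sum(1 for a in block if in_range(a)) / window >= frac:
            hits.append(start)
    return hits
```

A breath would then be detected by pairing an inhale window with a following exhale window occurring within the expected breathing-period range, as described above.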
[0030] FIG. 4 illustrates an embodiment of a process for determining a breathing rate from IMU data. The media processing device 110 obtains 402 IMU data (e.g., from the IMU 284 in a head-mounted sensor) that represents the change in angle and change in position relative to a prior time block (e.g., at a rate of 60-90 blocks per second). Here, the position may comprise a three-dimensional position vector representing a change in (x, y, z) coordinates of the position. The angle may comprise a three-dimensional angle vector representing a change in orientation in three-dimensional space. The media processing device 110 applies 404 filters to the obtained IMU data. For example, a smoothing filter may be applied by combining the current angle and position values with prior values. For example:

Δx_i = β · Δx′_i + (1 − β) · Δx_(i−1)
Δθ_i = γ · Δθ′_i + (1 − γ) · Δθ_(i−1)

where Δx′_i is the pre-filtered change in position for the block i, Δx_i is the smoothed change in position for the block i, β is a first filtering parameter (e.g., β = 0.7), Δθ′_i is the pre-filtered change in angle for the block i, Δθ_i is the smoothed change in angle for the block i, and γ is a second filtering parameter (e.g., γ = 0.7). The media processing device 110 then identifies 406 a window of smoothed IMU data meeting predefined criteria for a detected breath. For example, the media processing device identifies a window in which the smoothed change in position values and/or the smoothed change in angle values are within a predefined expected range over the time window to identify breaths. For example, movements outside of respective predefined ranges for the smoothed change in position and smoothed change in angle values may be filtered out (e.g., discarded). Here, the predefined range associated with the smoothed change in position may correspond to the expected vertical movement amplitude indicative of a breath (e.g., 0.8 - 10 millimeters in the vertical direction). Similarly, the predefined range associated with the smoothed change in angle may correspond to an expected change in pitch or rotation about the x-axis (i.e., a left-right axis parallel with the width of the human body) indicative of a breath. Breaths may be detected based on the filtered and smoothed IMU data. For example, for each window of a sliding time window (e.g., 0.2 second windows), the filtered and smoothed IMU data is analyzed to detect a window in which the IMU data is within the respective ranges for both position and angle. In an embodiment, the lower bound of the range may decrease from the beginning of the time window to the end of the time window to correspond to an expected slowing of the head movement towards the end of an inhale.
For example, the predefined amplitude range for vertical position may be set to 0.8 - 10 millimeters at the beginning of the time window and decrease to 0.4 - 10 millimeters at the end of the time window. In an embodiment, a breath is detected when both the position and angle data meet the above criteria for a given time window. In alternative embodiments, a breath may be detected when either one of the position or angle data meets the above criteria for a given time window. In some embodiments, the IMU 284 may obtain only one of the position or angle data without necessarily obtaining both.
[0031] In an embodiment, instead of only detecting inhales, the media processing device 110 may separately detect inhales and exhales and only detect a breath when both are detected in relative time proximity. To detect an exhale, the respective ranges for comparing the IMU data may be different than the ranges for detecting an inhale. For example, when detecting an inhale, the predefined range for the smoothed change in position may comprise a positive value in the vertical direction to represent upward movement of the head. When detecting an exhale, the predefined range for the smoothed change in position may comprise a negative value in the vertical direction to represent downward movement of the head. Furthermore, when detecting an inhale, the predefined range for the smoothed change in angle may comprise a positive value about the x-axis to represent front-to-back rotation of the head. When detecting an exhale, the predefined range for the smoothed change in angle may comprise a negative value about the x-axis to represent back-to-front rotation of the head. Furthermore, when detecting an exhale, the lower bound of the predefined range may increase from the beginning of the time window to the end of the time window. The breathing rate may then be determined 408 based on the detected breaths over a range of time windows (e.g., based on an average or median time period between detected breaths).
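The smoothing filter and per-window breath test of FIG. 4 can be sketched as follows. The position range of 0.8 - 10 millimeters follows the example in the text; the angle range here is purely illustrative, the initial smoothed values are assumed to start at zero, and the window-dependent relaxation of the lower bound is omitted for brevity.

```python
def smooth_imu(dx_raw, dth_raw, beta=0.7, gamma=0.7):
    """Exponentially smooth per-block position and angle deltas:
    Dx_i = beta * Dx'_i + (1 - beta) * Dx_(i-1), and likewise with gamma
    for the angle. Prior values are assumed to start at zero."""
    dx, dth = [], []
    px, pth = 0.0, 0.0
    for rx, rth in zip(dx_raw, dth_raw):
        px = beta * rx + (1 - beta) * px
        pth = gamma * rth + (1 - gamma) * pth
        dx.append(px)
        dth.append(pth)
    return dx, dth

def breath_in_window(dx, dth, pos_range=(0.8, 10.0), ang_range=(0.5, 5.0)):
    """Detect a breath in one sliding window when every smoothed vertical
    position delta (mm) and every smoothed pitch delta (degrees, an assumed
    unit and range) lie within their predefined ranges."""
    pos_ok = all(pos_range[0] <= v <= pos_range[1] for v in dx)
    ang_ok = all(ang_range[0] <= v <= ang_range[1] for v in dth)
    return pos_ok and ang_ok
```

Separate inhale and exhale detection, as described in paragraph [0031], would simply apply the same test with sign-flipped ranges.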
[0032] In another alternative embodiment, a combination of IMU data and audio data may be used to identify the breathing patterns. For example, the media processing device 110 may concurrently run the processes of FIG. 3 and FIG. 4 to determine when either one of the processes detects a breath. If a breath is detected with the IMU data but not the audio data, the audio amplitude range may be extended by a predefined percentage (e.g., 15%). If a breath is then detected based on the audio data, the breath detection is confirmed. Otherwise, no breath is detected.
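One reading of this fusion step can be sketched as follows. The 15% extension follows the text; the function names and the treatment of an audio-only detection are assumptions, since the text leaves that case open.

```python
def fuse_breath_detection(imu_breath, audio_breath_fn, base_range, extend=0.15):
    """Cross-check an IMU breath against audio. If audio missed it, widen the
    audio amplitude range by `extend` and retry; confirm only if audio then
    agrees. With no IMU breath, audio alone decides (an assumption)."""
    lo, hi = base_range
    if not imu_breath:
        return audio_breath_fn(base_range)
    if audio_breath_fn(base_range):
        return True  # both processes agree
    widened = (lo * (1 - extend), hi * (1 + extend))
    return audio_breath_fn(widened)  # confirmed only on the widened retry
```

Here `audio_breath_fn` stands in for a re-run of the FIG. 3 amplitude test against a given range.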
[0033] FIG. 5 illustrates an example embodiment of a process for detecting a heart rate using IMU data. The media processing device 110 obtains 502 IMU data in a manner similar to detecting the breathing rate as described above. The media processing device 110 applies 504 filtering to the IMU data. Here, the filtering may include the same smoothing filter used when identifying breathing as described above, to obtain the smoothed change in position Δx_i and smoothed change in angle Δθ_i for each time block i. In this process, the obtained values may be additionally filtered to discard movements determined to be related to breathing. The media processing device 110 identifies 506 a window of smoothed IMU data meeting predefined criteria for a detected heart beat. For example, the media processing device 110 identifies a window in which the smoothed change in position and/or the smoothed change in angle values are within a predefined expected range that is indicative of a heart beat. Here, the media processing device 110 may filter the smoothed position data to filter out changes in position outside the range of 0.1 - 1 millimeter in the vertical direction. The media processing device 110 may then detect heart beats from the filtered data. For example, the media processing device 110 may detect a heart beat when both the smoothed change in position and the smoothed change in angle are within their respective predefined ranges. In alternative embodiments, a heart beat may be detected when either one of the position or angle data meets the above criteria. In some embodiments, the IMU 284 may obtain only one of the position or angle data without necessarily obtaining both.
[0034] The heart rate is determined 508 based on the timing of the detected heart beats. In an embodiment, the media processing device 110 may apply a smoothing function to the detected heart rate to reduce erroneous detections. Here, the media processing device 110 may identify a sequence of time differences between consecutive detected heart beats. The values may be compared against a baseline heart rate (e.g., a previously determined heart rate for a preceding time period) to determine if the time differences are within a predefined threshold percentage (e.g., 10%) of the baseline heart rate. For example, if the time differences are determined to be {0.85 seconds, 0.84 seconds, 0.87 seconds, ...} and the previously detected heart rate corresponds to 0.4 second time periods, the media processing device 110 determines that the data likely does not correspond to the heart rate because heart rate generally does not change that quickly. However, if a heart rate within the threshold range of the previously determined value is not detected within an allowed time window and the data is self-consistent with a different heart rate, it may be determined that the previously detected heart rate was erroneous and the heart rate may be reset based on the current data. Generally, the media processing device 110 may identify a heart rate corresponding to beats occurring at a rate of approximately 45-120 cycles per minute, which is the typical range of heart rates.
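The plausibility check in this paragraph can be sketched as below. The 10% threshold and the 45–120 cycles-per-minute window come from the text; the function shape and state handling (returning the baseline until the caller decides to reset after the allowed time window) are assumptions.

```python
THRESHOLD_PCT = 0.10        # predefined threshold percentage (10%)
BPM_MIN, BPM_MAX = 45, 120  # typical heart-rate range from paragraph [0034]

def update_heart_rate(baseline_bpm, beat_intervals_s):
    """Accept a new rate only if it is within 10% of the baseline.

    beat_intervals_s is the sequence of time differences (in seconds)
    between consecutive detected heart beats. Otherwise the baseline is
    kept; a caller could reset it after the allowed time window elapses
    with self-consistent data at a different rate.
    """
    avg_interval = sum(beat_intervals_s) / len(beat_intervals_s)
    candidate_bpm = 60.0 / avg_interval
    if not (BPM_MIN <= candidate_bpm <= BPM_MAX):
        return baseline_bpm  # outside the physiological range
    if abs(candidate_bpm - baseline_bpm) <= THRESHOLD_PCT * baseline_bpm:
        return candidate_bpm  # plausible update within 10% of baseline
    return baseline_bpm       # likely erroneous detection
```

With the example intervals {0.85, 0.84, 0.87} seconds (about 70 beats per minute) and a baseline corresponding to 0.4 second periods (150 beats per minute), the candidate rate is rejected as implausibly different, matching the example in the text.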
[0035] In an embodiment, the breathing rate and heart rate may be combined (optionally with other inputs or biometric data) to generate an overall metric representing a state of the user. For example, the metric may represent a measure of the user’s relaxation state. The overall metric may be generated, for example, as a weighted combination of the heart rate and breathing rate.
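One way the weighted combination of this paragraph might be realized is sketched below. The weights, resting-value normalization, and target rates are all illustrative assumptions; the specification states only that the metric may be a weighted combination of heart rate and breathing rate.

```python
def relaxation_metric(heart_bpm, breaths_per_min,
                      w_heart=0.6, w_breath=0.4,
                      rest_bpm=60.0, rest_breaths=6.0):
    """Return a relaxation score in [0, 1]; higher means more relaxed.

    Each signal is normalized as closeness to an assumed resting value,
    then the two scores are blended with assumed weights.
    """
    heart_score = max(0.0, 1.0 - abs(heart_bpm - rest_bpm) / rest_bpm)
    breath_score = max(0.0,
                       1.0 - abs(breaths_per_min - rest_breaths) / rest_breaths)
    return w_heart * heart_score + w_breath * breath_score
```

Other biometric inputs mentioned above could be folded in as additional weighted terms of the same form.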
[0036] In other alternative embodiments, all or parts of the processes of FIGs. 3-5 may be performed on the media server 130 instead of on the media processing devices 110. For example, in one embodiment, the media processing device 110 may transmit the sensed audio data and IMU data to the media server 130, and the media server 130 may calculate the breathing rate and heart rate based on the received data.
[0037] Throughout this specification, some embodiments have used the expression "coupled" along with its derivatives. The term "coupled" as used herein is not necessarily limited to two or more elements being in direct physical or electrical contact. Rather, the term "coupled" may also encompass two or more elements that are not in direct contact with each other, but yet still co-operate or interact with each other.
[0038] Likewise, as used herein, the terms "comprises," "comprising," "includes," "including," "has," "having" or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
[0039] In addition, the articles "a" and "an" are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the invention. This description should be read to include one or at least one, and the singular also includes the plural unless it is obvious that it is meant otherwise.
[0040] Finally, as used herein any reference to "one embodiment" or "an embodiment" means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
[0041] Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for the described embodiments as disclosed from the principles herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the scope defined in the appended claims.

Claims

1. A method for adapting content based on a detected heart rate, the method comprising:
presenting first content on a display device;
obtaining motion data from an inertial measurement device;
filtering the motion data to apply a smoothing function to the motion data to generate smoothed motion data;
detecting a heart beat based on identifying that the smoothed motion data includes movement within a predefined amplitude range;
identifying a heart rate based on the detected heart beat; and
presenting second content on the display device responsive to the detected heart rate falling within a predefined heart rate range.
2. The method of claim 1, wherein detecting the heart beat comprises identifying that the smoothed motion data includes a change in vertical position of the inertial measurement device within the predefined amplitude range.
3. The method of claim 1, wherein detecting the heart beat comprises identifying that the smoothed motion data comprises a change in pitch of the inertial measurement device about an axis parallel to a width of a human body within the predefined amplitude range.
4. The method of claim 1, wherein detecting the heart beat comprises identifying that the smoothed motion data includes a change in vertical position of the inertial measurement device within a first predefined amplitude range and identifying that the smoothed motion data comprises a change in pitch of the inertial measurement device about an axis parallel to a width of a human body within a second predefined amplitude range.
5. The method of claim 1, wherein identifying the heart rate comprises:
obtaining a baseline heart rate;
determining a sequence of time differences between consecutive detected heart beats;
determining if the sequence of time differences corresponds to an average rate within a
predefined percentage of the baseline heart rate; and
responsive to the sequence of time differences corresponding to the average rate being within the predefined percentage of the baseline heart rate, determining the heart rate from the average rate for a current time period.
6. The method of claim 1, wherein identifying the heart rate comprises:
obtaining a baseline heart rate;
determining a sequence of time differences between consecutive detected heart beats;
determining if the sequence of time differences corresponds to an average rate within a
predefined percentage of the baseline heart rate; and
responsive to the sequence of time differences corresponding to the average rate not being
within the predefined percentage of the baseline heart rate, determining that the heart rate corresponds to the baseline heart rate for a current time period.
7. The method of claim 6, further comprising:
determining that an average rate of heart beats corresponding to the baseline heart rate is not detected within a predefined allowed time window; and
determining that the average rate of heart beats is consistent with a different heart rate; and
resetting the heart rate to correspond to the different heart rate.
8. A non-transitory computer-readable storage medium storing instructions for adapting content based on a detected heart rate, the instructions when executed by a processor causing the processor to perform steps including:
presenting first content on a display device;
obtaining motion data from an inertial measurement device;
filtering the motion data to apply a smoothing function to the motion data to generate smoothed motion data;
detecting a heart beat based on identifying that the smoothed motion data includes movement within a predefined amplitude range;
identifying a heart rate based on the detected heart beat; and
presenting second content on the display device responsive to the detected heart rate falling
within a predefined heart rate range.
9. The non-transitory computer-readable storage medium of claim 8, wherein detecting the heart beat comprises identifying that the smoothed motion data includes a change in vertical position of the inertial measurement device within the predefined amplitude range.
10. The non-transitory computer-readable storage medium of claim 8, wherein detecting the heart beat comprises identifying that the smoothed motion data comprises a change in pitch of the inertial measurement device about an axis parallel to a width of a human body within the predefined amplitude range.
11. The non-transitory computer-readable storage medium of claim 8, wherein detecting the heart beat comprises identifying that the smoothed motion data includes a change in vertical position of the inertial measurement device within a first predefined amplitude range and identifying that the smoothed motion data comprises a change in pitch of the inertial measurement device about an axis parallel to a width of a human body within a second predefined amplitude range.
12. The non-transitory computer-readable storage medium of claim 8, wherein identifying the heart rate comprises:
obtaining a baseline heart rate;
determining a sequence of time differences between consecutive detected heart beats;
determining if the sequence of time differences corresponds to an average rate within a
predefined percentage of the baseline heart rate; and
responsive to the sequence of time differences corresponding to the average rate being within the predefined percentage of the baseline heart rate, determining the heart rate from the average rate for a current time period.
13. The non-transitory computer-readable storage medium of claim 8, wherein identifying the heart rate comprises:
obtaining a baseline heart rate;
determining a sequence of time differences between consecutive detected heart beats;
determining if the sequence of time differences corresponds to an average rate within a
predefined percentage of the baseline heart rate; and
responsive to the sequence of time differences corresponding to the average rate not being
within the predefined percentage of the baseline heart rate, determining that the heart rate corresponds to the baseline heart rate for a current time period.
14. The non-transitory computer-readable storage medium of claim 13, further comprising:
determining that an average rate of heart beats corresponding to the baseline heart rate is not detected within a predefined allowed time window; and
determining that the average rate of heart beats is consistent with a different heart rate; and
resetting the heart rate to correspond to the different heart rate.
15. A computer system comprising:
a processor; and
a non-transitory computer-readable storage medium storing instructions for adapting content based on a detected heart rate, the instructions when executed by the processor causing the processor to perform steps including:
presenting first content on a display device;
obtaining motion data from an inertial measurement device;
filtering the motion data to apply a smoothing function to the motion data to generate smoothed motion data;
detecting a heart beat based on identifying that the smoothed motion data includes movement within a predefined amplitude range;
identifying a heart rate based on the detected heart beat; and
presenting second content on the display device responsive to the detected heart rate falling within a predefined heart rate range.
16. The computer system of claim 15, wherein detecting the heart beat comprises identifying that the smoothed motion data includes a change in vertical position of the inertial measurement device within the predefined amplitude range.
17. The computer system of claim 15, wherein detecting the heart beat comprises identifying that the smoothed motion data comprises a change in pitch of the inertial measurement device about an axis parallel to a width of a human body within the predefined amplitude range.
18. The computer system of claim 15, wherein detecting the heart beat comprises identifying that the smoothed motion data includes a change in vertical position of the inertial measurement device within a first predefined amplitude range and identifying that the smoothed motion data comprises a change in pitch of the inertial measurement device about an axis parallel to a width of a human body within a second predefined amplitude range.
19. The computer system of claim 15, wherein identifying the heart rate comprises:
obtaining a baseline heart rate;
determining a sequence of time differences between consecutive detected heart beats;
determining if the sequence of time differences corresponds to an average rate within a
predefined percentage of the baseline heart rate; and
responsive to the sequence of time differences corresponding to the average rate being within the predefined percentage of the baseline heart rate, determining the heart rate from the average rate for a current time period.
20. The computer system of claim 15, wherein identifying the heart rate comprises:
obtaining a baseline heart rate;
determining a sequence of time differences between consecutive detected heart beats;
determining if the sequence of time differences corresponds to an average rate within a
predefined percentage of the baseline heart rate; and
responsive to the sequence of time differences corresponding to the average rate not being
within the predefined percentage of the baseline heart rate, determining that the heart rate corresponds to the baseline heart rate for a current time period.
21. A method for adapting content based on a detected breathing rate, the method comprising:
presenting first content on a display device;
capturing ambient audio using a microphone;
performing a frequency domain transformation on a current block of the ambient audio to
generate a frequency spectrum of the current block;
filtering the frequency spectrum to generate a filtered frequency spectrum limited to a
predefined frequency range associated with breathing noise;
identifying one or more peak frequencies in the filtered frequency spectrum;
detecting a breath by identifying a pattern of peak frequencies across a range of blocks that meets predefined criteria;
determining the breathing rate based on the detected breath and a history of previously detected breaths; and
presenting second content on the display device based on the detected breathing rate falling
within a predefined range.
22. The method of claim 21, wherein detecting the breath comprises:
identifying time windows of the captured ambient audio in which at least a threshold percentage of peak frequency amplitudes of the peak frequencies are within a predefined amplitude range.
23. The method of claim 21, wherein detecting the breath comprises:
identifying first time windows of the captured ambient audio in which at least a first threshold percentage of peak frequency amplitudes of the peak frequencies are within a first predefined amplitude range corresponding to an inhale;
identifying second time windows of the captured ambient audio in which at least a second
threshold percentage of peak frequency amplitudes of the peak frequencies are within a second predefined amplitude range corresponding to an exhale; and
detecting one of the first time windows and one of the second time windows occurring within a predefined time proximity.
24. The method of claim 21, wherein determining the breathing rate comprises detecting an average time between detected breaths that is within a predefined expected breathing rate range.
25. The method of claim 21, wherein filtering the frequency spectrum further comprises:
generating an adaptive noise spectrum as a weighted combination of a cumulative noise
spectrum and the frequency spectrum for the current block.
26. The method of claim 21, wherein identifying one or more peak frequencies in the filtered frequency spectrum comprises:
applying a smoothing filter to the one or more peak frequencies by averaging amplitudes of the one or more peak frequencies for the current block with amplitudes of peak frequencies for previous blocks.
27. The method of claim 21, wherein presenting the second content based on the detected breathing rate falling within a predefined range comprises:
detecting a heart rate;
combining the heart rate with the detected breathing rate to generate a relaxation state metric; and
selecting the second content based on the relaxation state metric.
28. A non-transitory computer-readable storage medium storing instructions for adapting content based on a detected breathing rate, the instructions when executed by a processor causing the processor to perform steps including:
presenting first content on a display device;
capturing ambient audio using a microphone;
performing a frequency domain transformation on a current block of the ambient audio to
generate a frequency spectrum of the current block;
filtering the frequency spectrum to generate a filtered frequency spectrum limited to a
predefined frequency range associated with breathing noise;
identifying one or more peak frequencies in the filtered frequency spectrum;
detecting a breath by identifying a pattern of peak frequencies across a range of blocks that meets predefined criteria;
determining the breathing rate based on the detected breath and a history of previously detected breaths; and
presenting second content on the display device based on the detected breathing rate falling within a predefined range.
29. The non-transitory computer-readable storage medium of claim 28, wherein detecting the breath comprises:
identifying time windows of the captured ambient audio in which at least a threshold percentage of peak frequency amplitudes of the peak frequencies are within a predefined amplitude range.
30. The non-transitory computer-readable storage medium of claim 28, wherein detecting the breath comprises:
identifying first time windows of the captured ambient audio in which at least a first threshold percentage of peak frequency amplitudes of the peak frequencies are within a first predefined amplitude range corresponding to an inhale;
identifying second time windows of the captured ambient audio in which at least a second
threshold percentage of peak frequency amplitudes of the peak frequencies are within a second predefined amplitude range corresponding to an exhale; and
detecting one of the first time windows and one of the second time windows occurring within a predefined time proximity.
31. The non-transitory computer-readable storage medium of claim 28, wherein determining the
breathing rate comprises detecting an average time between detected breaths that is within a predefined expected breathing rate range.
32. The non-transitory computer-readable storage medium of claim 28, wherein filtering the frequency spectrum further comprises:
generating an adaptive noise spectrum as a weighted combination of a cumulative noise
spectrum and the frequency spectrum for the current block.
33. The non-transitory computer-readable storage medium of claim 28, wherein identifying one or more peak frequencies in the filtered frequency spectrum comprises:
applying a smoothing filter to the one or more peak frequencies by averaging amplitudes of the one or more peak frequencies for the current block with amplitudes of peak frequencies for previous blocks.
34. The non-transitory computer-readable storage medium of claim 28, wherein presenting the second content based on the detected breathing rate falling within a predefined range comprises:
detecting a heart rate;
combining the heart rate with the detected breathing rate to generate a relaxation state metric; and
selecting the second content based on the relaxation state metric.
35. A computer system comprising:
a processor; and
a non-transitory computer-readable storage medium storing instructions for adapting content based on a detected breathing rate, the instructions when executed by the processor causing the processor to perform steps including:
presenting first content on a display device;
capturing ambient audio using a microphone;
performing a frequency domain transformation on a current block of the ambient audio to generate a frequency spectrum of the current block;
filtering the frequency spectrum to generate a filtered frequency spectrum limited to a predefined frequency range associated with breathing noise;
identifying one or more peak frequencies in the filtered frequency spectrum;
detecting a breath by identifying a pattern of peak frequencies across a range of blocks that meets predefined criteria;
determining the breathing rate based on the detected breath and a history of previously detected breaths; and
presenting second content on the display device based on the detected breathing rate falling within a predefined range.
36. The computer system of claim 35, wherein detecting the breath comprises:
identifying time windows of the captured ambient audio in which at least a threshold percentage of peak frequency amplitudes of the peak frequencies are within a predefined amplitude range.
37. The computer system of claim 35, wherein detecting the breath comprises:
identifying first time windows of the captured ambient audio in which at least a first threshold percentage of peak frequency amplitudes of the peak frequencies are within a first predefined amplitude range corresponding to an inhale;
identifying second time windows of the captured ambient audio in which at least a second
threshold percentage of peak frequency amplitudes of the peak frequencies are within a second predefined amplitude range corresponding to an exhale; and
detecting one of the first time windows and one of the second time windows occurring within a predefined time proximity.
38. The computer system of claim 35, wherein determining the breathing rate comprises detecting an average time between detected breaths that is within a predefined expected breathing rate range.
39. The computer system of claim 35, wherein filtering the frequency spectrum further comprises:
generating an adaptive noise spectrum as a weighted combination of a cumulative noise
spectrum and the frequency spectrum for the current block.
40. The computer system of claim 35, wherein identifying one or more peak frequencies in the filtered frequency spectrum comprises:
applying a smoothing filter to the one or more peak frequencies by averaging amplitudes of the one or more peak frequencies for the current block with amplitudes of peak frequencies for previous blocks.
41. A method for adapting content based on a detected breathing rate, the method comprising:
presenting first content on a display device;
obtaining motion data from an inertial measurement device;
filtering the motion data to apply a smoothing function to the motion data to generate smoothed motion data;
detecting a breath based on identifying that the smoothed motion data includes movement
constrained to one or more predefined amplitude ranges associated with breathing movement over a predefined time window;
identifying a breathing rate based on the detected breath; and
presenting second content on the display device responsive to the detected breathing rate falling within a predefined breathing rate range.
42. The method of claim 41, wherein detecting the breath comprises identifying that the smoothed motion data includes a change in vertical position of the inertial measurement device within the one or more predefined amplitude ranges.
43. The method of claim 42, wherein the one or more predefined amplitude ranges comprises a first range at a beginning of the time window and decreases to a second range at an end of the time window.
44. The method of claim 41, wherein detecting the breath comprises identifying that the smoothed
motion data comprises a change in pitch of the inertial measurement device about an axis parallel to a width of a human body within the one or more predefined amplitude ranges.
45. The method of claim 41, wherein detecting the breath comprises identifying that the smoothed
motion data includes a change in vertical position of the inertial measurement device within a first predefined amplitude range of the one or more predefined amplitude ranges and identifying that the smoothed motion data comprises a change in pitch of the inertial measurement device about an axis parallel to a width of a human body within a second predefined amplitude range of the one or more predefined amplitude ranges.
46. The method of claim 41, wherein detecting the breath comprises:
detecting an inhale by detecting a positive change in the smoothed motion data within the one or more predefined amplitude ranges; and
detecting an exhale by detecting a negative change in the smoothed motion data within the one or more predefined amplitude ranges that occurs within a predefined time proximity to the inhale.
47. The method of claim 41, wherein detecting the breath further comprises:
obtaining ambient audio from a microphone;
detecting a predicted occurrence of the breath based on the ambient audio; and
detecting the breath in response to the predicted occurrence of the breath based on the ambient audio substantially coinciding in time with the smoothed motion data including the movement constrained to the one or more predefined amplitude ranges.
48. A non-transitory computer-readable storage medium storing instructions for adapting content based on a detected breathing rate, the instructions when executed by a processor causing the processor to perform steps including:
presenting first content on a display device;
obtaining motion data from an inertial measurement device;
filtering the motion data to apply a smoothing function to the motion data to generate smoothed motion data;
detecting a breath based on identifying that the smoothed motion data includes movement
constrained to one or more predefined amplitude ranges associated with breathing movement over a predefined time window;
identifying a breathing rate based on the detected breath; and
presenting second content on the display device responsive to the detected breathing rate falling within a predefined breathing rate range.
49. The non-transitory computer-readable storage medium of claim 48, wherein detecting the breath comprises identifying that the smoothed motion data includes a change in vertical position of the inertial measurement device within the one or more predefined amplitude ranges.
50. The non-transitory computer-readable storage medium of claim 49, wherein the one or more
predefined amplitude ranges comprises a first range at a beginning of the time window and decreases to a second range at an end of the time window.
51. The non-transitory computer-readable storage medium of claim 48, wherein detecting the breath comprises identifying that the smoothed motion data comprises a change in pitch of the inertial measurement device about an axis parallel to a width of a human body within the one or more predefined amplitude ranges.
52. The non-transitory computer-readable storage medium of claim 48, wherein detecting the breath comprises identifying that the smoothed motion data includes a change in vertical position of the inertial measurement device within a first predefined amplitude range of the one or more predefined amplitude ranges and identifying that the smoothed motion data comprises a change in pitch of the inertial measurement device about an axis parallel to a width of a human body within a second predefined amplitude range of the one or more predefined amplitude ranges.
53. The non-transitory computer-readable storage medium of claim 48, wherein detecting the breath comprises:
detecting an inhale by detecting a positive change in the smoothed motion data within the one or more predefined amplitude ranges; and
detecting an exhale by detecting a negative change in the smoothed motion data within the one or more predefined amplitude ranges that occurs within a predefined time proximity to the inhale.
54. The non-transitory computer-readable storage medium of claim 48, wherein detecting the breath further comprises:
obtaining ambient audio from a microphone;
detecting a predicted occurrence of the breath based on the ambient audio; and
detecting the breath in response to the predicted occurrence of the breath based on the ambient audio substantially coinciding in time with the smoothed motion data indicating the movement constrained to the one or more predefined amplitude ranges.
55. A computer system comprising:
a processor; and
a non-transitory computer-readable storage medium storing instructions for adapting content based on a detected breathing rate, the instructions when executed by the processor causing the processor to perform steps including:
presenting first content on a display device;
obtaining motion data from an inertial measurement device;
filtering the motion data to apply a smoothing function to the motion data to generate smoothed motion data;
detecting a breath based on identifying that the smoothed motion data includes
movement constrained to one or more predefined amplitude ranges associated with breathing movement over a predefined time window;
identifying a breathing rate based on the detected breath; and
presenting second content on the display device responsive to the detected breathing rate falling within a predefined breathing rate range.
56. The computer system of claim 55, wherein detecting the breath comprises identifying that the smoothed motion data includes a change in vertical position of the inertial measurement device within the one or more predefined amplitude ranges.
57. The computer system of claim 56, wherein the one or more predefined amplitude ranges comprises a first range at a beginning of the time window and decreases to a second range at an end of the time window.
58. The computer system of claim 55, wherein detecting the breath comprises identifying that the smoothed motion data comprises a change in pitch of the inertial measurement device about an axis parallel to a width of a human body within the one or more predefined amplitude ranges.
59. The computer system of claim 55, wherein detecting the breath comprises identifying that the
smoothed motion data includes a change in vertical position of the inertial measurement device within a first predefined amplitude range of the one or more predefined amplitude ranges and identifying that the smoothed motion data comprises a change in pitch of the inertial
measurement device about an axis parallel to a width of a human body within a second predefined amplitude range of the one or more predefined amplitude ranges.
60. The computer system of claim 55, wherein detecting the breath comprises:
detecting an inhale by detecting a positive change in the smoothed motion data within the one or more predefined amplitude ranges; and
detecting an exhale by detecting a negative change in the smoothed motion data within the one or more predefined amplitude ranges that occurs within a predefined time proximity to the inhale.
PCT/US2019/019241 2018-02-22 2019-02-22 Adapting media content to a sensed state of a user WO2019165271A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP19757015.3A EP3755210A4 (en) 2018-02-22 2019-02-22 Adapting media content to a sensed state of a user

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862634120P 2018-02-22 2018-02-22
US62/634,120 2018-02-22

Publications (1)

Publication Number Publication Date
WO2019165271A1 true WO2019165271A1 (en) 2019-08-29

Family

ID=67617837

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/019241 WO2019165271A1 (en) 2018-02-22 2019-02-22 Adapting media content to a sensed state of a user

Country Status (3)

Country Link
US (2) US11294464B2 (en)
EP (1) EP3755210A4 (en)
WO (1) WO2019165271A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2583117B (en) * 2019-04-17 2021-06-30 Sonocent Ltd Processing and visualising audio signals
US20220192622A1 (en) * 2020-12-18 2022-06-23 Snap Inc. Head-wearable apparatus for breathing analysis
US20230149611A1 (en) * 2021-11-18 2023-05-18 Fresenius Medical Care Holdings, Inc. Wetness detector with integrated inertial measurement unit configured for use in a dialysis system

Citations (3)

Publication number Priority date Publication date Assignee Title
WO2010067297A1 (en) 2008-12-11 2010-06-17 Koninklijke Philips Electronics N.V. Method and apparatus for the analysis of ballistocardiogram signals
US20170020398A1 (en) * 2015-07-22 2017-01-26 Quicklogic Corporation Heart rate monitor
US20170027523A1 (en) 2012-06-22 2017-02-02 Fitbit, Inc. Wearable heart rate monitor

Family Cites Families (18)

Publication number Priority date Publication date Assignee Title
JPH11253572A (en) * 1998-03-09 1999-09-21 Csk Corp Practicing device for health improvement
CN101437442B (en) 2006-03-06 2011-11-16 森赛奥泰克公司 Ultra wideband monitoring systems and antennas
US8831732B2 (en) 2010-04-29 2014-09-09 Cyberonics, Inc. Method, apparatus and system for validating and quantifying cardiac beat data quality
US20130338460A1 (en) * 2012-06-18 2013-12-19 David Da He Wearable Device for Continuous Cardiac Monitoring
US9872968B2 (en) 2013-04-17 2018-01-23 Sri International Biofeedback virtual reality sleep assistant
JP6347097B2 (en) * 2013-10-07 2018-06-27 セイコーエプソン株式会社 Portable device and heartbeat arrival time measurement control method
US20150313484A1 (en) * 2014-01-06 2015-11-05 Scanadu Incorporated Portable device with multiple integrated sensors for vital signs scanning
US20150265161A1 (en) 2014-03-19 2015-09-24 Massachusetts Institute Of Technology Methods and Apparatus for Physiological Parameter Estimation
US20160029968A1 (en) 2014-08-04 2016-02-04 Analog Devices, Inc. Tracking slow varying frequency in a noisy environment and applications in healthcare
US10105092B2 (en) 2015-11-16 2018-10-23 Eight Sleep Inc. Detecting sleeping disorders
US10188345B2 (en) * 2016-02-12 2019-01-29 Fitbit, Inc. Method and apparatus for providing biofeedback during meditation exercise
US10722182B2 (en) * 2016-03-28 2020-07-28 Samsung Electronics Co., Ltd. Method and apparatus for heart rate and respiration rate estimation using low power sensor
US10426411B2 (en) * 2016-06-29 2019-10-01 Samsung Electronics Co., Ltd. System and method for providing a real-time signal segmentation and fiducial points alignment framework
EP3652744A4 (en) * 2017-07-13 2020-07-08 Smileyscope Pty. Ltd. Virtual reality apparatus
US10667724B2 (en) * 2017-08-09 2020-06-02 Samsung Electronics Co., Ltd. System and method for continuous background heartrate and heartbeat events detection using a motion sensor
US10617311B2 (en) * 2017-08-09 2020-04-14 Samsung Electronics Co., Ltd. System and method for real-time heartbeat events detection using low-power motion sensor
CN111148467A (en) * 2017-10-20 2020-05-12 明菲奥有限公司 System and method for analyzing behavior or activity of an object
US11600365B2 (en) * 2017-12-12 2023-03-07 Vyaire Medical, Inc. Nasal and oral respiration sensor

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
WO2010067297A1 (en) 2008-12-11 2010-06-17 Koninklijke Philips Electronics N.V. Method and apparatus for the analysis of ballistocardiogram signals
US20170027523A1 (en) 2012-06-22 2017-02-02 Fitbit, Inc. Wearable heart rate monitor
US20170020398A1 (en) * 2015-07-22 2017-01-26 Quicklogic Corporation Heart rate monitor

Non-Patent Citations (1)

Title
See also references of EP3755210A4

Also Published As

Publication number Publication date
EP3755210A4 (en) 2021-06-23
EP3755210A1 (en) 2020-12-30
US11294464B2 (en) 2022-04-05
US20190258315A1 (en) 2019-08-22
US20220221935A1 (en) 2022-07-14

Similar Documents

Publication Publication Date Title
US20220221935A1 (en) Adapting Media Content to a Sensed State of a User
US11389084B2 (en) Electronic device and method of controlling same
EP3868293B1 (en) System and method for monitoring pathological breathing patterns
US9557814B2 (en) Biometric interface for a handheld device
Zhao et al. Towards low-cost sign language gesture recognition leveraging wearables
Burba et al. Unobtrusive measurement of subtle nonverbal behaviors with the Microsoft Kinect
KR101693951B1 (en) Method for recognizing gestures and gesture detector
CN106796452B (en) Head-mounted display apparatus and its control method, computer-readable medium
JP2015207285A (en) System and method for producing computer control signals from breath attributes
CN104023802B (en) Use the control of the electronic installation of neural analysis
CN107174824B (en) Special effect information processing method and device, electronic equipment and storage medium
KR20160039298A (en) Authenticated gesture recognition
TW201446216A (en) Optical heartrate tracking
TW201322143A (en) Optical input device, input detection method and method applied to the optical input device
US20220276707A1 (en) Brain-computer interface
WO2018009844A1 (en) Methods and apparatus to determine objects to present in virtual reality environments
CN107430856A (en) Information processing system and information processing method
KR101553484B1 (en) Apparatus for detecting hand motion and method thereof
KR20190058289A (en) Detecting respiratory rates in audio using an adaptive low-pass filter
US11543892B2 (en) Touch pressure input for devices
KR20220140498A (en) human interface system
US11093044B2 (en) Method for detecting input using audio signal, and electronic device therefor
KR102361994B1 (en) Battle game system
Yang et al. MAF: Exploring Mobile Acoustic Field for Hand-to-Face Gesture Interactions
KR20200042076A (en) Digital Breathing Stethoscope Method Using Skin Image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (ref document number: 19757015; country of ref document: EP; kind code of ref document: A1)
NENP Non-entry into the national phase (ref country code: DE)
ENP Entry into the national phase (ref document number: 2019757015; country of ref document: EP; effective date: 2020-09-22)