WO2019136395A1 - System and method for monitoring life signs of a person - Google Patents

System and method for monitoring life signs of a person Download PDF

Info

Publication number
WO2019136395A1
Authority
WO
WIPO (PCT)
Prior art keywords
signals
breathing
person
indicative
camera
Prior art date
Application number
PCT/US2019/012568
Other languages
French (fr)
Inventor
Eric Gregory White
David Robert Abrams
Federico Guerrero-Reyes
Original Assignee
Miku, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/239,501 external-priority patent/US20190139389A1/en
Application filed by Miku, Inc. filed Critical Miku, Inc.
Priority to CA3087705A priority Critical patent/CA3087705A1/en
Priority to CN201980011948.8A priority patent/CN111937047A/en
Priority to MX2020007058A priority patent/MX2020007058A/en
Publication of WO2019136395A1 publication Critical patent/WO2019136395A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0002 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B5/0015 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
    • A61B5/0022 Monitoring a patient using a global network, e.g. telephone networks, internet
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024 Detecting, measuring or recording pulse rate or heart rate
    • A61B5/02416 Detecting, measuring or recording pulse rate or heart rate using photoplethysmograph signals, e.g. generated by infrared radiation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/0507 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves using microwaves or terahertz waves
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/0816 Measuring devices for examining respiratory frequency
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1113 Local tracking of patients, e.g. in a hospital or private home
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/04 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0407 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis
    • G08B21/0415 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis detecting absence of activity per se
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/04 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0438 Sensor means for detecting
    • G08B21/0492 Sensor dual technology, i.e. two or more technologies collaborate to extract unsafe condition, e.g. video tracking and RFID tracking
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00 Evaluating a particular growth phase or type of persons or animals
    • A61B2503/04 Babies, e.g. for SIDS detection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2562/00 Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
    • A61B2562/02 Details of sensors specially adapted for in-vivo measurements
    • A61B2562/0204 Acoustic sensors
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2562/00 Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
    • A61B2562/02 Details of sensors specially adapted for in-vivo measurements
    • A61B2562/0228 Microwave sensors
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2576/00 Medical imaging apparatus involving image processing or analysis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4806 Sleep evaluation
    • A61B5/4818 Sleep apnoea
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/04 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0438 Sensor means for detecting
    • G08B21/0476 Cameras to detect unsafe condition, e.g. video cameras
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Definitions

  • the present invention relates to monitoring equipment that can monitor the life signs of a person as that person sleeps.
  • the present invention relates to monitoring equipment that monitors life signs using low-energy radar, cameras, and/or microphones.
  • monitoring systems that are designed to monitor various life signs. For example, in an intensive care unit of a hospital, patients are attached to heart rate monitors, blood pressure monitors, blood oxygen monitors and the like. Should any of these monitors detect a condition outside an acceptable threshold, an alarm is sounded.
  • the various sensors are typically wired sensors that are attached directly to the body. This makes the sensors very accurate and resistant to interfering signal noise from outside sources.
  • Monitoring devices are also used in a variety of ways outside of a hospital. For instance, parents often use baby monitors to monitor their children when they sleep. Such monitoring typically occurs from the time the child is an infant until the child is old enough to not need a crib. The monitoring is performed for many reasons. Infants are susceptible to Sudden Infant Death Syndrome (SIDS). As infants grow and begin to move, they also face dangers from accidental strangulation and choking. Once the child is old enough to stand and climb, the child faces dangers from falling and entrapment. Monitoring is also used on adults, such as those who have sleep apnea or those who have a high risk of mortality due to disease or age.
  • SIDS: Sudden Infant Death Syndrome
  • the most common wireless monitoring system is a camera and microphone system, commonly referred to as a baby monitor. These devices are placed in the room and are directed toward a crib or bed. The baby monitor transmits images of the crib or bed, along with any detected audio signals, to a remote receiver. A person viewing the display of the receiver can view any movement in the crib or bed and can hear if the occupant of the crib or bed is crying or making any sounds of distress.
  • the disadvantages of a traditional baby monitor system are obvious.
  • the baby monitor only detects movement and sound. If an infant has a SIDS event, there may be no movement or sound. Likewise, if an adult passes away while sleeping, there may be no movement or sound.
  • Radar-based monitoring systems also have some disadvantages. Even if a directional antenna is used, radar energy propagates from the antenna in all directions. This creates an omni-directional area of coverage. As a consequence, the radar system can detect movement from objects, pets, and people well away from the crib or bed being monitored.
  • Movement from non-target objects, pets and people can be wrongly interpreted as movement within the crib or bed by the monitoring system. Accordingly, if a person stops breathing, the falsely detected movements can delay or prevent the danger from being detected.
  • monitoring systems have been developed that are hybrids of traditional camera baby monitors and low-energy radar monitors .
  • Such prior art systems monitor a person in a crib or bed with both a camera and a radar transceiver.
  • the outputs of the camera system and the radar system are not cross-correlated. Rather, if the radar system detects an alarm condition, the camera system is merely there to see if the alarm is a false alarm. If the person being monitored stops breathing and the radar fails to detect the condition due to false returns, the camera system will not detect the danger.
  • the signals captured by a radar system and/or a camera system that contain relevant data can easily be washed out by noise and signals that contain irrelevant data. For instance, the chest movements of a sleeping infant wrapped in a tight blanket are very small. Detecting such movements using low energy radar and/or a camera is difficult. The movements caused by breathing are buried in signals caused by body movements, signals caused by movements in the surrounding environment, and signal noise.
  • monitoring system that can monitor a person in a crib or bed by detecting even the smallest movement caused by breathing.
  • a need also exists for such a system that can separate useful signals from noise and irrelevant signals to produce a more reliable system with fewer false alarms.
  • a need also exists for such a system that can analyze signals in real time without having to perform signal analysis at a remote location.
  • the present invention is a system and method for wirelessly monitoring a person.
  • the system can detect breathing, or the lack thereof, in a subject person, such as an infant in a crib or an adult with sleep apnea.
  • the system and method can detect breathing using radar signals, camera signals and/or microphone signals.
  • radar signals are directed toward an area in which the subject person is sleeping.
  • the radar signals reflect from the subject person, therein creating reflected radar signals.
  • Contained within the reflected radar signals is data that references the rhythmic movements of breathing and/or the beating heart.
  • a camera is directed toward the area in which the subject person is sleeping.
  • the camera detects movements of the subject person. Contained within the detected movements are movements caused by rhythmic breathing and/or the beating heart.
  • At least one microphone also monitors the area of the subject person.
  • the microphone detects sounds made by the subject person. Contained within the detected sounds are the sounds caused by rhythmic breathing.
  • the reflected radar signals, the signals from the camera, and the signals from the microphone are fused to determine if the subject person is moving, and if not moving, if the person is breathing or not-breathing. An alarm is generated should the reflected radar signals, the camera signals and the sound signals all simultaneously indicate no movement and no breathing of the subject person.
  • FIG. 1 shows an exemplary embodiment of the present invention monitoring system
  • FIG. 2 shows a schematic of the monitoring unit used by the present invention monitoring system
  • FIG. 3 shows a logic diagram that illustrates the operations performed within the monitoring unit
  • FIG. 4 is a block diagram that shows operational details of the microphone feature extraction process referenced in Fig. 3
  • FIG. 5 shows a screen and indicates regions that can be segmented by a user
  • FIG. 6 is a block diagram that shows operational details of the camera feature extraction process referenced in Fig. 3
  • FIG. 7 is a block diagram that shows operational details of the radar signal filtering process referenced in Fig. 3
  • FIG. 8 is a block diagram that shows operational details of the radar feature extraction process referenced in Fig. 3
  • FIG. 9 shows an exemplary screen produced by a computing device that is used to interface with the monitoring unit of Fig. 2 and a user.
  • the present invention monitoring system can be used in many institutional settings, such as hospitals and nursing homes, the system is particularly well suited for in-home use.
  • monitoring system is selected for the purposes of description and illustration that shows the present invention being used in a home to monitor a person in a bed or crib.
  • the illustrated embodiment is merely exemplary and should not be considered a limitation when interpreting the scope of the appended claims.
  • the monitoring system 10 includes a monitoring unit 12.
  • the monitoring unit 12 is placed in a room and is directed toward a subject person 14, such as a child in a crib or an adult in bed.
  • the monitoring unit 12 can actively emit light 16, radar signals 18 and audio signals 20.
  • the light 16 emitted is preferably in the infrared spectrum so as not to be visible to the subject person 14.
  • the emitted radar signals 18 are low-energy signals that are harmless to the subject person 14 and any other sensitive electronic equipment, such as a pacemaker.
  • the emitted audio signals 20 are audible to the subject person 14 being monitored.
  • the audio signals 20 can be music, an alarm or the transmitted voice of another person.
  • the monitoring unit 12 receives light 22, reflected radar signals 24 and ambient sounds 26.
  • the light 22 received includes existing ambient light and light returned from any illumination projected by the monitoring unit 12.
  • the reflected radar signals 24 are the returns from the radar emitted by the monitoring unit 12.
  • the ambient sounds 26 are any audible sounds detected by the monitoring unit 12.
  • the light 22, reflected radar signals 24 and ambient sounds 26 received by the monitoring unit 12 are all internally processed.
  • the monitoring unit 12 uses circuitry and processing software to specifically extract features that are associated with the breathing of the subject person 14.
  • the monitoring unit 12 processes the light 22, reflected radar signals 24, and ambient sounds 26 in real time.
  • the processed information can be accessed by a remote computing device 28, such as a smart phone, running the application software 30 needed to display the processed signal information.
  • the processed signals can be shared directly with the remote computing device 28 or can be forwarded to the remote computing device 28 through a data network 32, such as a cellular network or the Internet.
  • An observer 34, such as a parent or nurse, can view the remote computing device 28 and receive the processed information. As will later be explained, the processed information is formatted in a user-friendly manner. Likewise, if an alarm condition is detected by the monitoring unit 12, the observer 34 is instantly informed.
  • the observer 34 can communicate with the monitoring unit 12 and cause the monitoring unit 12 to broadcast music or words that can be heard by the subject person 14 being monitored.
  • a subject person 14 who is agitated can be pacified and a subject person 14 in distress can be comforted until help arrives on scene.
  • the monitoring unit 12 contains a camera 36 for imaging the sleeping area in a crib, bed, bassinet or the like.
  • the camera 36 preferably has the ability to image the visible light spectrum and at least some of the infrared spectrum. In this manner, the camera 36 can image in daylight and in the dark.
  • the camera 36 has an objective lens 38.
  • the objective lens 38 is directed in a particular direction that is shown by line 40.
  • the objective lens 38 of the camera 36 is directed toward the subject person 14 being monitored.
  • the light 22 captured by the camera 36 is converted into camera data 42 that is processed in a manner later described.
  • One or more LEDs 44 may be provided for illuminating the subject person 14 being monitored.
  • the LEDs 44 are preferably IR LEDs that produce light that can be detected by the camera 36 but not by the eyes of the subject person 14 being monitored.
  • a radar transceiver 46 is provided. Although different radars can be used, the radar transceiver 46 is preferably a low powered pulse Doppler radar. In this manner, the radar transceiver 46 can detect both velocity and range.
  • the radar transceiver 46 is configured to have its greatest range in a particular direction 48.
  • the direction 48 of greatest range is parallel to the directional line 40 of the camera 36.
  • the radar transceiver 46 covers the same area as is being imaged by the camera 36. This causes the radar transceiver 46 to be more sensitive in the direction of the subject area.
  • the radar transceiver 46 emits radar signals 18 covering the subject area and detects reflected radar signals 24 that return.
  • the reflected radar signals 24 are detected by the radar transceiver 46 and are converted into radar data 50.
  • the radar data 50 is processed in a manner that is later described.
  • One or more microphones 52 are provided as part of the monitoring unit 12. Preferably, at least two microphones 52 are used. The microphones 52 are oriented toward the subject area targeted by the camera 36 and radar transceiver 46. In this manner, any ambient sounds 26 originating within the subject area will be detected by the microphones 52. The microphones 52 produce audio data 54. The audio data 54 is processed in a manner that is later described.
  • a computing device 56 receives the camera data 42, the radar data 50 and the audio data 54.
  • the computing device 56 contains a clock 58 that enables the data to be indexed by time.
  • the computing device 56 can have a high capacity memory 60 or access to cloud memory 33 through the data network 32 so that large caches of time indexed data can be stored for later review.
  • the computing device 56 can exchange data with outside sources using a Bluetooth® transceiver 62 and/or a WiFi transceiver 64. Other data transmission systems can also be used, such as a cellular network transmission and/or a hardwire connection.
  • the computing device 56 also controls one or more speakers 66.
  • the speakers 66 can broadcast audio signals 20 into the environment of the monitoring unit 12.
  • the broadcast audio signals 20 can be soothing music that can lull a child to sleep or a piercing alarm that can bring help.
  • the computing device 56 is also connected to a user interface 68.
  • the user interface 68 contains an on/off switch 70 for the monitoring unit 12 and may contain status lights and sensitivity controls that can be manually adjusted by a user.
  • the computing device 56 is programmable and runs specialized operational software 72.
  • the operational software 72 is capable of being periodically updated with programming updates received through the Bluetooth® transceiver 62, the WiFi transceiver 64, or other data transmission system.
  • the computer system 56 receives the audio data 54 from the microphones 52, the camera data 42 from the camera 36, and the radar data 50 from the radar transceiver 46. This data is analyzed by the computing system 56 using the operational software 72. The purpose of the analysis is to first determine if the subject person 14 is within the area being monitored. If the subject person 14 is in the monitored area, it will then extract features from within the audio data 54, the camera data 42 and the radar data 50 and determine if they are attributable to the breathing of the subject person 14. These features are then monitored for change. If the features indicate that the subject person 14 has stopped breathing, then an alarm is generated.
  • the processing of the audio data 54 from the microphones 52 is first described. Both the sounds of crying and the sounds of breathing can be detected in the audio data 54. Detecting the sounds of crying can be accomplished using known sound processing techniques, such as those described in U.S. Patent No. 9,020,622 to Shoham. What is far more intricate is effectively isolating the features in the audio data 54 that correspond to the delicate sounds of breathing. To isolate the sounds of breathing, the audio data 54 from the microphones 52 is initially filtered. See Block 80.
  • the filtering may include directional filtering, which may eliminate some sound signals that do not originate in the subject area.
  • the directional filtering is optional.
  • the ambient sound signals 26 are filtered in an attempt to isolate the sounds of breathing from other environmental noises.
  • the required filtering includes subjecting the audio data 54 to a low pass filter 81. This attenuates signals with frequencies that are too high to represent breathing.
  • after the audio data 54 is initially filtered, it is further processed to extract desired features, which in this case are the sounds of breathing. See Block 82.
  • the details of the feature extraction process are shown in Fig. 4.
  • a filtered audio signal 74 is obtained after the raw audio data 54 is filtered in the filtering process of Block 80.
  • the goal of the feature extraction process is to extract a breathing waveform from the filtered audio signal 74.
  • the filtered audio signal 74 is resampled with a reduction factor. See Block 84.
  • a preferred reduction factor for the resampling is 1/1000; however, other reduction factors can be used.
  • the resampled audio data is then compressed using an arctan function. See Block 86.
  • the compressed audio data is then subjected to a fast Fourier transform to find the occurrences of max-peak signal events. See Block 88. These max-peak events correspond to the breathing waveform of interest.
  • the resulting breathing frequency waveform 76 is later used in a group classification process. See Block 90.
  • the camera data 42 is also processed by the computing system 56.
  • the camera data 42 is initially subjected to area segmentation. See Block 92.
  • the person setting the monitoring unit 12 in place directs the camera 36 toward a crib or bed in the subject area.
  • the camera 36 has a field of view 93 that is imaged.
  • a person looking at the image of the camera 36 can also manage the image within the field of view 93.
  • the subject area 91 is selected as the area into which the subject person 14 is most likely located.
  • the subject area 91 is typically the area of the crib mattress or bed mattress.
  • the area surrounding the subject area 91 is then defined as the visitor area 95. This segmentation process is used to distinguish between movements that may be attributable to the subject person 14 and all other detected movements.
  • the computer system 56 will only consider data that originates from within the selected subject area 91.
  • the camera data 42 is further processed to extract desired features, which in this case are the movements associated with breathing. See Block 94.
  • the camera data 42 contains various frame images.
  • the frames undergo a color space transform that changes the images from color to grayscale. See Block 97. This reduces the amount of processing needed to analyze the images, therein increasing response time for the system.
  • the grayscale image frames are then stored in a circular buffer 99.
  • the grayscale image frames are analyzed by the computing system 56 to determine movement. It will be understood that in order to analyze movement, image frames from different time points are compared.
  • the image capture rate is dependent on the framerate of the camera 36. Many cameras that are compatible with the system have a framerate that would require a capture rate of one frame per every ten to twenty-five frames. See Block 96.
  • as indicated by Block 98, subsequent captured frames are compared, where the difference between image frames is the sum of the first frame minus the subsequent frame at the delay. Any differences in the image frames are indicative of movement that has occurred during the time of the delayed capture rate.
  • rhythmic patterns of movement are detected.
  • a fast Fourier transform is used so that the max-peak signal events that represent rhythmic movements can be isolated.
  • These rhythmic patterns of movement are distinguishable over random periods of body movement.
  • the rhythmic patterns correlate to movements caused by breathing and/or the beating heart.
  • the result is a camera derived breathing waveform or heartbeat waveform that is later used in a group classification process. See Block 104.
  • the computing system 56 also analyzes the reflected radar signals 24 in an attempt to detect movements associated with breathing and/or the beating heart.
  • the reflected radar signals 24 are initially filtered, as indicated by Block 106.
  • the signals are fed into a circular buffer 108.
  • the incoming reflected radar signals 24 are phase bounded and need to undergo phase unwrapping and phase bounding. See Block 110 and Block 112.
  • the unwrapped signals are passed through an exponential high pass filter so that the waveform is zero centered. See Block 114.
  • the unwrapped, high-pass filtered data is then subjected to a moving average low pass filter to smooth the data. See Block 116. This creates the filtered data 118. (A sketch of this filtering chain appears after this list.)
  • the targeted features are extracted from the filtered data 118.
  • the targeted features are the returns that correspond to movement caused by breathing and/or the beating heart. See Block 120 in Fig. 3.
  • the filtered radar return data 118 is arranged in bin buffers 122. Once a bin buffer 122 is full, the root mean square is calculated. See Block 124. With the root mean square of each bin buffer known, a signal-to-noise ratio is calculated. See Block 126. A true signal-to-noise ratio cannot be directly calculated without a-priori knowledge of the signal. As such, certain assumptions are made about the signal.
  • the breathing rate of the person being monitored is assumed to be between 15 breaths per minute and 60 breaths per minute. This translates to a breathing rate of between 0.25 Hz and 1 Hz. If the heartbeat is being detected, a slightly higher rate is utilized.
  • a fast Fourier transform is implemented to change the waveform from a time domain to a frequency domain. See Block 128. This creates a transformed waveform 130.
  • a window is applied to each time frame of the bin buffer 122 to prevent wrapping boundary effects. Using the transformed waveform 130, the maximum spectral magnitude and its corresponding frequency are identified.
  • the transformed waveform 130 contains both useful signals and noise. These aspects must be separated. See Block 132.
  • the fundamental frequency of the subject's breathing rate is determined by calculating the maximum component of the fast Fourier transform in the assumed breathing rate frequency of 0.25 Hz to 1.0 Hz. From the maximum component, the waveform is walked left and right until it reaches thirty percent (30%) of its peak value. The bandwidth at this selected value is defined as the bandwidth of the signal. The remainder of the waveform is designated as noise. If the peak value is found to be near the waveform extremes, i.e. frequency equal to zero or equal to FT length/2, then the peak is considered invalid and a subsequent bin buffer is analyzed. Likewise, if another high value is found within the fast Fourier transform range that is larger than the originally calculated peak value, then the peak value is considered invalid and another bin buffer is analyzed. (A sketch of this peak-and-bandwidth search appears after this list.)
  • the data from the various buffer bin analyses is then correlated in a bin correlation step and then aggregated in a bin aggregation step. See Block 136 and Block 138 in Fig. 3.
  • in bin correlation, the waveform corresponding to each processed buffer bin is digitized into "1's" and "0's". This is accomplished by setting a threshold and assigning "1's" to values above the threshold and "0's" to values under the threshold. This creates groups. Subsequent groups are then compared using an XOR comparator. The results are saved to a correlation matrix. Changes in the field are quickly identified due to the simple comparison and corresponding rapid processing time. (A sketch of this digitize-and-XOR comparison appears after this list.)
  • the area of correlated data identified in the radar field is used to identify the location of the monitored person in the radar field. Once the location of the person is known, the analysis of the radar return data 50 can be limited to the returns from the identified area.
  • the bins identified as containing the data from the person being monitored are grouped.
  • the range bin having the maximum signal-to-noise ratio is identified.
  • each range bin is analyzed to sum the signal-to-noise ratio if the signal-to-noise ratio exceeds a percentage of the group's maximum signal-to-noise ratio.
  • Subsequent groups are analyzed to determine how well they match the first group. The group with the highest match score is selected as the next group.
  • the data attributable to the person being monitored is isolated and the signal-to-noise ratio is known. Using these variables, the radar derived breathing waveform 140 can be isolated that most probably represents the rhythmic breathing of the person being monitored. See Block 139.
  • the microphone derived breathing waveform 76, the camera derived breathing waveform 104 and the radar derived breathing waveform 140 are known.
  • the waveforms 76, 104, 140 are then classified using a group classification process. See Block 90.
  • the three classifications used in the present invention system are breathing, no-movement, and movement. All groups are defaulted to the no-movement state. If any waveform from any source indicates breathing, then the net result of the whole group is set to breathing. Likewise, if any waveform from any source indicates movement of the person being monitored, then the net result of the whole group is set to movement. However, if all sources indicate a state of no-movement for a selected period of time, an alarm condition occurs. (A sketch of this classification logic appears after this list.)
  • the sensitivity of the system can be controlled by controlling waveform thresholds and applying validation.
  • the validation is used to reduce the occurrences of false alarms.
  • the default state is the state of no-movement, which is the alarm state.
  • the existing state can only be changed if the new state persists for a selected period of time.
  • the period of time is adjustable and is preferably between 1 and 10 seconds.
  • the default no-movement state is replaced with either the breathing state or the movement state.
  • the no-movement state will not be reinstated until breathing or movement is not detected for the duration of the threshold time period.
  • as indicated by Block 144, if the no-movement state is recognized for the threshold time period, then an alarm is sent.
  • the remote computing device 28 of the observer 34 will provide the observer 34 with both an audio and visual alarm.
  • the observer 34 can live-stream the camera data 42 and the audio data 54. From this, the user may be able to quickly ascertain that the alarm is a false alarm. For instance, the observer 34 may be able to see that the subject person 14 being monitored simply woke up and left the monitored area of the crib or bed.
  • the observer 34 has certain options. First, the observer 34 can cause the monitoring unit 12 to sound a loud audible alarm. This may be able to startle a sleeping person into breathing.
  • the observer 34 can stream live audio to the monitoring unit 12. This will enable the observer 34 to speak to the person being monitored and hopefully can be used to rouse the person back to conscious breathing.
  • an exemplary screen 150 is shown that illustrates what a user can see on his/her remote computing device 28.
  • the screen 150 shows a live feed of the camera data 42.
  • various icons 152 are displayed on the screen 150.
  • the observer 34 can elect to hear the audio feed from the monitoring unit 12, send an audio feed to the monitoring unit 12, and/or sound an alarm.
  • Optional icons such as autodialing of 911 and the like can also be included.
  • the observer 34 can see a reproduction of a breathing waveform 154.
  • the breathing waveform 154 can be the microphone derived breathing waveform 76, the camera derived breathing waveform 104, the radar derived breathing waveform 140, or a composite of any combination.
  • the status 156 of the current state is shown, that is, the state of movement, breathing or no-breathing.
  • the current state is also shown along with a time indication 158 that indicates the duration of that state. For example, in Fig. 9, the status 156 indicates a moving state and shows that the state has remained for the past 15 minutes. This may be an indication that the subject person 14 is awake.
  • the observer 34 can select to transmit his/her voice to the monitoring unit 12 in an attempt to quiet or assure the subject person 14.
  • the observer 34 may elect to transmit music or a recorded story to the monitoring unit 12 to help the subject person 14 fall back to sleep.
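
The radar processing and group classification steps outlined in the definitions above can be made concrete with a few short sketches. Everything below is an illustrative reconstruction from the text, not the patent's actual implementation: the function and class names, sampling rates, filter constants, and thresholds are assumptions, and NumPy-style Python is used only as a convenient notation.

First, a minimal sketch of the radar filtering chain of Blocks 106-116: the phase-bounded returns are unwrapped, an exponential high pass filter zero-centers the waveform, and a moving average low pass filter smooths it.

```python
import numpy as np

def condition_radar_phase(phase: np.ndarray,
                          alpha: float = 0.99,   # assumed EMA constant
                          window: int = 8) -> np.ndarray:  # assumed span
    # Blocks 110/112: unwrap the phase-bounded radar returns.
    unwrapped = np.unwrap(phase)

    # Block 114: exponential high pass filter -- subtract a slowly
    # tracking exponential moving average so the waveform is zero centered.
    baseline = np.empty_like(unwrapped)
    baseline[0] = unwrapped[0]
    for i in range(1, len(unwrapped)):
        baseline[i] = alpha * baseline[i - 1] + (1 - alpha) * unwrapped[i]
    centered = unwrapped - baseline

    # Block 116: moving average low pass filter to smooth the data,
    # producing the filtered data 118.
    kernel = np.ones(window) / window
    return np.convolve(centered, kernel, mode="same")
```

Next, a sketch of the per-bin signal/noise separation of Blocks 124-132: the fast Fourier transform of a full bin buffer is searched for its maximum component in the assumed 0.25-1.0 Hz breathing band, the waveform is walked left and right to 30% of the peak to define the signal bandwidth, and the rest of the spectrum is treated as noise. Peaks at the spectrum extremes, or spectra containing a larger competing peak, are rejected so that a subsequent bin buffer is analyzed instead.

```python
import numpy as np

def bin_snr(window: np.ndarray, fs: float):
    """Return (peak frequency, SNR estimate) for one bin buffer, or None."""
    # A window is applied to the time frame to prevent wrapping boundary
    # effects, then the data is moved to the frequency domain (Block 128).
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)

    # Maximum component within the assumed breathing band.
    band = (freqs >= 0.25) & (freqs <= 1.0)
    peak_idx = np.flatnonzero(band)[np.argmax(spectrum[band])]
    peak = spectrum[peak_idx]

    # Reject peaks at the waveform extremes (frequency 0 or FT length/2)
    # or spectra whose largest value lies outside the band.
    if peak_idx <= 0 or peak_idx >= len(spectrum) - 1:
        return None
    outside = spectrum[~band]
    if outside.size and outside.max() > peak:
        return None

    # Walk left and right until the magnitude falls to 30% of the peak;
    # this span is the signal bandwidth, the remainder is noise.
    lo = peak_idx
    while lo > 0 and spectrum[lo] > 0.3 * peak:
        lo -= 1
    hi = peak_idx
    while hi < len(spectrum) - 1 and spectrum[hi] > 0.3 * peak:
        hi += 1

    signal_power = np.sum(spectrum[lo:hi + 1] ** 2)
    noise_power = np.sum(spectrum ** 2) - signal_power
    return freqs[peak_idx], signal_power / max(noise_power, 1e-12)
```

The bin correlation of Block 136 digitizes each processed bin's waveform against a threshold and compares successive groups with an XOR, which is why changes in the field are identified so cheaply. A minimal sketch:

```python
import numpy as np

def digitize(waveform: np.ndarray, threshold: float) -> np.ndarray:
    # Values above the threshold become 1's, values under it become 0's.
    return (waveform > threshold).astype(np.uint8)

def group_change(prev_bits: np.ndarray, next_bits: np.ndarray) -> float:
    # XOR is 1 wherever the two digitized groups disagree; the mean
    # disagreement is what gets saved to the correlation matrix.
    return float(np.mean(np.bitwise_xor(prev_bits, next_bits)))
```

Finally, a sketch of the group classification and validation of Blocks 90 and 139-144. The three states are breathing, movement, and no-movement; no-movement is the default (and alarm) state, and a new state must persist for the adjustable hold period (preferably 1 to 10 seconds) before it replaces the current one. The separate alarm timeout shown here is an assumption about how "a selected period of time" of no-movement might be tracked.

```python
import time

BREATHING, MOVEMENT, NO_MOVEMENT = "breathing", "movement", "no-movement"

class GroupClassifier:
    def __init__(self, hold_seconds: float = 5.0, alarm_seconds: float = 10.0):
        self.state = NO_MOVEMENT              # groups default to no-movement
        self.hold = hold_seconds              # validation period (1-10 s)
        self.alarm_after = alarm_seconds      # assumed alarm timeout
        self._candidate = None
        self._candidate_since = 0.0
        self._no_movement_since = time.monotonic()

    def update(self, audio: str, camera: str, radar: str) -> bool:
        """Fuse the three per-source states; return True on an alarm."""
        sources = (audio, camera, radar)
        if MOVEMENT in sources:       # any source indicating movement wins
            proposed = MOVEMENT
        elif BREATHING in sources:    # otherwise any breathing wins
            proposed = BREATHING
        else:
            proposed = NO_MOVEMENT    # all sources agree on no-movement

        now = time.monotonic()
        if proposed == self.state:
            self._candidate = None
        else:
            # Validation: a new state must persist for the hold period
            # before it replaces the existing state.
            if proposed != self._candidate:
                self._candidate, self._candidate_since = proposed, now
            elif now - self._candidate_since >= self.hold:
                self.state = proposed
                self._candidate = None
                if proposed == NO_MOVEMENT:
                    self._no_movement_since = now

        return (self.state == NO_MOVEMENT
                and now - self._no_movement_since >= self.alarm_after)
```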

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Physiology (AREA)
  • Business, Economics & Management (AREA)
  • Cardiology (AREA)
  • Pulmonology (AREA)
  • General Physics & Mathematics (AREA)
  • Emergency Management (AREA)
  • Gerontology & Geriatric Medicine (AREA)
  • Psychology (AREA)
  • General Business, Economics & Management (AREA)
  • Primary Health Care (AREA)
  • Psychiatry (AREA)
  • Radiology & Medical Imaging (AREA)
  • Social Psychology (AREA)
  • Epidemiology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Emergency Alarm Devices (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Alarm Systems (AREA)

Abstract

A system (10) and method for monitoring a person (14) within a defined area by detecting breathing, or the lack thereof. Breathing is detected using radar signals (24), camera signals (22) and/or microphone signals (26). The radar signals (24), camera signals (22) and/or microphone signals (26) are analyzed to determine if the subject person (14) is moving, and if not moving, if the person (14) is breathing or not-breathing. An alarm is generated should the reflected radar signals (24), the camera signals (22) and the microphone signals (26) all simultaneously indicate no movement and no breathing of the subject person (14) in the defined area for a selected period of time.

Description

SYSTEM AND METHOD FOR MONITORING LIFE SIGNS OF A PERSON
Technical Field Of The Invention
In general, the present invention relates to monitoring equipment that can monitor the life signs of a person as that person sleeps. More
particularly, the present invention relates to monitoring equipment that monitors life signs using low-energy radar, cameras, and/or microphones.
Background Art
There are many monitoring systems that are designed to monitor various life signs. For example, in an intensive care unit of a hospital, patients are attached to heart rate monitors, blood pressure monitors, blood oxygen monitors and the like. Should any of these monitors detect a condition outside an acceptable threshold, an alarm is sounded.
In a hospital setting, the various sensors are typically wired sensors that are attached directly to the body. This makes the sensors very accurate and resistant to interfering signal noise from outside sources.
Monitoring devices are also used in a variety of ways outside of a hospital. For instance, parents often use baby monitors to monitor their children when they sleep. Such monitoring typically occurs from the time the child is an infant until the child is old enough to not need a crib. The monitoring is performed for many reasons. Infants are susceptible to Sudden Infant Death Syndrome (SIDS). As infants grow and begin to move, they also face dangers from accidental strangulation and choking. Once the child is old enough to stand and climb, the child faces dangers from falling and entrapment. Monitoring is also used on adults, such as those who have sleep apnea or those who have a high risk of mortality due to disease or age.
When monitoring is used on a child or a mobile adult, wired sensors are rarely used. The wires of sensors create strangulation hazards and tripping hazards. As such, the potential harm can outweigh the potential good. Accordingly, most monitoring equipment sold for in-home use relies on wireless monitoring. The most common wireless monitoring system is a camera and microphone system, commonly referred to as a baby monitor. These devices are placed in the room and are directed toward a crib or bed. The baby monitor transmits images of the crib or bed, along with any detected audio signals, to a remote receiver. A person viewing the display of the receiver can view any movement in the crib or bed and can hear if the occupant of the crib or bed is crying or making any sounds of distress.
The disadvantages of a traditional baby monitor system are obvious. The baby monitor only detects movement and sound. If an infant has a SIDS event, there may be no movement or sound. Likewise, if an adult passes away while sleeping, there may be no movement or sound.
Recognizing the disadvantages, improved monitoring devices have been developed for in-home use. Some of these monitoring devices use low energy radar to monitor a sleeping person. The radar is sensitive enough to detect the slow expansion and contraction of the chest as a person inhales and exhales. Such prior art monitoring systems are exemplified by Chinese Patent Disclosure No. CN104133199A and Chinese Patent Disclosure No. CN103110422A.
Radar-based monitoring systems also have some disadvantages. Even if a directional antenna is used, radar energy propagates from the antenna in all directions. This creates an omni-directional area of coverage. As a consequence, the radar system can detect movement from objects, pets, and people well away from the crib or bed being monitored.
Movement from non-target objects, pets and people can be wrongly interpreted as movement within the crib or bed by the monitoring system. Accordingly, if a person stops breathing, the falsely detected movements can delay or prevent the danger from being detected.
In the prior art, monitoring systems have been developed that are hybrids of traditional camera baby monitors and low-energy radar monitors. Such prior art systems monitor a person in a crib or bed with both a camera and a radar transceiver. However, the outputs of the camera system and the radar system are not cross-correlated. Rather, if the radar system detects an alarm condition, the camera system is merely there to see if the alarm is a false alarm. If the person being monitored stops breathing and the radar fails to detect the condition due to false returns, the camera system will not detect the danger. Such prior art hybrid systems are exemplified by U.S. Patent Application Publication No. 2016/0313442 to Ho, and Chinese Patent Disclosure No. CN102835958.
The signals captured by a radar system and/or a camera system that contain relevant data can easily be washed out by noise and signals that contain irrelevant data. For instance, the chest movements of a sleeping infant wrapped in a tight blanket are very small. Detecting such movements using low energy radar and/or a camera is difficult. The movements caused by breathing are buried in signals caused by body movements, signals caused by movements in the surrounding environment, and signal noise. Accordingly, signal processing algorithms must be used to separate the useful signals from the noise and the irrelevant signals. The signal processing algorithms used in the prior art tend to produce a high number of false alarms in the hope of never missing a real alarm. However, the large number of false alarms makes prior art systems unpopular and causes people to stop using the systems after experiencing a string of false alarms. Thus, many prior art monitoring systems are no better than having no monitoring system at all.
A need therefore exists for a wireless monitoring system that can monitor a person in a crib or bed by detecting even the smallest movement caused by breathing. A need also exists for such a system that can separate useful signals from noise and irrelevant signals to produce a more reliable system with fewer false alarms. A need also exists for such a system that can analyze signals in real time without having to perform signal analysis at a remote location. These needs are met by the present invention as described and claimed below.
DISCLOSURE OF THE INVENTION
The present invention is a system and method for wirelessly monitoring a person. The system can detect breathing, or the lack thereof, in a subject person, such as an infant in a crib or an adult with sleep apnea.
The system and method can detect breathing using radar signals, camera signals and/or microphone signals. Using a radar transceiver, radar signals are directed toward an area in which the subject person is sleeping. The radar signals reflect from the subject person, therein creating reflected radar signals. Contained within the reflected radar signals is data that references the rhythmic movements of breathing and/or the beating heart.
Likewise, a camera is directed toward the area in which the subject person is sleeping. The camera detects movements of the subject person. Contained within the detected movements are movements caused by rhythmic breathing and/or the beating heart.
At least one microphone also monitors the area of the subject person. The microphone detects sounds made by the subject person. Contained within the detected sounds are the sounds caused by rhythmic breathing.
The reflected radar signals, the signals from the camera, and the signals from the microphone are fused to determine if the subject person is moving, and if not moving, if the person is breathing or not-breathing. An alarm is generated should the reflected radar signals, the camera signals and the sound signals all simultaneously indicate no movement and no breathing of the subject person.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the present invention, reference is made to the following description of an exemplary embodiment thereof, considered in conjunction with the accompanying drawings, in which:
FIG. 1 shows an exemplary embodiment of the present invention monitoring system;
FIG. 2 shows a schematic of the monitoring unit used by the present invention monitoring system;
FIG. 3 shows a logic diagram that illustrates the operations performed within the monitoring unit;
FIG. 4 is a block diagram that shows operational details of the microphone feature extraction process referenced in Fig. 3;
FIG. 5 shows a screen and indicates regions that can be segmented by a user;
FIG. 6 is a block diagram that shows operational details of the camera feature extraction process referenced in Fig. 3;
FIG. 7 is a block diagram that shows operational details of the radar signal filtering process referenced in Fig. 3;
FIG. 8 is a block diagram that shows operational details of the radar feature extraction process referenced in Fig. 3; and
FIG. 9 shows an exemplary screen produced by a computing device that is used to interface with the monitoring unit of Fig. 2 and a user.
DETAILED DESCRIPTION FOR CARRYING OUT THE INVENTION
Although the present invention monitoring system can be used in many institutional settings, such as hospitals and nursing homes, the system is particularly well suited for in-home use. Accordingly, an exemplary embodiment of the monitoring system is selected for the purposes of description and illustration that shows the present invention being used in a home to monitor a person in a bed or crib. The illustrated embodiment, however, is merely exemplary and should not be considered a limitation when interpreting the scope of the appended claims.
Referring to Fig. 1, an overview of the monitoring system 10 is shown. The monitoring system 10 includes a monitoring unit 12. The monitoring unit 12 is placed in a room and is directed toward a subject person 14, such as a child in a crib or an adult in bed. In the preferred embodiment, the monitoring unit 12 can actively emit light 16, radar signals 18 and audio signals 20. The light 16 emitted is preferably in the infrared spectrum so as not to be visible to the subject person 14. The emitted radar signals 18 are low-energy signals that are harmless to the subject person 14 and any other sensitive electronic equipment, such as a pacemaker. The emitted audio signals 20 are audible to the subject person 14 being monitored. As will later be explained, the audio signals 20 can be music, an alarm or the transmitted voice of another person.
The monitoring unit 12 receives light 22, reflected radar signals 24 and ambient sounds 26.
The light 22 received includes existing ambient light and light returned from any illumination projected by the monitoring unit 12. The reflected radar signals 24 are the returns from the radar emitted by the monitoring unit 12. The ambient sounds 26 are any audible sounds detected by the monitoring unit 12. The light 22, reflected radar signals 24 and ambient sounds 26 received by the monitoring unit 12 are all internally processed. The monitoring unit 12 uses circuitry and processing software to specifically extract features that are associated with the breathing of the subject person 14. The monitoring unit 12 processes the light 22, reflected radar signals 24, and ambient sounds 26 in real time. The processed information can be accessed by a remote computing device 28, such as a smart phone, running the application software 30 needed to display the processed signal information. Depending upon the location of the remote computing device 28, the processed signals can be shared directly with the remote computing device 28 or can be forwarded to the remote computing device 28 through a data network 32, such as a cellular network or the Internet.
An observer 34, such as a parent or nurse, can view the remote computing device 28 and receive the processed information. As will later be explained, the processed information is formatted in a user-friendly manner. Likewise, if an alarm condition is detected by the monitoring unit 12, the observer 34 is instantly informed. The observer 34 can communicate with the monitoring unit 12 and cause the monitoring unit 12 to broadcast music or words that can be heard by the subject person 14 being monitored. In such a manner, a subject person 14 who is agitated can be pacified and a subject person 14 in distress can be comforted until help arrives on scene.
Referring to Fig. 2, the primary components of the monitoring unit 12 are shown and explained. The monitoring unit 12 contains a camera 36 for imaging the sleeping area in a crib, bed, bassinet or the like. The camera 36 preferably has the ability to image the visible light spectrum and at least some of the infrared spectrum. In this manner, the camera 36 can image in daylight and in the dark.
The camera 36 has an objective lens 38. The objective lens 38 is directed in a particular direction that is shown by line 40. The objective lens 38 of the camera 36 is directed toward the subject person 14 being monitored. The light 22 captured by the camera 36 is converted into camera data 42 that is processed in a manner later described.
One or more LEDs 44 may be provided for illuminating the subject person 14 being monitored. The LEDs 44 are preferably IR LEDs that produce light that can be detected by the camera 36 but not by the eyes of the subject person 14 being monitored. It will be understood that the LEDs 44 are an economical source of IR light. However, other sources of IR light, such as low powered IR lasers or filtered polychromatic lights, could also be used in the design. Regardless of the source of the IR light, the intensity of the light is sufficient to illuminate the area of the subject person 14 being monitored, therein enabling the camera 36 to image that area.
A radar transceiver 46 is provided. Although different radars can be used, the radar transceiver 46 is preferably a low powered pulse Doppler radar. In this manner, the radar transceiver 46 can detect both velocity and range. The radar transceiver 46 is configured to have its greatest range in a particular direction 48. The direction 48 of greatest range is parallel to the directional line 40 of the camera 36. As such, the radar transceiver 46 covers the same area as is being imaged by the camera 36. This causes the radar transceiver 46 to be more sensitive in the direction of the subject area. The radar transceiver 46 emits radar signals 18 covering the subject area and detects reflected radar signals 24 that return. The reflected radar signals 24 are detected by the radar transceiver 46 and are converted into radar data 50. The radar data 50 is processed in a manner that is later described.
One or more microphones 52 are provided as part of the monitoring unit 12. Preferably, at least two microphones 52 are used. The microphones 52 are oriented toward the subject area targeted by the camera 36 and radar transceiver 46. In this manner, any ambient sounds 26 originating within the subject area will be detected by the microphones 52. The microphones 52 produce audio data 54. The audio data 54 is processed in a manner that is later described.
A computing device 56 receives the camera data 42, the radar data 50 and the audio data 54. The computing device 56 contains a clock 58 that enables the data to be indexed by time. The computing device 56 can have a high capacity memory 60 or access to cloud memory 33 through the data network 32 so that large caches of time indexed data can be stored for later review.
The computing device 56 can exchange data with outside sources using a Bluetooth® transceiver 62 and/or a WiFi transceiver 64. Other data transmission systems can also be used, such as a cellular network transmission and/or a hardwire connection. The computing device 56 also controls one or more speakers 66. The speakers 66 can broadcast audio signals 20 into the environment of the monitoring unit 12. As will later be explained, the broadcast audio signals 20 can be soothing music that can lull a child to sleep or a piercing alarm that can bring help.
The computing device 56 is also connected to a user interface 68. The user interface 68 contains an on/off switch 70 for the monitoring unit 12 and may contain status lights and sensitivity controls that can be manually adjusted by a user.
The computing device 56 is programmable and runs specialized operational software 72. The operational software 72 is capable of being periodically updated with programming updates received through the Bluetooth® transceiver 62, the WiFi transceiver 64, or other data transmission system.
Referring to Fig. 3 in conjunction with Fig. 2, it will be understood that the computer system 56 receives the audio data 54 from the microphones 52, the camera data 42 from the camera 36, and the radar data 50 from the radar transceiver 46. This data is analyzed by the computing system 56 using the operational software 72. The purpose of the analysis is to first determine if the subject person 14 is within the area being monitored. If the subject person 14 is in the monitored area, it will then extract features from within the audio data 54, the camera data 42 and the radar data 50 and determine if they are attributable to the breathing of the subject person 14. These features are then monitored for change. If the features indicate that the subject person 14 has stopped breathing, then an alarm is generated.
Referring to Fig. 3 in conjunction with Fig. 2, it can be seen that the computing system 56 processes the audio data 54, camera data 42 and radar data 50. All three sets of data are analyzed to detect signal features that are indicative of breathing.
The processing of the audio data 54 from the microphones 52 is first described. Both the sounds of crying and the sounds of breathing can be detected in the audio data 54. Detecting the sounds of crying can be accomplished using known sound processing techniques, such as those described in U.S. Patent No. 9,020,622 to Shoham. What is far more intricate is effectively isolating the features in the sound audio data 54 that corresponds to the delicate sounds of breathing. To isolate the sounds of breathing, the audio data 54 from the microphones 52 is initially filtered. See Block 80. The
filtering may include directional filtering, this may eliminate some sound signals that do not originate in the subject area. The directional filtering is optional. In a required filtering step, the ambient sound signals 26 are filtered in an attempt to isolate the sounds of breathing from other environmental noises . The required filtering includes subjecting the audio data 54 to a low pass filter 81. This attenuates signals with
frequencies that are too high to represent breathing. After the audio data 54 is initially filtered, it is further processed to extract desired features, which in this case are the sounds of breathing. See Block 82.
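Purely by way of illustration, the low pass filtering of Block 81 might be sketched in Python as follows. The Butterworth design, the 2 Hz cutoff and the filter order are assumptions; the disclosure specifies none of them.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def lowpass_for_breathing(audio, fs, cutoff_hz=2.0, order=4):
        """Attenuate frequencies too high to represent breathing (Block 81).

        cutoff_hz and order are illustrative values only.
        """
        sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
        return sosfiltfilt(sos, audio)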
The details of the feature extraction process are shown in Fig. 4. Referring to Fig. 4 in
conjunction with Fig. 3 and Fig. 2, it can be seen that a filtered audio signal 74 is obtained after the raw audio data 54 is filtered in the filtering process of Block 80. The goal of the feature
extraction process is to extract a breathing
waveform from the filtered audio signal 74. To extract the breathing waveform, the filtered audio signal 74 is resampled with a reduction factor. See Block 84. A preferred reduction factor for the resampling is 1/1000; however, other reduction factors can be used. The resampled audio data is then compressed using an arctan function. See Block 86. The compressed audio data is then subjected to a fast Fourier transform to find the occurrences of max-peak signal events. See Block 88. These max-peak events correspond to the breathing waveform of interest. The resulting breathing frequency waveform 76 is later used in a group classification process. See Block 90.
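A minimal sketch of this Block 84-88 pipeline, written in Python with NumPy and SciPy, might read as follows. The 1/1000 reduction factor is taken from the text; the staged decimation, the arctan scaling and the exclusion of the DC bin are illustrative assumptions.

    import numpy as np
    from scipy.signal import decimate

    def extract_breathing_waveform(filtered_audio, fs, reduction=1000):
        """Blocks 84-88: resample, arctan-compress, FFT for the max peak."""
        # Resample with a 1/1000 reduction factor (Block 84); decimate is
        # applied in stages of 10 to keep the anti-alias filter stable.
        x = np.asarray(filtered_audio, dtype=float)
        for _ in range(3):              # 10 * 10 * 10 = 1000
            x = decimate(x, 10)
        fs_out = fs / reduction

        # Compress the dynamic range with an arctan function (Block 86).
        x = np.arctan(x / (np.std(x) + 1e-12))

        # Fast Fourier transform; the max-peak bin is taken as the
        # breathing frequency of interest (Block 88).
        spectrum = np.abs(np.fft.rfft(x - x.mean()))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs_out)
        peak_hz = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
        return peak_hz, x               # rate estimate and waveform 76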
Returning to Fig. 3 and Fig. 2, it is shown that the camera data 42 is also processed by the computing system 56. The camera data 42 is initially subjected to area segmentation. See Block 92. Upon the setup of the monitoring unit 12, the person setting the monitoring unit 12 in place directs the camera 36 toward a crib or bed in the subject area. Referring to Fig. 5 with Fig. 3, it can be seen that the camera 36 has a field of view 93 that is imaged. A person looking at the image of the camera 36 can also manage the image within the field of view 93. The subject area 91 is selected as the area in which the subject person 14 is most likely located. The subject area 91 is typically the area of the crib mattress or bed mattress. The area surrounding the subject area 91 is then defined as the visitor area 95. This segmentation process is used to distinguish between movements that may be
attributable to the subject person 14 from all other detected movements. Accordingly, when looking for the movements caused by breathing, the computing system 56 will only consider data that originates from within the selected subject area 91.
Referring to Fig. 6 in conjunction with Fig. 2 and Fig. 3, it can be seen that after segmentation (Block 92), the camera data 42 is further processed to extract desired features, which in this case are the movements associated with breathing. See Block 94. After field segmentation, the camera data 42 contains various frame images. The frames undergo a color space transform that changes the images from color to grayscale. See Block 97. This reduces the amount of processing needed to analyze the images, thereby improving the response time of the system. The grayscale image frames are then stored in a circular buffer 99. The grayscale image frames are analyzed by the computing system 56 to determine movement. It will be understood that in order to analyze
movement, image frames from different time points are compared. The image capture rate is dependent on the framerate of the camera 36. Many cameras that are compatible with the system have a framerate that would require a capture rate of one frame for every ten to twenty-five frames. See Block 96.
As is indicated by Block 98, subsequently captured frames are compared, where the difference between image frames is the sum of the first frame minus the subsequent frame at the delay. Any differences in the image frames are indicative of movement that has occurred during the time of the delayed capture rate. The sum of the differences over time (buffer length = N) is subjected to frequency analysis to determine the frequency of respiration as a feature. See Block 100.
By comparing image frames over time, rhythmic patterns of movement are detected. A fast Fourier transform is used to isolate the max-peak signal events that represent rhythmic movements. See Block 102. These rhythmic patterns of movement are distinguishable from random periods of body movement. The rhythmic patterns correlate to movements caused by breathing and/or the beating heart. The result is a camera derived breathing waveform or heartbeat waveform that is later used in a group classification process. See Block 104.
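A sketch of this camera pipeline (Blocks 96-102) appears below, in Python with OpenCV and NumPy. The grayscale conversion mirrors Block 97; the capture stride of one frame in ten sits within the stated range of ten to twenty-five; the region-of-interest cropping stands in for the subject area 91. OpenCV itself and the exact parameter values are assumptions, as the disclosure names no vision toolkit.

    import numpy as np
    import cv2  # assumed library

    def camera_breathing_rate(frames, fps, stride=10, roi=None):
        """Estimate a motion frequency from frame differences.

        frames: iterable of BGR images; stride: capture one frame per
        `stride` camera frames; roi: subject area 91 as (y0, y1, x0, x1).
        """
        captured = []
        for i, frame in enumerate(frames):
            if i % stride:
                continue
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
            if roi is not None:
                y0, y1, x0, x1 = roi
                gray = gray[y0:y1, x0:x1]
            captured.append(gray)

        # Sum of differences between captured frames (Block 98).
        diffs = np.array([np.abs(a - b).sum()
                          for a, b in zip(captured, captured[1:])])
        diffs -= diffs.mean()

        # Frequency analysis of the motion signal (Blocks 100-102); the
        # max-peak frequency represents the rhythmic movement.
        spectrum = np.abs(np.fft.rfft(diffs))
        freqs = np.fft.rfftfreq(len(diffs), d=stride / fps)
        return freqs[np.argmax(spectrum[1:]) + 1]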
The computing system 56 also analyzes the reflected radar signals 24 in an attempt to detect movements associated with breathing and/or the beating heart. Referring to Fig. 7 in conjunction with Fig. 3 and Fig. 2, it will be understood that the reflected radar signals 24 are initially filtered, as indicated by Block 106. To filter the reflected radar signals 24, the signals are fed into a circular buffer 108. The incoming reflected radar signals 24 are phase bounded and need to undergo phase unwrapping and phase bounding. See Block 110 and Block 112. The unwrapped signals are passed through an exponential high pass filter so that the waveform is zero centered. See Block 114. The unwrapped, high-pass filtered data is then subjected to a moving average low pass filter to smooth the data. See Block 116. This creates the filtered data 118. The targeted features are extracted from the filtered data 118. The targeted features are the returns that correspond to movement caused by breathing and/or the beating heart. See Block 120 in Fig. 3.
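The Block 108-116 filtering chain might be sketched in Python as below. Only the order of operations comes from the text; the one-pole high-pass coefficient and the moving-average length are assumed values.

    import numpy as np

    def filter_radar_phase(iq_returns, alpha=0.995, ma_len=8):
        """Blocks 110-116: unwrap the phase, zero-center it, then smooth."""
        # The returns are phase bounded to (-pi, pi]; unwrap them.
        phase = np.unwrap(np.angle(np.asarray(iq_returns)))

        # Exponential high pass: subtract a one-pole running baseline so
        # the waveform is zero centered (Block 114).
        baseline = np.empty_like(phase)
        acc = phase[0]
        for i, p in enumerate(phase):
            acc = alpha * acc + (1.0 - alpha) * p
            baseline[i] = acc
        centered = phase - baseline

        # Moving average low pass filter to smooth the data (Block 116).
        kernel = np.ones(ma_len) / ma_len
        return np.convolve(centered, kernel, mode="same")  # filtered data 118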
Referring to Fig. 8 in conjunction with Fig. 3 and Fig. 2, it can be seen that the filtered radar return data 118 is arranged in bin buffers 122. Once a bin buffer 122 is full, the root mean square is calculated. See Block 124. With the root mean square of each bin buffer known, a signal-to-noise ratio is calculated. See Block 126. A true signal-to-noise ratio cannot be directly calculated without a priori knowledge of the signal. As such, certain assumptions must be made to make the calculation possible. The breathing rate of the person being monitored is assumed to be between 15 breaths per minute and 60 breaths per minute. This translates to a breathing rate of between 0.25 Hz and 1 Hz. If the heartbeat is being detected, a slightly higher rate is utilized. Using the root mean square data, a fast Fourier transform is implemented to change the waveform from a time domain to a frequency domain. See Block 128. This creates a transformed waveform 130. Additionally, a window is applied to each time frame of the bin buffer 122 to prevent wrapping boundary effects. Using the transformed waveform 130, the maximum spectral magnitude and its corresponding frequency can be calculated.
The transformed waveform 130 contains both useful signals and noise. These aspects must be separated. See Block 132. To distinguish signals from noise in the transformed waveform 130, the fundamental frequency of the subject's breathing rate is determined by calculating the maximum component of the fast Fourier transform in the assumed breathing rate frequency range of 0.25 Hz to 1.0 Hz. From the maximum component, the waveform is walked left and right until it reaches thirty percent (30%) of its peak value. The width at this selected value is defined as the bandwidth of the signal. The remainder of the waveform is designated as noise. If the peak value is found to be near the waveform extremes, i.e. frequency equal to zero or equal to FFT length/2, then the peak is considered invalid and a subsequent bin buffer is analyzed. Likewise, if another high value is found within the fast Fourier transform range that is larger than the originally calculated peak value, then the peak value is considered invalid and another bin buffer is analyzed.
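The peak-validation logic of Blocks 128-132 might be sketched as follows. The 0.25-1.0 Hz band and the thirty percent walk-down come from the text; the Hann window, the exclusion of the DC bin and the power-based signal/noise split are assumptions.

    import numpy as np

    def validate_breathing_peak(bin_buffer, fs, band=(0.25, 1.0), frac=0.30):
        """Return (peak_hz, signal_power, noise_power), or None if invalid."""
        # Window each time frame to prevent wrapping boundary effects.
        windowed = bin_buffer * np.hanning(len(bin_buffer))
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(len(bin_buffer), d=1.0 / fs)

        # Maximum FFT component within the assumed breathing band.
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        k = np.flatnonzero(in_band)[np.argmax(spectrum[in_band])]

        # Invalid if the peak sits at the waveform extremes
        # (frequency equal to zero or to FFT length / 2).
        if k <= 1 or k >= len(spectrum) - 2:
            return None
        # Invalid if a larger value exists elsewhere in the FFT range
        # (DC excluded here, since the data was zero centered).
        if spectrum[1:].max() > spectrum[k]:
            return None

        # Walk left and right until 30% of the peak; that span is the
        # signal bandwidth, and the remainder is designated as noise.
        lo = k
        while lo > 1 and spectrum[lo - 1] > frac * spectrum[k]:
            lo -= 1
        hi = k
        while hi < len(spectrum) - 1 and spectrum[hi + 1] > frac * spectrum[k]:
            hi += 1
        signal = np.sum(spectrum[lo:hi + 1] ** 2)
        noise = np.sum(spectrum[1:] ** 2) - signal
        return freqs[k], signal, noise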
The data from the various buffer bin analyses is then correlated in a bin correlation step and then aggregated in a bin aggregation step. See Block 136 and Block 138 in Fig. 3. During bin correlation, the waveform corresponding to each processed buffer bin is digitized into "1's" and "0's". This is accomplished by setting a threshold and assigning "1's" to values above the threshold and "0's" to values under the threshold. This creates groups. Subsequent groups are then compared using an XOR comparator. The results are saved to a correlation matrix. Changes in the field are quickly identified due to the simple comparison and corresponding rapid processing time. The area of correlated data identified in the radar field is used to identify the location of the monitored person in the radar field. Once the location of the person is identified, the analysis of the radar return data 50 can be limited to the returns from the identified area.
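A minimal sketch of the bin correlation step follows. The digitization to ones and zeros and the XOR comparison come from the text; the zero threshold and the symmetric correlation-matrix layout are assumptions.

    import numpy as np

    def correlate_bins(bin_waveforms, threshold=0.0):
        """Digitize each range bin's waveform and XOR-compare the groups.

        bin_waveforms: 2-D array, one row per range bin. Low XOR counts
        mark bins whose motion agrees, locating the monitored person.
        """
        bits = (np.asarray(bin_waveforms) > threshold).astype(np.uint8)
        n = len(bits)
        corr = np.zeros((n, n), dtype=int)
        for i in range(n):
            for j in range(n):
                # XOR is 0 where the digitized groups agree.
                corr[i, j] = np.count_nonzero(bits[i] ^ bits[j])
        return corr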
During bin aggregation, the bins identified as containing the data from the person being monitored are grouped. In each group, the range bin having the maximum signal-to-noise ratio is identified. In each group, each range bin is analyzed to sum the signal-to-noise ratio if the signal-to-noise ratio exceeds a percentage of the group's maximum signal-to-noise ratio. Subsequent groups are analyzed to determine how well they match the first group. The group with the highest match score is selected as the next group. At this point in the analysis, the data attributable to the person being monitored is isolated and the signal-to-noise ratio is known. Using these variables, the radar derived breathing waveform 140 can be isolated that most probably represents the rhythmic breathing of the person being monitored. See Block 139.
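The bin aggregation might be sketched as below; the one-half SNR fraction and the overlap-based match score are assumptions, as the disclosure specifies neither.

    import numpy as np

    def aggregate_group(snr_per_bin, frac=0.5):
        """Block 138: keep bins whose SNR exceeds a fraction of the
        group's maximum, and sum their signal-to-noise ratios."""
        snr = np.asarray(snr_per_bin, dtype=float)
        selected = np.flatnonzero(snr >= frac * snr.max())
        return selected, snr[selected].sum()

    def match_score(prev_bins, candidate_bins):
        """Overlap score used to pick the next group (higher is better)."""
        a, b = set(prev_bins), set(candidate_bins)
        return len(a & b) / max(len(a | b), 1)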
From the prior analysis, the microphone derived breathing waveform 76, the camera derived breathing waveform 104 and the radar derived breathing waveform 140 are known. The waveforms 76, 104, 140 are then classified using a group classification process. See Block 90. The three classifications used in the present invention system are breathing, no-movement, and movement. All groups are defaulted to the no-movement state. If any waveform from any source indicates breathing, then the net result of the whole group is set to breathing. Likewise, if any waveform from any source indicates movement of the person being monitored, then the net result of the whole group is set to movement. However, if all sources indicate a state of no-movement for a selected period of time, an alarm condition occurs. The sensitivity of the system can be controlled by controlling waveform thresholds and applying
probability functions to the data for each class.
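The group classification rule reduces to a small function, sketched below. Giving breathing priority over movement when the sources disagree is an assumption; the text does not state which class prevails.

    def classify_group(audio_state, camera_state, radar_state):
        """Block 90: combine the per-sensor classes into one group state.

        Each argument is one of "breathing", "movement", "no-movement".
        The group defaults to "no-movement"; any single source reporting
        breathing or movement sets the whole group accordingly.
        """
        states = {audio_state, camera_state, radar_state}
        if "breathing" in states:
            return "breathing"
        if "movement" in states:
            return "movement"
        return "no-movement"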
As is indicated by Block 142, once a class of waveform is determined, it is validated. The validation is used to reduce the occurrences of false alarms. The default state is the state of no-movement, which is the alarm state. The existing state can only be changed if the new state persists for a selected period of time. The period of time is adjustable and is preferably between 1 and 10 seconds. As such, if the system detects breathing or movement for the set period of time, the default no-movement state is replaced with either the breathing state or the movement state. The no-movement state will not be reinstated until breathing or movement is not detected for the duration of the threshold time period. As is indicated by Block 144, if the no-movement state is recognized for the threshold time period, then an alarm is sent.
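The validation of Blocks 142-144 amounts to a debounce on the classified state, sketched below; the five-second default hold time is an assumed value within the stated one-to-ten-second range.

    import time

    class StateValidator:
        """Only change state after the new state persists for hold_s seconds."""

        def __init__(self, hold_s=5.0):
            self.hold_s = hold_s
            self.current = "no-movement"    # default (alarm) state
            self._pending = None
            self._since = 0.0

        def update(self, observed, now=None):
            now = time.monotonic() if now is None else now
            if observed == self.current:
                self._pending = None        # nothing to debounce
            elif observed != self._pending:
                self._pending, self._since = observed, now
            elif now - self._since >= self.hold_s:
                self.current, self._pending = observed, None
            return self.current

        def alarm_due(self):
            # Block 144: an alarm is sent once no-movement has held for
            # the threshold time period.
            return self.current == "no-movement"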
Returning to Fig. 1, it will be understood that when an alarm is sent, the alarm is first sent to the remote computing device 28 of the observer 34. The remote computing device 28 will provide the observer 34 with both an audio and visual alarm. Using the remote computing device 28, the observer 34 can live-stream the camera data 42 and the audio data 54. From this, the user may be able to quickly ascertain that the alarm is a false alarm. For instance, the observer 34 may be able to see that the subject person 14 being monitored simply woke up and left the monitored area of the crib or bed.
If the alarm condition does appear to be real, the observer 34 has certain options. First, the observer 34 can cause the monitoring unit 12 to sound a loud audible alarm. This may be able to startle a sleeping person into breathing.
Additionally, the observer 34 can stream live audio to the monitoring unit 12. This will enable the observer 34 to speak to the person being monitored and hopefully can be used to rouse the person back to conscious breathing.
Referring to Fig. 9 in conjunction with Fig. 1 and Fig. 2, an exemplary screen 150 is shown that exemplifies what a user can see on his/her remote computing device 28. The screen 150 shows a live feed of the camera data 42. Also shown on the screen 150 are various icons 152. By pressing the various icons 152, the observer 34 can elect to hear the audio feed from the monitoring unit 12, send an audio feed to the monitoring unit 12, and/or sound an alarm. Optional icons, such as autodialing of 911 and the like, can also be included.
In addition to the live camera feed, the observer 34 can see a reproduction of a breathing waveform 154. The breathing waveform 154 can be the microphone derived breathing waveform 76, the camera derived breathing waveform 104, the radar derived breathing waveform 140, or a composite of any combination. The status 156 of the current state is shown, that is, the state of movement, breathing or no-movement. The current state is also shown along with a time indication 158 that indicates the duration of that state. For example, in Fig. 9, the status 156 indicates a moving state and shows that the state has remained for the past 15 minutes. This may be an indication that the subject person 14 is awake.
Using the application software 30 of the remote computing device 28 and by navigating through the proper icons 152, the observer 34 can select to transmit his/her voice to the monitoring unit 12 in an attempt to quiet or assure the subject person 14. Alternatively, the observer 34 may elect to transmit music or a recorded story to the monitoring unit 12 to help the subject person 14 fall back to sleep.
It will be understood that the embodiment of the present invention that is illustrated and described is merely exemplary and that a person skilled in the art can make many variations to that embodiment. All such embodiments are intended to be included within the scope of the present invention as defined by the appended claims.

Claims

What is claimed is:
1. A method of monitoring a person within a defined area, said method comprising the steps of:
directing radar signals toward said defined area, wherein said radar signals reflect from said person within said defined area and create reflected radar signals;
directing a camera toward said defined area, wherein said camera detects movement of said person in said defined area and produces camera signals indicative of said movement;
monitoring said defined area with at least one microphone, wherein sounds made by said person in said defined area are detected by said at least one microphone and said at least one microphone produces audio signals indicative of said sound;
analyzing said reflected radar signals, said camera signals and said audio signals to determine if said person is moving within said defined area;
generating an alarm should said reflected radar signals, said camera signals and said audio signals all simultaneously indicate no movement of said person in said defined area for a selected period of time.
2. The method according to Claim 1, wherein said movement is body movement selected from a group consisting of breathing and heartbeats.
3. The method according to Claim 2, further including analyzing said reflected radar signals to extract signal data indicative of said body movement.
4. The method according to Claim 2, further including analyzing said camera signals to extract signal data indicative of said body movement.
5. The method according to Claim 2, further including analyzing said audio signals to extract signal data indicative of said body movement.
6. The method according to Claim 1, wherein said radar signals are generated by a pulse Doppler radar transceiver.
7. The method according to Claim 6, wherein said Doppler radar transceiver, said camera and said at least one microphone are contained in a single monitoring unit that is directed toward said defined area.
8. The method according to Claim 7, wherein said monitoring unit contains a processing unit that analyzes said reflected radar signals, said camera signals, and said audio signals.
9. The method according to Claim 8, wherein generating an alarm further includes the step of transmitting an alarm signal from said monitoring unit to a remote computing device.
10. The method according to Claim 9, wherein said monitoring unit transmits said camera signals to said remote computing device.
11. A method of monitoring whether movements of a person that are indicative of breathing have stopped, said method comprising the steps of:
providing a radar transceiver that directs radar signals toward said person, wherein said radar signals detect said movements of said person that may include movements indicative of breathing;
providing an imaging system that captures images of said person, wherein said imaging system optically detects said movements of said person that may include said movements indicative of breathing;
analyzing said radar signals and said images to extract data corresponding to said movements indicative of breathing;
generating an alarm should said movements indicative of breathing not be found for a selected period of time.
12. The method according to Claim 11, further including providing at least one microphone that can detect sounds indicative of breathing and produce sound signals therefor.
13. The method according to Claim 12, wherein analyzing said radar signals and said images further includes analyzing said sound signals to extract said sounds indicative of breathing.
14. The method according to Claim 13, wherein generating an alarm only occurs should said
movements indicative of breathing and said sounds indicative of breathing not be found for said selected period of time.
15. The method according to Claim 12, wherein said radar transceiver, said imaging system and said at least one microphone are contained in a single monitoring unit that is directed toward said person.
16. The method according to Claim 15, wherein said monitoring unit contains a processing unit that analyzes said radar signals, said images, and said sound signals.
17. The method according to Claim 16, wherein generating an alarm further includes the step of transmitting an alarm signal from said monitoring unit to a remote computing device.
18. The method according to Claim 16, wherein said monitoring unit transmits said camera signals to said remote computing device.
19. A method of monitoring whether movements of a person that are indicative of breathing have stopped, said method comprising the steps of:
providing a radar transceiver that directs radar signals toward said person, wherein said radar signals detect said movements of said person that may include movements indicative of breathing;
providing a microphone for monitoring said person, wherein said microphone detects sound signals made by said person that may include sounds indicative of breathing;
analyzing said radar signals and said sound signals to extract data corresponding to said movements indicative of breathing and said sounds indicative of breathing;
generating an alarm should said movements indicative of breathing and said sounds indicative of breathing not be detected for a selected period of time.
20. The method according to Claim 19, further including providing a camera that is directed toward said person, wherein said camera detects said movements of said person that may include said movements indicative of breathing.
PCT/US2019/012568 2018-01-05 2019-01-07 System and method for monitoring life signs of a person WO2019136395A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CA3087705A CA3087705A1 (en) 2018-01-05 2019-01-07 System and method for monitoring life signs of a person
CN201980011948.8A CN111937047A (en) 2018-01-05 2019-01-07 System and method for monitoring vital signs of a person
MX2020007058A MX2020007058A (en) 2018-01-05 2019-01-07 System and method for monitoring life signs of a person.

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201862614164P 2018-01-05 2018-01-05
US62/614,164 2018-01-05
US201862718206P 2018-08-13 2018-08-13
US62/718,206 2018-08-13
US16/239,501 US20190139389A1 (en) 2016-08-19 2019-01-03 System and Method for Monitoring Life Signs of a Person
US16/239,501 2019-01-03

Publications (1)

Publication Number Publication Date
WO2019136395A1 true WO2019136395A1 (en) 2019-07-11

Family

ID=67143818

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/012568 WO2019136395A1 (en) 2018-01-05 2019-01-07 System and method for monitoring life signs of a person

Country Status (4)

Country Link
CN (1) CN111937047A (en)
CA (1) CA3087705A1 (en)
MX (1) MX2020007058A (en)
WO (1) WO2019136395A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6062216A (en) * 1996-12-27 2000-05-16 Children's Medical Center Corporation Sleep apnea detector system
US8348840B2 (en) * 2010-02-04 2013-01-08 Robert Bosch Gmbh Device and method to monitor, assess and improve quality of sleep
US20160022204A1 (en) * 2013-03-13 2016-01-28 Kirill Mostov An apparatus for remote contactless monitoring of sleep apnea
WO2016046789A1 (en) * 2014-09-25 2016-03-31 Moustafa Amin Youssef A non-invasive rf-based breathing estimator
US20160313442A1 (en) * 2015-04-21 2016-10-27 Htc Corporation Monitoring system, apparatus and method thereof
US20160345832A1 (en) * 2015-05-25 2016-12-01 Wearless Tech Inc System and method for monitoring biological status through contactless sensing

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030201894A1 (en) * 2002-03-15 2003-10-30 Songnian Li Vehicle occupant detection system and method using radar motion sensor
US7502643B2 (en) * 2003-09-12 2009-03-10 Bodymedia, Inc. Method and apparatus for measuring heart related parameters
US7417727B2 (en) * 2004-12-07 2008-08-26 Clean Earth Technologies, Llc Method and apparatus for standoff detection of liveness
US20080294019A1 (en) * 2007-05-24 2008-11-27 Bao Tran Wireless stroke monitoring
US20120022348A1 (en) * 2010-05-14 2012-01-26 Kai Medical, Inc. Systems and methods for non-contact multiparameter vital signs monitoring, apnea therapy, sway cancellation, patient identification, and subject monitoring sensors
US20160135734A1 (en) * 2014-11-19 2016-05-19 Resmed Limited Combination therapy for sleep disordered breathing and heart failure

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112438708A (en) * 2019-08-28 2021-03-05 技嘉科技股份有限公司 Personnel condition detection device
CN112438708B (en) * 2019-08-28 2024-05-14 技嘉科技股份有限公司 Personnel status detection device
WO2021174414A1 (en) * 2020-03-03 2021-09-10 苏州七星天专利运营管理有限责任公司 Microwave identification method and system

Also Published As

Publication number Publication date
MX2020007058A (en) 2020-10-28
CN111937047A (en) 2020-11-13
CA3087705A1 (en) 2019-07-11

Similar Documents

Publication Publication Date Title
US20190139389A1 (en) System and Method for Monitoring Life Signs of a Person
US10643081B2 (en) Remote biometric monitoring system
US10447972B2 (en) Infant monitoring system
US10825314B2 (en) Baby monitor
KR100838099B1 (en) Automatic system for monitoring independent person requiring occasional assistance
US6968294B2 (en) Automatic system for monitoring person requiring care and his/her caretaker
US20190279481A1 (en) Subject detection for remote biometric monitoring
US20070156060A1 (en) Real-time video based automated mobile sleep monitoring using state inference
CN106162073A (en) Communication equipment
CN105096527A (en) Monitoring and alerting method and system based on child's sleep state
WO2014198570A1 (en) System, method and device for monitoring light and sound impact on a person
CN107257651A (en) The scene detection of medical monitoring
CA2541729A1 (en) Wireless monitoring device used for childcare and smoke detection
CN106605238A (en) Occupancy monitoring
WO2019136395A1 (en) System and method for monitoring life signs of a person
JP2019512331A (en) Timely triggering of measurement of physiological parameters using visual context
US20080117485A1 (en) Home protection detection system featuring use of holograms
JP7468350B2 (en) Condition monitoring device and control method for condition monitoring device
US20200390339A1 (en) System and Method for Monitoring a Person for Signs of Sickness
JPWO2017081995A1 (en) Monitored person monitoring apparatus and method, and monitored person monitoring system
JP2006285795A (en) Monitor system and method for monitoring resident
KR100961476B1 (en) Behavior pattern recognition device and its method
KR102405957B1 (en) System for monitoring safety of living using sound waves and radio waves
KR102331335B1 (en) Vulnerable person care robot and its control method
JP2020201592A (en) Behavior detection device, system including the same, behavior detection method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19736216; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 3087705; Country of ref document: CA)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19736216; Country of ref document: EP; Kind code of ref document: A1)