CN111937047A - System and method for monitoring vital signs of a person


Info

Publication number
CN111937047A
Authority
CN
China
Prior art keywords
signal
person
activity
radar
breathing
Prior art date
Legal status
Pending
Application number
CN201980011948.8A
Other languages
Chinese (zh)
Inventor
Eric Gregory White
David Robert Abrams
Federico Guerrero-Reyes
Current Assignee
Mikul Co., Ltd.
Original Assignee
Mikul Co., Ltd.
Priority date
Filing date
Publication date
Priority claimed from US16/239,501 (published as US2019/0139389A1)
Application filed by Mikul Co., Ltd.
Publication of CN111937047A


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0002 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B5/0015 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
    • A61B5/0022 Monitoring a patient using a global network, e.g. telephone networks, internet
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024 Detecting, measuring or recording pulse rate or heart rate
    • A61B5/02416 Detecting, measuring or recording pulse rate or heart rate using photoplethysmograph signals, e.g. generated by infrared radiation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/0507 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves using microwaves or terahertz waves
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/0816 Measuring devices for examining respiratory frequency
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1113 Local tracking of patients, e.g. in a hospital or private home
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/04 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0407 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis
    • G08B21/0415 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis detecting absence of activity per se
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/04 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0438 Sensor means for detecting
    • G08B21/0492 Sensor dual technology, i.e. two or more technologies collaborate to extract unsafe condition, e.g. video tracking and RFID tracking
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00 Evaluating a particular growth phase or type of persons or animals
    • A61B2503/04 Babies, e.g. for SIDS detection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2562/00 Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
    • A61B2562/02 Details of sensors specially adapted for in-vivo measurements
    • A61B2562/0204 Acoustic sensors
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2562/00 Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
    • A61B2562/02 Details of sensors specially adapted for in-vivo measurements
    • A61B2562/0228 Microwave sensors
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2576/00 Medical imaging apparatus involving image processing or analysis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4806 Sleep evaluation
    • A61B5/4818 Sleep apnoea
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/04 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0438 Sensor means for detecting
    • G08B21/0476 Cameras to detect unsafe condition, e.g. video cameras
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Abstract

A system (10) and method for monitoring a person (14) within a defined area by detecting breathing or the absence of breathing. Respiration is detected using a radar signal (24), a camera signal (22), and/or a microphone signal (26). The radar signal (24), the camera signal (22) and/or the microphone signal (26) are analyzed to determine whether the subject person (14) is active and, if not, whether the person is breathing. An alert is generated if the reflected radar signal (24), the camera signal (22) and the microphone signal (26) all simultaneously indicate that, for a selected period of time, the subject person (14) has not moved and shows no activity.

Description

System and method for monitoring vital signs of a person
Technical Field
In general, the present invention relates to a monitoring device capable of monitoring vital signs of a person while the person is sleeping. More particularly, the present invention relates to monitoring devices that use low energy radar, cameras and/or microphones to monitor vital signs.
Background
There are many monitoring systems designed to monitor various vital signs. For example, in a hospital intensive care unit, a patient is connected to a heart rate monitor, a blood pressure monitor, a blood oxygen monitor, and the like. If any of these monitors detects a condition that exceeds an acceptable threshold, an alarm is raised.
In a hospital environment, the various sensors are typically wired sensors connected directly to the body. This makes the sensors very accurate and resistant to interfering signal noise from external sources.
Monitoring devices are also used outside hospitals in a variety of ways. For example, parents often use baby monitors to watch their children while they sleep. Such monitoring typically begins in infancy and continues until the child outgrows the crib. The monitoring is performed for a number of reasons. Infants are susceptible to Sudden Infant Death Syndrome (SIDS). Infants also face the risk of accidental suffocation and choking as they grow and begin to move. Once a child is able to stand and climb, the child is at risk of falls and entrapment. Monitoring is also used for adults, such as those suffering from sleep apnea or those at elevated risk of death due to illness or age.
Wired sensors are rarely used when monitoring children or ambulatory adults. The wires of such sensors create strangulation and tripping hazards, so the potential for injury may outweigh the potential benefit. Therefore, most monitoring devices sold for home use rely on wireless monitoring. The most common wireless monitoring systems are camera-and-microphone systems, commonly referred to as baby monitors. These devices are placed in a room facing the crib or bed. The baby monitor sends images of the crib or bed to a remote receiver along with any detected audio signals. A person watching the receiver's display can observe any activity in the crib or bed and hear whether its occupant is crying or making any sounds of distress.
The shortcomings of conventional infant monitoring systems are apparent. A baby monitor only detects activity and sound. If an infant is experiencing SIDS, there may be no activity or sound. Likewise, if an adult dies while sleeping, there may be no activity or sound.
Recognizing these shortcomings, improved monitoring devices have been developed for home use. Some of these monitoring devices use low-energy radar to monitor sleeping people. The sensitivity of the radar is sufficient to detect the slow expansion and contraction of the chest when a person inhales and exhales. Such prior art monitoring systems are exemplified by Chinese Patent Publication No. CN104133199A and Chinese Patent Publication No. CN103110422A.
Radar-based monitoring systems also have some disadvantages. Even with directional antennas, radar energy propagates from the antenna in all directions, resulting in an omnidirectional coverage area. The radar system can therefore detect the movement of objects, pets and people far from the crib or bed being monitored. Movement from non-target objects, pets and people may be incorrectly interpreted by the monitoring system as activity within the crib or bed. Consequently, if a person stops breathing, erroneously detected activity may delay or prevent detection of the danger.
In the prior art, monitoring systems have been developed that combine a conventional camera baby monitor with a low-energy radar monitor. Such prior art systems use both a camera and a radar transceiver to monitor the person in the crib or bed. However, the outputs of the camera system and the radar system are not cross-correlated. Rather, if the radar system detects an alarm condition, the camera system is simply there to check whether the alarm is a false alarm. If the person being monitored stops breathing and the radar fails to detect the condition due to falsely detected activity, the camera system will not detect the danger. Such prior art combination systems are exemplified by U.S. Patent Application Publication No. 2016/0313442 and Chinese Patent Publication No. CN102835958 to Ho.
Signals captured by radar systems and/or camera systems that contain relevant data can easily be washed out by noise and by signals containing irrelevant data. For example, the chest movement of an infant sleeping wrapped in a tight blanket is very small. Such movement is difficult to detect using low-energy radar and/or a camera. The activity caused by breathing is hidden among signals caused by gross physical movement, signals caused by activity in the surrounding environment, and signal noise. Signal processing algorithms must therefore be used to separate the useful signal from the noise and extraneous signals. The signal processing algorithms used in the prior art tend to generate a large number of false alarms in the hope that true alarms are never missed. However, the large number of false alarms makes prior art systems undesirable and leads people to stop using them after experiencing a string of false alarms. As a result, many prior art monitoring systems are no better than no monitoring system at all.
Therefore, there is a need for a wireless monitoring system that is capable of monitoring a person in a crib or bed by detecting even minimal activity caused by breathing. There is also a need for a system that can separate the useful signal from noise and extraneous signals to produce a more reliable system with fewer false alarms. There is also a need for a system that can analyze signals in real time without having to perform the signal analysis at a remote location. These needs are met by the present invention, as described and claimed below.
Disclosure of Invention
The present invention is a system and method for wirelessly monitoring a person. The system may detect breathing or non-breathing of a subject person, such as an infant in a crib or an adult with sleep apnea.
The systems and methods may use radar signals, camera signals, and/or microphone signals to detect respiration. Using a radar transceiver, a radar signal is directed to the area where the subject person is sleeping. The radar signal reflects off the subject person, thereby generating a reflected radar signal. Contained within the reflected radar signal is data relating to breathing and/or the rhythmic activity of a beating heart.
Similarly, the camera is directed to the area in which the subject person is sleeping. The camera detects the movement of the subject person. Included in the detected activity is activity caused by rhythmic breathing and/or beating heart.
At least one microphone also monitors the subject person's area. The microphone detects a sound made by the subject person. Included in the detected sounds are sounds caused by rhythmic breathing.
The reflected radar signal, the signal from the camera, and the signal from the microphone are fused to determine whether the subject person is active and, if the subject person is not active, whether the person is breathing. An alarm is generated if the reflected radar signal, the camera signal and the sound signal all simultaneously indicate that the subject person is not moving and not breathing.
Drawings
For a better understanding of the present invention, reference is made to the following description of exemplary embodiments of the invention, which is to be considered in connection with the accompanying drawings, in which:
FIG. 1 illustrates an exemplary embodiment of a monitoring system of the present invention;
FIG. 2 shows a schematic view of a monitoring unit used in the monitoring system of the present invention;
FIG. 3 shows a logic diagram illustrating operations performed within a monitoring unit;
FIG. 4 is a block diagram illustrating operational details of the microphone feature extraction process referred to in FIG. 3;
FIG. 5 shows a camera field of view and the regions into which it may be segmented by a user;
FIG. 6 is a block diagram illustrating operational details of the camera feature extraction process referred to in FIG. 3;
FIG. 7 is a block diagram showing details of the operation of the radar signal filtering process referred to in FIG. 3;
FIG. 8 is a block diagram illustrating operational details of the radar feature extraction process referred to in FIG. 3; and
FIG. 9 illustrates an exemplary screen generated by a computing device for interfacing with the monitoring unit and user of FIG. 2.
Detailed Description
Although the monitoring system of the present invention may be used in many institutional settings, such as hospitals and nursing homes, the system is particularly suited for in-home use. Thus, for purposes of description and illustration, an exemplary embodiment of a monitoring system of the present invention for monitoring a person in a bed or crib in a household is chosen. The illustrated embodiments are, however, merely exemplary and are not to be construed as limiting the scope of the appended claims.
Referring to fig. 1, an overview of a monitoring system 10 is shown. The monitoring system 10 comprises a monitoring unit 12. The monitoring unit 12 is placed in a room and directed towards a subject person 14, such as a child in a crib or an adult in a bed. In a preferred embodiment, the monitoring unit 12 may actively emit light 16, radar signals 18, and audio signals 20. The emitted light 16 is preferably in the infrared spectrum so as to be invisible to the subject person 14. The transmitted radar signal 18 is a low-energy signal that is not harmful to the subject person 14 or to any sensitive electronic devices (e.g., pacemakers). The emitted audio signals 20 are audible to the subject person 14 being monitored. As will be explained later, the audio signal 20 may be music, an alarm, or the transmitted voice of another person.
The monitoring unit 12 receives light 22, reflected radar signals 24 and ambient sound 26. The received light 22 includes existing ambient light and the return of any illumination projected from the monitoring unit 12. The reflected radar signal 24 is the echo of the radar transmitted by the monitoring unit 12. The ambient sound 26 is any audible sound detected by the monitoring unit 12. The light 22 received by the monitoring unit 12, the reflected radar signal 24 and the ambient sound 26 are all processed internally. The monitoring unit 12 uses circuitry and processing software to specifically extract features associated with the breathing of the subject person 14. The monitoring unit 12 processes the light 22, the reflected radar signal 24 and the ambient sound 26 in real time. The processed information may be accessed by a remote computing device 28 (e.g., a smartphone) running the application software 30 needed to display the processed signal information. Depending on the location of the remote computing device 28, the processed signals may be shared directly with the remote computing device 28 or may be forwarded to the remote computing device 28 over a data network 32, such as a cellular network or the internet.
An observer 34, such as a parent or nurse, can view the remote computing device 28 and receive the processed information. As will be explained later, the processed information is formatted in a user-friendly manner. Likewise, if an alarm condition is detected by the monitoring unit 12, the observer 34 is immediately notified. An observer 34 may communicate with the monitoring unit 12 and cause the monitoring unit 12 to broadcast music or words that may be heard by the subject person 14 being monitored. In this way, a subject person 14 who is anxious can be pacified, and a subject person 14 who is in distress can be comforted until assistance arrives at the scene.
Referring to fig. 2, the main components of the monitoring unit 12 are shown and described. The monitoring unit 12 includes a camera 36 for imaging a sleep area in a crib, bed, cradle, or the like. The camera 36 preferably has the capability of imaging at least some of the visible and infrared spectrum. In this way, the camera 36 is able to image during the day and in the dark.
The camera 36 has an objective lens 38. The objective lens 38 is oriented in a particular direction as shown by line 40. The objective lens 38 of the camera 36 is directed toward the subject person 14 being monitored. The light 22 captured by the camera 36 is converted into camera data 42, which is processed in a manner described later.
One or more LEDs 44 may be provided for illuminating the subject person 14 being monitored. The LED 44 is preferably an infrared light-emitting diode that produces light detectable by the camera 36 but not by the eyes of the subject person 14 being monitored. It should be appreciated that the LED 44 is an economical source of infrared light. However, other sources of IR light may be used in this design, such as low-power IR lasers or filtered multi-color lamps. Regardless of the source of the IR light, the intensity of the light is sufficient to illuminate the area of the subject person 14 being monitored so that the camera 36 can image that area.
A radar transceiver 46 is provided. The radar transceiver 46 is preferably a low-power pulse Doppler radar, although different radars may be used. In this manner, the radar transceiver 46 can detect speed and distance. The radar transceiver 46 is configured to have its maximum range in a particular direction 48. The direction 48 of maximum range is parallel to the direction line 40 of the camera 36. In this way, the radar transceiver 46 covers the same area as the area imaged by the camera 36. This makes the radar transceiver 46 more sensitive in the direction of the target area. The radar transceiver 46 transmits radar signals 18 covering the target area and detects the returned reflected radar signals 24. The reflected radar signal 24 is detected by the radar transceiver 46 and converted into radar data 50. The radar data 50 is processed in a manner described later.
One or more microphones 52 are provided as part of the monitoring unit 12. Preferably, at least two microphones 52 are used. The microphone 52 is oriented toward the target area where the camera 36 and radar transceiver 46 are oriented. In this way, any ambient sound 26 originating within the target area will be detected by the microphone 52. The microphone 52 produces audio data 54. The audio data 54 is processed in a manner described later.
The computing device 56 receives the camera data 42, the radar data 50, and the audio data 54. The computing device 56 includes a clock 58 that enables data to be indexed by time. The computing device 56 may have high capacity storage 60 or access to cloud storage 33 over the data network 32 so that a large cache of time-indexed data will be stored for later viewing.
The computing device 56 is capable of using the Bluetooth® transceiver 62 and/or the WiFi transceiver 64 to exchange data with an external source. Other data transmission systems may also be used, such as cellular network transmission and/or a hardwired connection. The computing device 56 also controls one or more speakers 66. The speaker 66 may broadcast the audio signal 20 into the environment of the monitoring unit 12. As will be explained later, the broadcast audio signal 20 may be soothing music that can help a child fall asleep, or a blaring alarm.
The computing device 56 is also connected to a user interface 68. The user interface 68 includes an on/off switch 70 for the monitoring unit 12 and may include status lights and sensitivity controls that are manually adjustable by the user.
The computing device 56 is programmable and runs dedicated operating software 72. The operating software 72 can be updated periodically using programming updates received through the Bluetooth® transceiver 62, the WiFi transceiver 64, or another data transmission system.
Referring to fig. 3 in conjunction with fig. 2, it will be appreciated that the computing device 56 receives the audio data 54 from the microphone 52, the camera data 42 from the camera 36, and the radar data 50 from the radar transceiver 46. The data is analyzed by the computing device 56 using the operating software 72. The purpose of the analysis is first to determine whether the subject person 14 is within the area being monitored. If the subject person 14 is in the monitored area, features are then extracted from the audio data 54, the camera data 42, and the radar data 50, and it is determined whether those features are attributable to the breathing of the subject person 14. Changes in these features are then monitored. If the features indicate that the subject person 14 has stopped breathing, an alarm is generated.
Referring to fig. 3 in conjunction with fig. 2, it can be seen that the computing system 56 processes the audio data 54, the camera data 42, and the radar data 50. All three sets of data are analyzed to detect signal features indicative of respiration.
The processing of the audio data 54 from the microphone 52 is explained first. Both crying and breathing sounds can be detected in the audio data 54. Detection of crying can be accomplished using known sound processing techniques, such as those described in U.S. Patent No. 9,020,622 to Shoham. More complicated is the effective isolation of features in the audio data 54 that correspond to faint breathing sounds. To isolate the breathing sound, the audio data 54 from the microphone 52 is first filtered. See block 80. The filtering may include directional filtering, which can eliminate some sound signals that do not originate from the target region. The directional filtering is optional. In the required filtering step, the ambient sound signal 26 is filtered in an attempt to isolate the breathing sound from other ambient noise. The required filtering includes subjecting the audio data 54 to a low-pass filter 81. This attenuates signals whose frequency is too high to be indicative of breathing. After the audio data 54 is first filtered, it is further processed to extract the desired feature, in this case the breathing sound. See block 82.
The details of the feature extraction process are shown in fig. 4. Referring to fig. 4 in conjunction with figs. 3 and 2, it can be seen that filtering the raw audio data 54 in the filtering process of block 80 yields a filtered audio signal 74. The goal of the feature extraction process is to extract the respiration waveform from the filtered audio signal 74. To do so, the filtered audio signal 74 is resampled with a reduction factor. See block 84. The preferred reduction factor for resampling is 1/1000, although other reduction factors may be used. The resampled audio data is compressed using an arctan function. See block 86. The compressed audio data is then subjected to a fast Fourier transform to find the occurrences of the maximum peak signal events. See block 88. These maximum peak events correspond to the respiration waveform of interest. The resulting respiration rate waveform 76 is then used in the group classification process. See block 90.
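For illustration, a minimal Python sketch of one plausible reading of this audio pipeline (blocks 80 through 88) follows. The microphone sample rate, the low-pass cutoff, and the 0.25 to 1.0 Hz breathing band are assumed values; the text specifies only the 1/1000 resampling factor, the arctan compression, and the FFT peak search.

```python
import numpy as np
from scipy.signal import butter, lfilter, resample

def extract_breath_rate(audio, fs=48000):
    # Blocks 80/81: low-pass filter to attenuate frequencies too high
    # to be indicative of breathing (the 200 Hz cutoff is an assumed value)
    b, a = butter(4, 200 / (fs / 2), btype="low")
    filtered = lfilter(b, a, audio)

    # Block 84: resample with the preferred 1/1000 reduction factor
    reduced = resample(filtered, max(len(filtered) // 1000, 1))
    fs_r = fs / 1000.0

    # Block 86: compress the dynamic range with an arctan function
    compressed = np.arctan(reduced / (np.std(reduced) + 1e-12))

    # Block 88: fast Fourier transform; the largest peak in the assumed
    # 0.25-1.0 Hz breathing band is taken as the respiration feature
    spectrum = np.abs(np.fft.rfft(compressed))
    freqs = np.fft.rfftfreq(len(compressed), d=1.0 / fs_r)
    band = (freqs >= 0.25) & (freqs <= 1.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0  # breaths per minute
```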
Returning to figs. 3 and 2, it is shown that the camera data 42 is also processed by the computing system 56. The camera data 42 is first subjected to region segmentation. See block 92. When setting up the monitoring unit 12, the person placing the monitoring unit 12 points the camera 36 at the crib or bed in the target area. Referring to figs. 5 and 3, it can be seen that the camera 36 images a field of view 93. The person viewing the camera image can also manage the image within the field of view 93. The target area 91 is selected as the area where the subject person 14 is most likely to be located. The target area 91 is typically a crib mattress or a region of a mattress. An area around the target area 91 is then defined as a surrounding area 95. The segmentation process is used to distinguish activity attributable to the subject person 14 from all other detected activity. Thus, when looking for activity caused by breathing, the computer system 56 considers only data originating within the selected target area 91.
Referring to fig. 6 in conjunction with figs. 2 and 3, it can be seen that after segmentation (block 92), the camera data 42 is further processed to extract the desired feature, in this case the activity associated with respiration. See block 94. After region segmentation, the camera data 42 comprises a series of frame images. The frames undergo a color space transformation that changes the image from color to grayscale. See block 97. This reduces the amount of processing required to analyze the images, which improves the response time of the system. The grayscale image frames are then stored in a ring buffer 99. The grayscale image frames are analyzed by the computing system 56 to determine activity. It should be appreciated that to analyze activity, image frames from different points in time are compared. The image capture rate depends on the frame rate of the camera 36. With many cameras compatible with the system, capturing one out of every ten to twenty-five frames is sufficient. See block 96.
Subsequently captured frames are compared, wherein the difference between the image frames is the sum of the first frame minus the delayed subsequent frame, as shown in block 98. Any difference between the image frames represents activity that occurred during the delay between captures. The sum of the differences over time (buffer length N) is subjected to frequency analysis to determine the breathing frequency as a feature. See block 100.
By comparing image frames over time, a rhythmic pattern of activity can be detected. A fast Fourier transform is used to identify the largest peak signal events, which isolates the representation of rhythmic activity. See block 102. These rhythmic patterns are distinguishable from random periods of physical activity. The rhythmic pattern corresponds to activity caused by respiration and/or the heartbeat. The result is a camera-derived respiration waveform and/or heartbeat waveform that is used later in the group classification process. See block 104.
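A sketch of the camera pipeline (blocks 92 through 102) under similar assumptions follows. OpenCV is used for the color-space transformation as an implementation choice, and the target-area coordinates, capture interval, and frame rate are illustrative values rather than values taken from the patent.

```python
import collections
import numpy as np
import cv2  # OpenCV: an implementation choice, not named in the patent

def camera_breath_frequency(frames, fps=25.0, capture_every=10):
    """Blocks 92-102: grayscale conversion, ring buffer, frame differencing
    within the target area 91, then an FFT of the motion signal."""
    ring = collections.deque(maxlen=2)        # block 99: ring buffer (minimal depth)
    motion = []
    for i, frame in enumerate(frames):
        if i % capture_every:                 # block 96: one frame in ten to twenty-five
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # block 97: color -> grayscale
        roi = gray[100:300, 150:400].astype(np.float32)  # assumed target area 91 bounds
        if ring:
            # block 98: sum of differences between the frame and its predecessor
            motion.append(np.abs(roi - ring[-1]).sum())
        ring.append(roi)

    signal = np.asarray(motion) - np.mean(motion)
    fs = fps / capture_every                  # effective capture rate
    spectrum = np.abs(np.fft.rfft(signal))    # blocks 100/102: frequency analysis
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= 0.25) & (freqs <= 1.0)   # assumed breathing band
    return freqs[band][np.argmax(spectrum[band])]  # dominant rhythmic frequency, Hz
```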
The computing system 56 also analyzes the reflected radar signal 24 in an attempt to detect activity associated with breathing and/or the beating heart. Referring to fig. 7 in conjunction with figs. 3 and 2, it will be appreciated that the reflected radar signal 24 is first filtered, as indicated by block 106. To filter the reflected radar signal 24, the signal is fed into a ring buffer 108. The incoming reflected radar signal 24 is phase limited and must undergo phase unwrapping and phase bounding. See blocks 110 and 112. The unwrapped signal is passed through an exponential high-pass filter so that the waveform is centered at zero. See block 114. The unwrapped, high-pass-filtered data is then subjected to a moving-average low-pass filter to smooth the data. See block 116. This produces the filtered data 118. Target features are extracted from the filtered data 118. The target features are returns corresponding to the activity of respiration and/or the beating heart. See block 120 in fig. 3.
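The radar filtering chain (blocks 106 through 116) might be realized as follows. The exponential high-pass is sketched here as subtraction of an exponential-moving-average baseline, and the filter constants are assumptions.

```python
import numpy as np

def filter_radar_phase(phase, alpha=0.995, ma_window=8):
    """Blocks 106-116. `alpha` and `ma_window` are assumed tuning values."""
    # Blocks 110/112: phase unwrapping of the phase-limited signal
    unwrapped = np.unwrap(phase)

    # Block 114: exponential high-pass -- subtract an exponential-moving-average
    # baseline so the waveform is centered at zero
    baseline = np.empty_like(unwrapped)
    acc = unwrapped[0]
    for i, x in enumerate(unwrapped):
        acc = alpha * acc + (1.0 - alpha) * x
        baseline[i] = acc
    centered = unwrapped - baseline

    # Block 116: moving-average low-pass filter to smooth the data
    kernel = np.ones(ma_window) / ma_window
    return np.convolve(centered, kernel, mode="same")  # the filtered data 118
```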
Referring to fig. 8 in conjunction with figs. 3 and 2, it can be seen that the filtered radar return data 118 is placed in a bin buffer 122. Once the bin buffer 122 is full, the root mean square is calculated. See block 124. With the root mean square of each bin buffer known, the signal-to-noise ratio is calculated. See block 126. The true signal-to-noise ratio cannot be calculated directly without a priori knowledge of the signal, so certain assumptions must be made to make the calculation possible. The respiratory rate of the monitored person is assumed to be between 15 and 60 breaths per minute, which translates to a breathing frequency between 0.25 Hz and 1 Hz. If a heartbeat is being detected, a slightly higher frequency range is used. Using the root mean square data, a fast Fourier transform is performed to change the waveform from the time domain to the frequency domain. See block 128. This produces a transformed waveform 130. In addition, a window is applied to each time frame of the bin buffer 122 to prevent wrap-around boundary effects. Using the transformed waveform 130, the maximum spectral amplitude and its corresponding frequency can be calculated.
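A sketch of the per-bin RMS and FFT stage (blocks 124 through 128) is given below. The slow-time sample rate and the choice of a Hanning window are assumptions; the text requires only that some window be applied to each time frame.

```python
import numpy as np

def bin_spectrum(bin_buffer, fs=20.0):
    """Blocks 124-128 for a full bin buffer shaped (range_bins, samples).
    `fs` (the slow-time sample rate) is an assumed value."""
    rms = np.sqrt(np.mean(bin_buffer ** 2, axis=1))        # block 124: RMS per bin

    # Window each time frame to prevent wrap-around boundary effects
    windowed = bin_buffer * np.hanning(bin_buffer.shape[1])

    # Block 128: fast Fourier transform from the time domain to the frequency domain
    spectrum = np.abs(np.fft.rfft(windowed, axis=1))       # transformed waveform 130
    freqs = np.fft.rfftfreq(bin_buffer.shape[1], d=1.0 / fs)
    peak_amp = spectrum.max(axis=1)                        # maximum spectral amplitude
    peak_freq = freqs[spectrum.argmax(axis=1)]             # its corresponding frequency
    return rms, spectrum, freqs, peak_amp, peak_freq
```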
The transformed waveform 130 contains both the desired signal and noise, and the two must be separated. See block 132. To distinguish the signal from the noise in the transformed waveform 130, the fundamental frequency of the subject's breathing rate is determined by finding the largest component of the fast Fourier transform within the assumed breathing rate band of 0.25 Hz to 1.0 Hz. Starting from this largest component, the search walks left and right along the waveform until the amplitude falls to thirty percent (30%) of the peak value. The width between the points at the selected value is defined as the bandwidth of the signal. The remainder of the waveform is designated as noise. If the peak is found to be close to a waveform extremum, i.e. the frequency is equal to zero or equal to FT length/2, the peak is considered invalid and the subsequent bin buffer is analyzed. Likewise, if another value larger than the first calculated peak is found in the fast Fourier transform range, the peak is considered invalid and another bin buffer is analyzed.
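The peak search and bandwidth walk just described might look like the following sketch. The return values and the treatment of spectral magnitude sums as signal and noise measures are illustrative choices, not details from the patent.

```python
import numpy as np

def separate_signal_from_noise(spectrum, freqs):
    """Block 132 for one bin's magnitude spectrum; returns None when the
    peak fails a validity check and the next bin buffer should be analyzed."""
    band = np.where((freqs >= 0.25) & (freqs <= 1.0))[0]   # assumed breathing band
    peak = band[np.argmax(spectrum[band])]                 # fundamental of the breathing rate

    # Invalid if the peak sits at a waveform extremum (zero or FT length/2)
    if peak <= 0 or peak >= len(spectrum) - 1:
        return None
    # Invalid if a larger value exists elsewhere in the FFT range
    if spectrum.max() > spectrum[peak]:
        return None

    # Walk left and right until the amplitude falls to 30% of the peak
    threshold = 0.3 * spectrum[peak]
    lo = peak
    while lo > 0 and spectrum[lo - 1] > threshold:
        lo -= 1
    hi = peak
    while hi < len(spectrum) - 1 and spectrum[hi + 1] > threshold:
        hi += 1

    signal = spectrum[lo:hi + 1].sum()                     # bandwidth of the signal
    noise = spectrum.sum() - signal                        # remainder designated as noise
    return freqs[peak], (freqs[lo], freqs[hi]), signal / max(noise, 1e-12)
```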
The data from the various bin buffer analyses are then correlated in a bin correlation step and accumulated in a bin aggregation step. See blocks 136 and 138 of fig. 3. During bin correlation, the waveform corresponding to each processed bin buffer is digitized into ones and zeros. This is achieved by setting a threshold and assigning a one to values above the threshold and a zero to values below it. This forms a packet. Subsequent packets are then compared using an XOR comparator. The results are saved to a correlation matrix. Because the comparisons are simple and the corresponding processing times fast, changes in the field are quickly identified. The region of correlated data identified in the radar field is used to identify the location of the monitored person within the radar field. Once the location of the person is identified, the analysis of the radar return data 50 can be limited to returns from the identified area.
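One plausible reading of the bin correlation step is sketched below: each bin's waveform is digitized against a threshold and XOR'ed against that bin's previous packet, with the results forming a correlation matrix. The threshold convention and packet layout are assumptions.

```python
import numpy as np

def bin_correlation(waveforms, prev_packets, threshold=0.0):
    """Block 136 for one time step: rows of `waveforms` are per-bin waveforms,
    `prev_packets` is the digitized packet set from the previous step."""
    # Digitize: assign a 1 to values above the threshold and a 0 to values below
    packets = (np.asarray(waveforms) > threshold).astype(np.uint8)

    # XOR comparator against the previous packets; the results go into the
    # correlation matrix, where low-change rows locate the monitored person
    correlation = np.bitwise_xor(packets, prev_packets)
    change = correlation.sum(axis=1)          # per-bin change count
    return packets, correlation, change
```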
During bin aggregation, the bins identified as containing data from the person being monitored are grouped. Within each group, the range bin with the largest signal-to-noise ratio is identified. Every range bin in the group whose signal-to-noise ratio exceeds a percentage of that maximum contributes its signal-to-noise ratio to the group's sum. Subsequent groups are analyzed to determine how well they match the first group, and the group with the highest matching score is selected as the next group. At this point in the analysis, the data attributable to the person being monitored has been isolated and the signal-to-noise ratio is known. Using these variables, the radar-derived respiration waveform 140, which most likely represents the rhythmic respiration of the monitored person, can be isolated. See block 139.
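A sketch of the bin aggregation step under stated assumptions: the patent says only that SNRs exceeding "a percentage of the maximum" contribute, so the 50% fraction and the SNR-weighted combination used here are illustrative.

```python
import numpy as np

def aggregate_bins(snr, waveforms, fraction=0.5):
    """Block 138 for one group of person-bearing bins. `fraction` stands in
    for the unspecified percentage of the maximum signal-to-noise ratio."""
    snr = np.asarray(snr, dtype=float)
    keep = snr >= fraction * snr.max()        # bins strong enough to contribute

    # SNR-weighted combination of the kept bins into a single waveform,
    # taken here as the radar-derived respiration waveform 140
    weights = snr[keep] / snr[keep].sum()
    return (weights[:, None] * np.asarray(waveforms)[keep]).sum(axis=0)
```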
From the foregoing analyses, the microphone-derived respiration waveform 76, the camera-derived respiration waveform 104, and the radar-derived respiration waveform 140 are known. The waveforms 76, 104, 140 are then classified using a group classification process. See block 90. The three classes used in the system of the present invention are breathing, inactivity and activity. All groups default to the inactive state. If any waveform from any source represents breathing, the net result for the entire group is set to breathing. Similarly, if any waveform from any source represents activity of the person being monitored, the net result for the entire group is set to activity. However, if all sources indicate a state of no activity for a selected period of time, an alarm condition occurs. The sensitivity of the system can be controlled by adjusting the waveform thresholds and applying a probability function to each class of data.
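The fusion logic of the group classification process reduces to a small function. The precedence of activity over breathing when the sources disagree is an assumption, since the text does not order the two rules.

```python
def classify_group(source_states):
    """Block 90: fuse the per-source classifications derived from the
    microphone, camera, and radar waveforms. `source_states` is an iterable
    such as ('breathing', 'inactive', 'activity')."""
    states = set(source_states)
    if "activity" in states:    # any source showing activity sets the group
        return "activity"
    if "breathing" in states:   # any source showing breathing sets the group
        return "breathing"
    return "inactive"           # all groups default to the inactive state
```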
Once a state is determined, it is verified, as shown in block 142. Verification is used to reduce the occurrence of false alarms. The default state is the inactive state, i.e., the alarm state. The existing state can be changed only when a new state persists for a selected period of time. The time period is adjustable and is preferably between 1 and 10 seconds. Thus, if the system detects breathing or activity within the set period, the default inactive state is replaced by the breathing state or the active state. The inactive state is not restored until no breathing or activity has been detected for the duration of the threshold period. As shown in block 144, if the inactive state persists for the threshold period of time, an alert is sent.
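The validation step is essentially a debounced state machine. A minimal sketch follows, with the 5-second default hold time chosen from the stated 1 to 10 second range.

```python
import time

class StateValidator:
    """Block 142: the existing state changes only after the new state
    persists for `hold_s` seconds (adjustable, 1-10 s per the text)."""

    def __init__(self, hold_s=5.0):
        self.hold_s = hold_s
        self.state = "inactive"              # the default, alarm-eligible state
        self._candidate = None
        self._since = 0.0

    def update(self, observed, now=None):
        """Feed one group classification; returns True when an alert is due."""
        now = time.monotonic() if now is None else now
        if observed == self.state:
            self._candidate = None           # no pending change
        elif observed != self._candidate:
            self._candidate, self._since = observed, now  # start timing the new state
        elif now - self._since >= self.hold_s:
            self.state = observed            # persisted long enough; accept it
            self._candidate = None
        return self.state == "inactive"      # True -> send the alert (block 144)
```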
Returning to fig. 1, it will be understood that when an alert is sent, it goes first to the remote computing device 28 of the observer 34. The remote computing device 28 provides an audio and visual alert to the observer 34. Using the remote computing device 28, the observer 34 can stream the camera data 42 and the audio data 54 in real time. The observer can therefore quickly determine whether the alarm is a false alarm. For example, the observer 34 may see that the subject person 14 being monitored has simply woken up and left the monitored area of the crib or bed.
If the alarm condition appears to be genuine, the observer 34 has certain options. First, the observer 34 may cause the monitoring unit 12 to sound a loud audible alarm. This may startle the sleeping person into breathing. Additionally, the observer 34 may stream live audio to the monitoring unit 12. This enables the observer 34 to speak to the person being monitored, in the hope of rousing the person back into conscious breathing.
Referring to fig. 9 in conjunction with figs. 1 and 2, an exemplary screen 150 is shown illustrating what a user may see on his/her remote computing device 28. The screen 150 shows the live camera data 42. Also on the screen 150 are various icons 152. By pressing the various icons 152, the observer 34 may choose to listen to the audio feed from the monitoring unit 12, send an audio feed to the monitoring unit 12, and/or sound an alarm. Selectable icons for other functions, such as automatically dialing 911, may also be included.
In addition to the live camera feed, the observer 34 can see a rendering of the respiration waveform 154. The respiration waveform 154 may be the microphone-derived respiration waveform 76, the camera-derived respiration waveform 104, the radar-derived respiration waveform 140, or a composite of any combination. The current state 156, i.e., active, breathing or not breathing, is shown, along with a time indication 158 of how long that state has persisted. For example, in FIG. 9, the state 156 is the active state and has persisted for the past 15 minutes. This may be an indication that the subject person 14 is awake.
Using the application software 30 of the remote computing device 28 and by pressing the appropriate icon 152, the observer 34 may choose to send his/her voice to the monitoring unit 12 in an attempt to calm or reassure the subject person 14. Alternatively, the observer 34 may choose to transmit music or a recorded story to the monitoring unit 12 to help the subject person 14 fall asleep again.
It should be understood that the embodiments of the invention shown and described are exemplary only, and that numerous changes to the embodiments may be made by those skilled in the art. All such embodiments are intended to be included within the scope of the present invention as defined in the appended claims.

Claims (20)

1. A method of monitoring a person within a defined area, the method comprising the steps of:
directing a radar signal toward the defined area, wherein the radar signal reflects from the person within the defined area and generates a reflected radar signal;
directing a camera at the defined area, wherein the camera detects activity of the person in the defined area and generates a camera signal representative of the activity;
monitoring the defined area with at least one microphone, wherein sound emitted by the person in the defined area is detected by the at least one microphone and the at least one microphone produces an audio signal representative of the sound;
analyzing the reflected radar signal, the camera signal and the audio signal to determine whether the person is active within the defined area;
generating an alert in the event that the reflected radar signal, the camera signal and the audio signal all simultaneously represent that the person is not active in the defined area for a selected period of time.
2. The method of claim 1, wherein the activity is a physical activity selected from the group consisting of respiration and heartbeat.
3. The method of claim 2, further comprising analyzing the reflected radar signal to extract signal data representative of the physical activity.
4. The method of claim 2, further comprising analyzing the camera signal to extract signal data representative of the physical activity.
5. The method of claim 2, further comprising analyzing the audio signal to extract signal data representative of the physical activity.
6. The method of claim 1, wherein the radar signal is generated by a pulse doppler radar transceiver.
7. The method of claim 6, wherein the Doppler radar transceiver, the camera and the at least one microphone are contained in a single monitoring unit directed at the defined area.
8. The method of claim 7, wherein the monitoring unit comprises a processing unit that analyzes the reflected radar signal, the camera signal, and the audio signal.
9. The method of claim 8, wherein generating an alarm further comprises the step of transmitting an alarm signal from the monitoring unit to a remote computing device.
10. The method of claim 9, wherein the monitoring unit transmits the camera signal to the remote computing device.
11. A method of monitoring whether activity of a person indicative of breathing has ceased, the method comprising the steps of:
providing a radar transceiver that directs radar signals toward the person, wherein the radar signals detect the activity of the person, which may include activity indicative of breathing;
providing an imaging system that captures images of the person, wherein the imaging system optically detects the activity of the person, which may include the activity representing respiration;
analyzing the radar signal and the image to extract data corresponding to the activity representative of respiration;
generating an alert in the event that the activity indicative of breathing is not found for a selected period of time.
12. The method of claim 11, further comprising providing at least one microphone capable of detecting sounds representative of breathing and producing sound signals therefrom.
13. The method of claim 12, wherein analyzing the radar signal and the image further comprises analyzing the sound signal to extract the sound representative of breathing.
14. The method of claim 13, wherein an alarm is generated only if the activity representative of breathing and the sound representative of breathing are not found within the selected time period.
15. The method of claim 12, wherein the radar transceiver, the imaging system, and the at least one microphone are contained in a single monitoring unit directed at the person.
16. The method of claim 15, wherein the monitoring unit includes a processing unit that analyzes the radar signal, the image, and the sound signal.
17. The method of claim 16, wherein generating an alarm further comprises the step of transmitting an alarm signal from the monitoring unit to a remote computing device.
18. The method of claim 16, wherein the monitoring unit transmits the camera signal to the remote computing device.
19. A method of monitoring whether activity of a person indicative of breathing has ceased, the method comprising the steps of:
providing a radar transceiver that directs radar signals toward the person, wherein the radar signals detect the activity of the person, which may include activity indicative of breathing;
providing a microphone for monitoring the person, wherein the microphone detects sound signals emitted by the person, which may include sounds representative of breathing;
analyzing the radar signal and the sound signal to extract data corresponding to the activity representative of respiration and the sound representative of respiration;
generating an alert if the activity indicative of breathing and the sound indicative of breathing are not detected for a selected period of time.
20. The method of claim 19, further comprising providing a camera directed at the person, wherein the camera detects the activity of the person, which may include the activity representing breathing.
CN201980011948.8A 2018-01-05 2019-01-07 System and method for monitoring vital signs of a person Pending CN111937047A (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US201862614164P 2018-01-05 2018-01-05
US62/614,164 2018-01-05
US201862718206P 2018-08-13 2018-08-13
US62/718,206 2018-08-13
US16/239,501 US20190139389A1 (en) 2016-08-19 2019-01-03 System and Method for Monitoring Life Signs of a Person
US16/239,501 2019-01-03
PCT/US2019/012568 WO2019136395A1 (en) 2018-01-05 2019-01-07 System and method for monitoring life signs of a person

Publications (1)

Publication Number Publication Date
CN111937047A true CN111937047A (en) 2020-11-13

Family

ID=67143818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980011948.8A Pending CN111937047A (en) 2018-01-05 2019-01-07 System and method for monitoring vital signs of a person

Country Status (4)

Country Link
CN (1) CN111937047A (en)
CA (1) CA3087705A1 (en)
MX (1) MX2020007058A (en)
WO (1) WO2019136395A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021174414A1 (en) * 2020-03-03 2021-09-10 苏州七星天专利运营管理有限责任公司 Microwave identification method and system


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6753780B2 (en) * 2002-03-15 2004-06-22 Delphi Technologies, Inc. Vehicle occupant detection system and method using radar motion sensor
EP2319410A1 (en) * 2003-09-12 2011-05-11 BodyMedia, Inc. Apparatus for measuring heart related parameters
US7417727B2 (en) * 2004-12-07 2008-08-26 Clean Earth Technologies, Llc Method and apparatus for standoff detection of liveness
US20160135734A1 (en) * 2014-11-19 2016-05-19 Resmed Limited Combination therapy for sleep disordered breathing and heart failure

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6062216A (en) * 1996-12-27 2000-05-16 Children's Medical Center Corporation Sleep apnea detector system
US20080294019A1 (en) * 2007-05-24 2008-11-27 Bao Tran Wireless stroke monitoring
US20110190594A1 (en) * 2010-02-04 2011-08-04 Robert Bosch Gmbh Device and method to monitor, assess and improve quality of sleep
US20120022348A1 (en) * 2010-05-14 2012-01-26 Kai Medical, Inc. Systems and methods for non-contact multiparameter vital signs monitoring, apnea therapy, sway cancellation, patient identification, and subject monitoring sensors
US20160022204A1 (en) * 2013-03-13 2016-01-28 Kirill Mostov An apparatus for remote contactless monitoring of sleep apnea
WO2016046789A1 (en) * 2014-09-25 2016-03-31 Moustafa Amin Youssef A non-invasive rf-based breathing estimator
US20160313442A1 (en) * 2015-04-21 2016-10-27 Htc Corporation Monitoring system, apparatus and method thereof
US20160345832A1 (en) * 2015-05-25 2016-12-01 Wearless Tech Inc System and method for monitoring biological status through contactless sensing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
芮静康 (Rui Jingkang) et al.: "电工技术百问" [One Hundred Questions on Electrical Technology], 31 August 2000 *

Also Published As

Publication number Publication date
CA3087705A1 (en) 2019-07-11
MX2020007058A (en) 2020-10-28
WO2019136395A1 (en) 2019-07-11

Similar Documents

Publication Publication Date Title
US20190139389A1 (en) System and Method for Monitoring Life Signs of a Person
US10643081B2 (en) Remote biometric monitoring system
US20240005768A1 (en) Apparatus, system, and method for motion sensing
US10825314B2 (en) Baby monitor
EP2120712B1 (en) Arrangement and method to wake up a sleeping subject at an advantageous time instant associated with natural arousal
US20180035082A1 (en) Infant monitoring system
KR100838099B1 (en) Automatic system for monitoring independent person requiring occasional assistance
US6968294B2 (en) Automatic system for monitoring person requiring care and his/her caretaker
EP3007620B1 (en) System and method for monitoring light and sound impact on a person
US20070156060A1 (en) Real-time video based automated mobile sleep monitoring using state inference
US20190279481A1 (en) Subject detection for remote biometric monitoring
CA2541729A1 (en) Wireless monitoring device used for childcare and smoke detection
JP2019512331A (en) Timely triggering of measurement of physiological parameters using visual context
US20080117485A1 (en) Home protection detection system featuring use of holograms
CN111937047A (en) System and method for monitoring vital signs of a person
JP7468350B2 (en) Condition monitoring device and control method for condition monitoring device
US20200390339A1 (en) System and Method for Monitoring a Person for Signs of Sickness
KR102495203B1 (en) Apparatus for determining sleep status and assistancing sleep and control method thereof
KR102405957B1 (en) System for monitoring safety of living using sound waves and radio waves
US9384644B1 (en) Sleepwalking motion detection motion alarm
KR102658390B1 (en) Devices, systems, and methods for health and medical sensing
CN108806177A (en) Security protection control system
WO2021122136A1 (en) Device, system and method for monitoring of a subject

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40039612

Country of ref document: HK

RJ01 Rejection of invention patent application after publication

Application publication date: 20201113