KR101888677B1 - Rescue information generating method using drone collecting image at disaster scene - Google Patents


Info

Publication number
KR101888677B1
Authority
KR
South Korea
Prior art keywords
image
server
drone
information
average value
Prior art date
Application number
KR1020160016966A
Other languages
Korean (ko)
Other versions
KR20170095505A (en)
Inventor
홍광석
박상민
Original Assignee
성균관대학교산학협력단
Priority date
Filing date
Publication date
Application filed by 성균관대학교산학협력단 filed Critical 성균관대학교산학협력단
Priority to KR1020160016966A
Publication of KR20170095505A
Application granted
Publication of KR101888677B1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0062Arrangements for scanning
    • A61B5/0064Body surface scanning
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/01Measuring temperature of body parts ; Diagnostic temperature sensing, e.g. for malignant or inflamed tissue
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/02Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/02Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/021Measuring pressure in heart or blood vessels
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/08Detecting, measuring or recording devices for evaluating the respiratory organs
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/145Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B5/1455Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue using optical sensors, e.g. spectral photometrical oximeters
    • A61B5/14551Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue using optical sensors, e.g. spectral photometrical oximeters for measuring blood gases
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64CAEROPLANES; HELICOPTERS
    • B64C39/00Aircraft not otherwise provided for
    • B64C39/02Aircraft not otherwise provided for characterised by special use
    • B64C39/024Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64DEQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENT OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
    • B64D47/00Equipment not otherwise provided for
    • B64D47/08Arrangements of cameras
    • B64C2201/127
    • B64C2201/146

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Pathology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Business, Economics & Management (AREA)
  • Cardiology (AREA)
  • Physiology (AREA)
  • Tourism & Hospitality (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Vascular Medicine (AREA)
  • Human Resources & Organizations (AREA)
  • Optics & Photonics (AREA)
  • Pulmonology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Economics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

A method for generating rescue information using images collected by a drone comprises the steps of: a server receiving images collected by a drone; the server detecting a region of interest within a human skin region in the images; the server estimating biometric information of the person using the color values of the region of interest; and the server generating rescue information by analyzing the health status of the person based on the biometric information. The biometric information includes at least one of a pulse wave, pulse, blood pressure, respiratory rate, oxygen saturation, and body temperature.

Description

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a method for generating rescue information using images collected by a drone.

The technique described below relates to generating information for lifesaving at a disaster scene using a drone.

Recently, various services using drones have appeared. Drones basically move to areas or points that are difficult for people to access and collect certain information.

Korean Patent Publication No. 10-2003-0031948

The technique described below is intended to provide a method of generating rescue information for lifesaving at a disaster scene by deploying a drone to a disaster site that is difficult for humans to access and using only the images the drone collects.

A method for generating rescue information using images collected by a drone comprises the steps of: a server receiving images collected by a drone; the server detecting a region of interest within a human skin region in the images; the server estimating biometric information of the person using the color values of the region of interest; and the server generating rescue information by analyzing the health status of the person based on the biometric information.

In another aspect, there is provided a method for generating rescue information using an image collected by a drone, comprising the steps of: the drone acquiring an image including a skin region of a person; the drone, or a control apparatus that receives the image, detecting a region of interest within the skin region; the drone or the control apparatus estimating biometric information of the person using the color values of the region of interest; and the drone or the control apparatus generating rescue information by analyzing the health state of the person based on the biometric information.

The biometric information may include at least one of a pulse wave, pulse, blood pressure, respiration rate, oxygen saturation, and body temperature, and the rescue information may include whether the person is alive and, if alive, the severity of the person's health condition.

The technique described below extracts a person's biometric information using only an image, without additional information, and can readily provide lifesaving information based on the extracted biometric information. The technique thereby enables immediate rescue planning and emergency response based on rescue information for people at the disaster site.

FIG. 1 shows an example of a configuration of a lifesaving system using a drone.
FIG. 2 is an example of a flowchart of a rescue information generation method using images collected by a drone.
FIG. 3 is another example showing a configuration of a lifesaving system using a drone.
FIG. 4 is another example of a flowchart of a rescue information generation method using images collected by a drone.
FIG. 5 is another example showing a configuration of a lifesaving system using a drone.
FIG. 6 is an example of a process of detecting an object in an image.
FIG. 7 is an example of a process of estimating blood pressure using an image.
FIG. 8 is an example of a process of estimating a pulse wave transit time using a facial image.
FIG. 9 is an example of a process of estimating oxygen saturation using an image.
FIG. 10 is an example of a process of determining whether a person is alive using images collected by a drone.
FIG. 11 is an example of information serving as a reference for generating rescue information representing a person's health condition.

Since various changes may be made and the embodiments may take various forms, specific embodiments are illustrated in the drawings and described in detail below. However, it should be understood that the following description is not intended to limit the invention to the specific embodiments, but covers all changes, equivalents, and alternatives falling within the spirit and scope of the following description.

The terms first, second, A, B, and the like may be used to describe various components, but the components are not limited by these terms; the terms are used only to distinguish one component from another. For example, without departing from the scope of the following description, a first component may be referred to as a second component, and similarly a second component may be referred to as a first component. The term "and/or" includes any combination of a plurality of related listed items, or any one of a plurality of related listed items.

As used herein, singular expressions should be understood to include plural expressions unless the context clearly dictates otherwise, and terms such as "comprises" or "includes" specify the presence of stated features, integers, steps, operations, components, parts, or combinations thereof, and do not preclude the presence or addition of one or more other features, integers, steps, operations, components, parts, or combinations thereof.

Before describing the drawings in detail, it should be clarified that the division into constituent parts in this specification is merely a division according to the main function of each part. That is, two or more constituent parts described below may be combined into one part, or one part may be divided into two or more parts according to more finely subdivided functions. In addition, each constituent part described below may additionally perform some or all of the functions of other constituent parts besides its own main functions, and some of the main functions of a constituent part may instead be carried out exclusively by another constituent part.

Also, in performing a method or an operating method, the processes constituting the method may occur in an order different from the stated order unless a specific order is clearly described in the context. That is, the processes may be performed in the described order, substantially concurrently, or in the reverse order.

The technique described below provides information for rescuing a person at a disaster scene, such as a war zone or a natural disaster, using images captured by a drone. It estimates a person's biometric information (bio-signals) using only the images captured by the drone, and generates rescue information for rescuing a person included in the images based on that biometric information.

FIG. 1 shows an example of the configuration of a lifesaving system 100 using a drone. The drone-based lifesaving system 100 includes a drone 110, a control device 115, and an analysis server 130. The lifesaving system 100 may further include a rescue server 150 that receives the analysis results of the analysis server 130 and transmits rescue commands. The rescue server 150 corresponds to a server that manages rescue-site information at a fire station, a local government office, a government department, or the like. Although the analysis server 130 and the rescue server 150 are shown separately in FIG. 1, they may be implemented as a single server located at a rescue center. The server may be a Picture Archiving and Communication System (PACS), an Electronic Medical Record (EMR) system, or the like, and may hold bio-signals, image data, location information, and the like.

In FIG. 1, the drone 110 collects images while flying over an area where a disaster such as an earthquake has occurred. A user can control the drone 110 using a control device 115. The control device 115 refers to a dedicated drone controller or a device such as a smartphone running an application for controlling the drone. The drone 110 collects images that include people at the disaster site. The drone 110 may transmit the collected images directly to the analysis server 130, using communication such as Zigbee, WiFi, Bluetooth, or mobile communication. Alternatively, the drone 110 may transmit the images of the disaster scene to the control device 115 over a local or mobile communication link, and the control device 115 may then forward the images to the analysis server 130 using communication such as Zigbee, WiFi, Bluetooth, or mobile communication.

The analysis server 130 estimates biometric information for a person (object) included in the image based on the received image, and generates rescue information that serves as a reference for rescuing the person based on the biometric information. The rescue information may include whether the person is alive and, if alive, the severity of the person's health condition. The analysis server 130 transmits the generated rescue information to the rescue server 150. Based on the rescue information, the rescue server 150 issues commands, such as ordering a rescue operation, for the person located at the disaster site.

FIG. 2 is an example of a flowchart of a rescue information generation method 200 using images collected by a drone, and illustrates schematically how the drone-based lifesaving system 100 operates. The image processing and biometric information estimation are described in detail later.

The drone collects images of the disaster scene (210). The drone or the control device delivers the collected images to the server (220). The server detects the skin region in the image and sets the region of interest (230); that is, the server selects a region of interest (ROI) for estimating biometric information based on an area of the image in which human skin is exposed.

The server estimates the person's biometric information based on the color values of the region of interest (240). The biometric information includes at least one of a pulse wave, pulse, blood pressure, respiratory rate, and oxygen saturation. The biometric information may also include body temperature; in general, body temperature can be estimated by analyzing information collected through a thermal imaging camera mounted on the drone.

Basically, the server may perform different operations depending on whether the person included in the image has survived. The server can use the biometric information to determine whether the person is alive. If the server determines that the person included in the image is dead, it transmits death information, indicating the person's location and death, to the rescue center (rescue server) (270). The rescue center can use the death and location information to issue a rescue order so that a rescue team recovers the body. The rescue team responds to the rescue order (280).

If the server determines that the person included in the image has survived, the server generates detailed rescue information based on the biometric information (250). Here, the rescue information means information on the health state of the surviving person, indicating whether the health condition is good or an emergency. The server generates the rescue information using at least one item of biometric information and transmits it to the rescue center (260). The rescue center (rescue server) can prioritize the dispatch of rescue teams according to the severity of each person's health condition, and may send rescue teams with the supplies and manpower allocated according to the person's health condition (280).

Meanwhile, although not shown in FIG. 2, the drone may carry an emergency treatment kit, relief items (food, etc.), and the like. In this case, the server may instruct the drone to deliver or drop the necessary emergency treatment kit or relief items (food, etc.) at the person's location according to the rescue information. The items supplied by the drone are called rescue supplies. Rescue supplies include items needed for rescue, such as emergency treatment kits, relief items (food, water, etc.), and communication equipment (radios, smartphones, etc.) needed for treatment. In this case, the rescue information may further include information on the person's location.

The drone continues to collect images at the disaster site and deliver them to the server until it receives a return order.

FIG. 3 is another example showing the configuration of a lifesaving system 300 using a drone. The drone-based lifesaving system 300 includes a drone 310, a control device 315, and a rescue server 350. The rescue server 350 corresponds to a server that manages rescue-site information at a fire station, a local government office, a government department, or the like.

In FIG. 3, the drone 310 collects images while flying over an area where a disaster such as an earthquake has occurred. A user can control the drone 310 using a control device 315. Unlike the drone 110 of FIG. 1, the drone 310 not only collects images but also directly analyzes them to generate biometric information. The drone 310 collects images containing people at the disaster site.

The drone 310 estimates biometric information for a person (object) included in the image based on the collected images, and generates rescue information serving as a reference for rescuing the person based on the biometric information. The rescue information may include whether the person is alive and, if alive, the severity of the person's health condition. The drone 310 transmits the generated rescue information to the rescue server 350, using communication such as Zigbee, WiFi, Bluetooth, or mobile communication. Based on the rescue information, the rescue server 350 issues commands, such as ordering a rescue operation, for the person located at the disaster site.

FIG. 4 is another example of a flowchart of a rescue information generation method 400 using images collected by a drone, and illustrates how the drone-based lifesaving system 300 operates.

The drone collects images of the disaster scene (410). The drone detects the skin region in the image and sets the region of interest (420); that is, the drone selects a region of interest (ROI) for estimating biometric information based on an area of the image in which human skin is exposed.

The drone estimates the person's biometric information based on the color values of the region of interest (430). The biometric information includes at least one of a pulse wave, pulse, blood pressure, respiratory rate, and oxygen saturation. The biometric information may also include body temperature; in general, body temperature can be estimated by analyzing information collected through a thermal imaging camera mounted on the drone.

The drone can basically perform different operations depending on whether the person included in the image has survived. The drone can use the biometric information to determine whether the person is alive. If the drone judges that the person included in the image is dead, it transmits death information, indicating the person's location and death, to the rescue center (rescue server) (460). The rescue center can use the death and location information to issue a rescue order so that a rescue team recovers the body. The rescue team responds to the rescue order (470).

If the drone judges that the person in the image has survived, it generates detailed rescue information based on the biometric information (440). Here, the rescue information means information on the health state of the surviving person, indicating whether the health condition is good or an emergency. The drone generates the rescue information using at least one item of biometric information and transmits it to the rescue center (450). The rescue center (rescue server) can prioritize the dispatch of rescue teams according to the severity of each person's health condition, and may allocate the necessary supplies and manpower to rescue teams according to the person's health condition (470).

Meanwhile, although not shown in FIG. 4, the drone may carry an emergency treatment kit, relief items (food, etc.), and the like. In this case, the drone can directly deliver or drop the necessary emergency treatment kit or relief items (food, etc.) at the person's location according to the rescue information it has generated.

The drone continues to collect images at the disaster site and deliver rescue information based on the collected images to the server until it receives a return order.

FIG. 5 is another example showing the configuration of a lifesaving system 500 using a drone. The drone-based lifesaving system 500 includes a drone 510, a control device 515, and a rescue server 550. In FIG. 5, unlike FIG. 1 or FIG. 3, the control device 515 analyzes the images.

In FIG. 5, the drone 510 collects images while flying over an area where a disaster such as an earthquake has occurred. A user can control the drone 510 using a control device 515. The drone 510 transmits the collected images to the control device 515.

The control device 515 estimates biometric information for a person (object) included in the image based on the collected images, and generates rescue information serving as a reference for rescuing the person based on the biometric information. The rescue information may include whether the person is alive and, if alive, the severity of the person's health condition. The control device 515 transmits the generated rescue information to the rescue server 550, using communication such as Zigbee, WiFi, Bluetooth, or mobile communication. Based on the rescue information, the rescue server 550 issues commands, such as ordering a rescue operation, for the person located at the disaster site.

FIG. 6 is an example of a process of detecting an object in an image. The drone, the control device, or the server may detect objects in the image; for convenience of explanation, the following description assumes that the server detects the object and generates the biometric information.

A technique for the server to detect the skin area in the image is described below. In addition to the technique described below, the server may detect skin regions using various other techniques. FIG. 6 shows an example of detecting a skin region.

The server performs RGB image normalization to obtain a robust image from the image acquired by the camera. The RGB normalization is applied to each pixel of the RGB color model as shown in Equation 1 below.

[Equation 1]

In Equation 1, R, G, and B denote the respective color channels, r, g, and b denote the corresponding normalized color channels, and T = R + G + B. Face detection using gradient information and face detection using color information are then performed in parallel on the normalized image.
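Based on the variable definitions above, Equation 1 presumably takes the following form (a reconstruction from the surrounding text; the original equation image is not reproduced here):

```latex
r = \frac{R}{T}, \qquad g = \frac{G}{T}, \qquad b = \frac{B}{T}, \qquad T = R + G + B
```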

In FIG. 6, the branch to the left of the source image processes the morphological gradient image, and the branch to the right processes the YCbCr image. Finally, the preprocessing of the source image is completed by combining the morphological gradient image and the YCbCr image with an AND operation.

To emphasize the components useful for face detection, the gradient-based face detection combines, for each pixel, the maximum morphological gradient across the Red, Green, and Blue channels, rather than applying the morphological gradient operation to an ordinary grayscale image. The formula for the Maximum Morphological Gradient Combination (MMGC) image is shown in Equation 2 below.

[Equation 2]

Here, i and j are pixel coordinates, and MG_r, MG_g, and MG_b denote the morphological gradient values of the pixel in the R, G, and B channels, respectively.

Processing the YCbCr image includes converting the image from the RGB color model to the YCbCr color space, applying a skin-color threshold to the converted image, and removing noise using erosion and dilation operations.

A skin color threshold for separating the background and the skin area image can be set as shown in Equation 3 below.

[Equation 3]

The threshold values may vary depending on skin color, and they can be set by a person of ordinary skill in the art.

The detected skin-color region is converted into a binary image, and noise is removed through a closing operation that uses erosion and dilation. Large non-skin regions may not be removed in this noise-removal step; in that case, each region is labeled and the regions other than the skin region are discarded. Finally, only the skin image with the background removed is detected (blob detection).
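The YCbCr thresholding and noise removal described above could be sketched as follows with OpenCV. The Cb/Cr bounds are common illustrative values, not the patent's Equation 3 thresholds, and the function name is a placeholder:

```python
import cv2
import numpy as np

def detect_skin_mask(frame_bgr):
    """Rough skin segmentation: YCbCr threshold + closing + largest-blob selection.
    The Cb/Cr bounds below are illustrative, not the patent's Equation 3 values."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # OpenCV orders the channels as Y, Cr, Cb
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Remove noise with a closing operation (dilation followed by erosion)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Label regions and keep only the largest connected component (blob detection)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n > 1:
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
        mask = np.where(labels == largest, 255, 0).astype(np.uint8)
    return mask
```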

Finally, the morphological gradient image is combined with the YCbCr image (AND operation). The server can then use the AdaBoost (Adaptive Boosting) algorithm to detect the face region. The AdaBoost algorithm learns weak classifiers iteratively from labeled samples and combines the generated weak classifiers into a strong classifier. In the initial stage, equal weights are applied to all samples and a weak classifier is learned; as the iterations progress, lower weights are applied to the data correctly classified by the current classifier and higher weights to the incorrectly classified data, thereby improving the performance of the classifier. The AdaBoost algorithm itself is widely known to those of ordinary skill in the art, so a detailed description is omitted.
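As one readily available AdaBoost-based face detector, OpenCV's pre-trained Haar cascade classifier could be used; a minimal sketch (the choice of cascade file is an assumption, not specified by the patent):

```python
import cv2

# Haar cascades shipped with OpenCV are AdaBoost-trained classifiers
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_bgr):
    """Return (x, y, w, h) rectangles of candidate face regions."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```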

Hereinafter, a process of estimating biometric information in an image will be described.

 1. Pulse wave (PPG) estimation method

A pulse wave (PPG) is the waveform generated as the heart pumps blood through the circulatory system. It is used mainly to measure heart rate variability (HRV), the current state of blood circulation, and accumulated stress. Pulse waves can be measured with various medical devices, but here the server detects the pulse wave using only images.

A pulse wave can be detected from an image either by using the skin color of an exposed area (the face or body skin) without contact, as described below, or by using an image obtained with the skin in close contact with the camera.

1) First, a method of estimating the pulse wave without bringing the skin close to the camera, by detecting the face or the skin color of an exposed part of the body, is described. After detecting an area that can reflect the person's condition, such as the face or a finger, the skin is located through preprocessing such as face detection and skin-color detection. The PPG signal can then be detected by setting a region of interest within the detected skin region and extracting, for each frame, the average of a color value such as Cg or Red over all pixels in the region, as in the sketch below.
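A minimal sketch of this non-contact approach, assuming the green channel as the color value and a simple band-pass filter in the pulse band; the function and parameter names are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def ppg_from_roi_means(frames_rgb, roi, fps, low_hz=0.75, high_hz=4.0):
    """Per-frame color average of the ROI, band-pass filtered to the pulse band.
    `frames_rgb` is a sequence of H x W x 3 arrays, `roi` is (y0, y1, x0, x1).
    The green channel stands in for the Cg/Red value mentioned in the text
    (an assumption, not the patent's exact choice)."""
    y0, y1, x0, x1 = roi
    raw = np.array([f[y0:y1, x0:x1, 1].mean() for f in frames_rgb])
    b, a = butter(3, [low_hz, high_hz], btype="bandpass", fs=fps)
    return filtfilt(b, a, raw - raw.mean())
```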

2) Next, a method that uses an image obtained by bringing the skin into close contact with the camera is described.

(1) Instead of using every frame obtained from the camera, the RGB color values extracted from each frame are substituted into Equation 4 below, and only frames for which the output value is 1 are selected and used. Here, mean(R), mean(G), and mean(B) denote the mean value of each color channel, and std(R), std(G), and std(B) denote the standard deviation of each color channel.

[Equation 4]

(2) A color threshold is then set to estimate the pulse wave. For example, the threshold can be calculated by substituting the maximum and minimum of the average Red value over the first 5 seconds into Equation 5 below.

[Equation 5]

(3) Equation 6 is then applied to each frame whose output value was 1 in step (1). Here, I is the Red value of each pixel in a frame, and the PPG value for the frame is obtained by counting the number of pixels whose Red value is larger than the threshold T determined in step (2).

[Equation 6]

This process is repeated for every frame to obtain the PPG signal. When an image obtained with the skin in close contact with the camera is used, the PPG signal can also be detected by extracting the per-frame average Red value over all pixels of the region of interest, as in the first method.
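A sketch of the contact-based pixel-counting approach of steps (1) to (3). The threshold rule shown is only one plausible reading of Equation 5, and the frame-selection test of Equation 4 is omitted:

```python
import numpy as np

def red_threshold(frames_rgb, fps, seconds=5.0):
    """One plausible threshold: midpoint of the max and min per-frame red mean
    over the first few seconds (the exact form of Equation 5 is not reproduced)."""
    n = int(fps * seconds)
    means = np.array([f[:, :, 0].mean() for f in frames_rgb[:n]])
    return 0.5 * (means.max() + means.min())

def ppg_contact(frames_rgb, threshold):
    """Per-frame PPG value = number of pixels whose Red value exceeds the
    threshold T, following the idea of Equation 6 (channel 0 assumed to be Red)."""
    return np.array([(f[:, :, 0] > threshold).sum() for f in frames_rgb])
```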

 2. Pulse Signal Estimation Method

Pulse estimation uses the brightness values of the region of interest. To extract the pulse signal, the server converts the image from RGB to YCgCo (a color space composed of luminance Y, green chrominance Cg, and orange chrominance Co). The Cg signal is then obtained by averaging the Cg values within the region of interest in each frame; the Cg signal extracted over several tens to several hundreds of frames is converted into the frequency domain using an FFT. The server determines the frequency component with the largest magnitude in the frequency domain as the pulse frequency, considering only components whose magnitude exceeds a predetermined threshold. Since the pulse rate normally ranges from about 40 to 200 beats per minute depending on how calm or excited the person is, the observed frequency range can be limited to 0.75 Hz to 4.00 Hz.
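A minimal sketch of this Cg-based, frequency-domain pulse estimation; the YCgCo conversion follows the standard YCoCg definition, and the frame rate and ROI handling are assumptions:

```python
import numpy as np

def cg_channel(frame_rgb):
    """Cg component of the YCgCo model: Cg = -R/4 + G/2 - B/4."""
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    return -0.25 * r + 0.5 * g - 0.25 * b

def pulse_bpm_from_cg(cg_signal, fps, low_hz=0.75, high_hz=4.0):
    """Estimate the pulse rate as the dominant FFT frequency of the per-frame Cg
    average, restricted to the 0.75-4.00 Hz band described in the text."""
    sig = np.asarray(cg_signal, dtype=float)
    sig = sig - sig.mean()
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(sig))
    band = (freqs >= low_hz) & (freqs <= high_hz)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_hz   # beats per minute
```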

 3. Blood pressure signal estimation method

Blood pressure is the pressure exerted on the blood vessels by blood ejected from the heart. Systolic blood pressure refers to the arterial pressure when the heart contracts, and diastolic blood pressure refers to the arterial pressure when the heart relaxes. Blood pressure varies considerably with body weight, height, and age, and individual differences are relatively large.

FIG. 7 is an example of a process 600 for estimating blood pressure using an image. First, the camera acquires images of the user over a certain period of time (610); as described later, the change of the brightness values over time must be extracted from the images. The images should include the user's skin; for example, the camera needs to capture an area including skin, such as the user's face, arm, or hand. The server then extracts the skin region from the image (620). The skin region means the area where skin appears in the image; for example, the background can be removed using a face recognition algorithm so that only the face region is extracted. The server can detect the skin region using a face-region detection algorithm or a skin detection algorithm. Here, the server may be a smart device with a built-in camera, a PC connected to the camera, or a server at a remote site that receives the images collected by the camera.

The server stores changes in the brightness values of two target regions within the skin region (630). Step 630 includes setting the two target regions and storing the change in brightness value for each target region.

If the server extracts one continuous skin region, it can divide that region into two, or set two specific areas within the continuous region as the target regions. For example, if the skin region is a face region, the face region can be divided in two to set the two target regions. Alternatively, if two cameras are used to acquire images of the user, a skin region may be extracted from each camera's image and the two skin regions may be set as the target regions. For example, when one camera captures the face and another captures a hand, the server may set the face region and the hand region as the target regions.

The server stores the change in brightness value for the two target regions. Skin color changes with the blood flow in the vessels near the skin, so by monitoring the brightness value of a target region the server can capture a blood-flow pattern that follows a regular rule, namely the flow of blood driven by the heartbeat. (1) The server may calculate the average brightness value of the target region for each frame and store it, so that the change in brightness is stored in frame units. (2) The server may instead calculate and store the average brightness value of a frame at regular time intervals, so that the brightness of a still image is computed at predetermined intervals. (3) Further, the server may calculate and store the average of the brightness values over a predetermined number of frames.

The server generates a pulse wave signal based on the change in brightness value of the two target regions (640). As described above, the brightness value of a target region is related to the blood flow. With brightness on the vertical axis and time on the horizontal axis, the change in brightness forms a signal with a regular waveform. The server can convert the brightness signal into a pulse wave signal using a band-pass filter, and may additionally use other filters to remove noise from the pulse wave signal.

The server determines associated peak points in the pulse wave signals of the two target regions and estimates the time difference between the two peak points as the pulse wave transit time (PTT) (650). A single pulse wave signal may contain several peak points. A peak point is a point where the brightness value, which varies with the blood flow, reaches a peak as it increases.

At the moment the heart beats, it delivers a surge of blood to the arteries, and for a short time afterwards the arterial blood flow decreases; this process repeats with every heartbeat. The blood vessels downstream of the arteries likewise repeat a pattern of increasing and decreasing blood flow in step with the heart rate. Consequently, the points at which the brightness value of a target region increases result from the heart pushing blood into the arteries, and the peak points of the pulse wave signal may appear regularly or somewhat irregularly depending on the heartbeat.

The server looks for peak points in the two target regions; the peak points to be found are the peak points associated with each other. Associated peak points are points caused by the same heartbeat. For example, when the heart beats, the blood flow first increases in a vessel at a first point near the heart, and then increases in a vessel at a second point some distance away; that is, the blood flow increases at the two points at different times, and the delay caused by the same beat depends on the distance between the two points. Because the peak points associated with each other in the two target regions arise from the blood-flow change of the same beat, the server finds the associated peak points taking the distance between the target regions into account, and estimates the pulse wave transit time from the time interval between the associated peak points, as in the sketch below.
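A sketch of the peak-matching step, assuming each peak is paired with the nearest later peak within a small window; the pairing rule and window length are illustrative choices, not taken from the patent:

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_ptt(ppg_a, ppg_b, fps, max_lag_s=0.3):
    """Mean time offset (seconds) between associated peaks of two ROI pulse
    signals. Peaks in signal B are matched to the nearest later peak within
    max_lag_s of each peak in signal A."""
    peaks_a, _ = find_peaks(ppg_a, distance=int(0.4 * fps))
    peaks_b, _ = find_peaks(ppg_b, distance=int(0.4 * fps))
    lags = []
    for pa in peaks_a:
        later = peaks_b[(peaks_b >= pa) & (peaks_b - pa <= max_lag_s * fps)]
        if later.size:
            lags.append((later[0] - pa) / fps)
    return float(np.mean(lags)) if lags else float("nan")
```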

Finally, the server can estimate the blood pressure using the pulse wave transit time (660). The blood-pressure estimation formula can follow the equations used in studies based on PPG and ECG signals; most blood-pressure estimates are based on regression equations. The estimation formula takes the user's body information, such as age, height, and weight, as inputs in addition to the pulse wave transit time, so the server needs the user's body information to estimate blood pressure with the regression equation.

However, it may be difficult to obtain body information for a person who needs to be rescued in an emergency. Thus, the server can by default use the average height and weight of adult males and females in the area (country, etc.) for which the rescue information is provided. For example, the server can estimate blood pressure using the average height and weight of adults aged 20 to 80 (164.4 cm and 64.1 kg, with standard deviations of 9.1 cm and 11.8 kg, respectively).

Furthermore, a supervisor managing the emergency situation may input the body information of the person to be rescued into the server based on empirical judgment, for example after checking the person shown in the image, and the server can then estimate the blood pressure using the input body information.

Using body weight, height, age, and the PTT as independent variables, measurement equations for the systolic blood pressure (BP_systolic) and the diastolic blood pressure (BP_diastolic) of the form shown in Equation 7 below can be derived.

[Equation 7]

In Equation 7, PTT is the time difference between the peak points of the two signals, and W_systolic and W_diastolic are weights that can be derived through multiple regression analysis. As described above, standard values are used for the person's body weight, height, and age, together with the constants constant_systolic and constant_diastolic. For example, in Equation 7 the systolic weight W_systolic may be -101.5 and the diastolic weight W_diastolic may be -3.6; constant_systolic, a constant compensating for individual differences, may be in the range 150.4 to 172.5, and constant_diastolic in the range 78.2 to 85.2.

Unlike Equation 7, the server can also estimate blood pressure using only the PTT, without body information. The server can estimate the systolic blood pressure (BP_systolic2) and the diastolic blood pressure (BP_diastolic2) using Equation 8 below. The weights W_systolic2 and W_diastolic2 and the constants constant_systolic2 and constant_diastolic2 used in Equation 8 can be derived through regression analysis using only the PTT.

[Equation 8]

FIG. 8 is an example of a process of estimating the pulse wave transit time using a facial image. FIG. 8(a) shows the change in brightness value of two target regions in the form of signals; the signal for the upper region is shown in blue and the signal for the lower region in red.

FIG. 8(a) is a graph of the per-frame average brightness value of each target region over the captured video. The brightness can be based on the R, G, and B values of the color image, or it can be determined using another color model (YUV, YCgCo, or the like) that represents the luminance of the RGB image.

FIG. 8(b) shows an example in which the brightness signals of the target regions shown in FIG. 8(a) are converted into pulse wave signals. FIG. 8(a) represents the change in brightness value stored for each frame; this brightness signal contains not only the desired pulse wave component but also noise caused by movement, so a filtering step is required. A band-pass filter that passes only the pulse frequency band can be used to extract the pulse wave component, and FIG. 8(b) is an example of extracting only the pulse wave signal from the brightness signal.

FIG. 8(c) shows an example of estimating the pulse wave transit time between the two target regions from the pulse wave signals. As described above, the pulse wave signals of the two target regions rise and fall with a regular period. The peak points associated with each other are found for the two target regions; in FIG. 8(c), the associated peak points of the two pulse wave signals are indicated by dotted lines. Once the peak point of each cycle is found in the two signals, the time difference between the corresponding peaks is taken as the pulse wave transit time (PTT), as shown in FIG. 8(c). The pulse wave transit time does not necessarily have to be measured at the peak points; since it corresponds to the time offset between the two pulse wave signals, it can be calculated using another reference point. The server can then estimate BP_systolic and BP_diastolic using Equation 7 based on the PTT, as described above.

 4. Breathing Estimation Method

A method of measuring the respiratory rate from an image is described next. The respiratory rate can be estimated from the brightness values of the image. The server computes the mean Cg value of each frame and observes the resulting signal in the frequency domain by applying an FFT to the Cg signal over a fixed number of frames (a fixed time). Exploiting the correlation between pulse and respiration, the server estimates the respiratory rate from the strongest frequency within the range 0.13 to 0.33 Hz, and can estimate the number of breaths over given time intervals.
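Because this is the same band-limited FFT peak picking used for the pulse, the sketch shown earlier can simply be reused with the respiration band (an illustrative reuse, not a separate method in the patent):

```python
# Reuse the FFT peak-picking sketch above with the 0.13-0.33 Hz respiration band;
# the returned value is breaths per minute (peak frequency in Hz times 60).
breaths_per_min = pulse_bpm_from_cg(cg_signal, fps, low_hz=0.13, high_hz=0.33)
```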

 5. Estimation of oxygen saturation

Oxygen saturation is an important vital sign that needs to be monitored, because oxygen deficiency causes various diseases, and measuring and observing oxygen saturation makes it possible to respond quickly to respiratory problems and the risk of hypoxia. Here, a method of measuring oxygen saturation from an image obtained by a camera-equipped terminal is used: a region of interest is set for the oxygen-saturation measurement, and RGB color values are extracted from that region.

The oxygen saturation is then measured using, as feature parameters, color combinations obtained by weighting the extracted RGB channels. Oxygen saturation refers to the percentage of oxygenated hemoglobin relative to total hemoglobin in the blood, and it is conventionally measured using an infrared wavelength (750 to 940 nm) and a red wavelength (660 to 750 nm).

FIG. 9 is an example of a process 700 for estimating oxygen saturation using an image. The server acquires an image including a skin area (720), extracts the skin region from the image using image processing, and determines a particular region of interest within the skin region (730). Generally, a skin area such as the face or a finger is distinguished from the background using skin color, and various face detection techniques can be applied to face-region detection. The server then determines, within the extracted skin region, the region of interest that serves as the basis for measuring oxygen saturation. Various regions of interest can be used, but it is desirable to choose a region that contains as few non-skin colors as possible. In the face, the regions containing the eyes, nose, and mouth have different colors and strong edges, which makes them undesirable as regions of interest; if the region of interest is set within the face, an area around the cheeks, which has relatively low noise, is appropriate.

After determining the region of interest, the server generates feature parameters for measuring oxygen saturation from the color values of the region of interest (740), that is, from its R, G, and B color values.

Conventionally, an oxygen saturation measuring apparatus irradiates a finger with infrared and red wavelengths and measures the oxygen saturation using the light transmitted through to the other side. Accordingly, the feature parameters include a first parameter corresponding to the red wavelength and a second parameter corresponding to the infrared wavelength.

The server determines the first parameter (C_660nm) and the second parameter (C_940nm) from the RGB color values of the region of interest, as expressed by Equations 9 and 10 below, respectively.

[Equation 9]

[Equation 10]

Here, mean(Red) is the average R color value of the pixels in the region of interest, mean(Green) is the average G color value, and mean(Blue) is the average B color value. W_R, W_G, and W_B are per-channel weights for obtaining C_660nm, and T_R, T_G, and T_B are per-channel weights for obtaining C_940nm. The weights can be set in advance using blood images or the like.
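Given that the feature parameters are described as weighted combinations of the per-channel averages, Equations 9 and 10 presumably take the following form (a reconstruction from the description, not the original equation images):

```latex
C_{660nm} = W_R \cdot \mathrm{mean}(Red) + W_G \cdot \mathrm{mean}(Green) + W_B \cdot \mathrm{mean}(Blue), \\
C_{940nm} = T_R \cdot \mathrm{mean}(Red) + T_G \cdot \mathrm{mean}(Green) + T_B \cdot \mathrm{mean}(Blue).
```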

As shown in FIG. 9, the weights must be set (710) before the server generates the feature parameters.

The server then calculates the average value and the standard deviation of the feature parameters for the region of interest (750). The average includes an average for each of the first and second parameters, and the standard deviation likewise includes a standard deviation for each of them. Here, the mean and standard deviation are computed over a plurality of frames: the server determines the first and second parameters of the region of interest for each video frame, and then calculates the mean and standard deviation of each parameter over a plurality of frames (over a predetermined time or number of frames).

The average of the first parameter (C_660nm) is denoted DC_660nm, and the average of the second parameter (C_940nm) is denoted DC_940nm. The standard deviations of the per-frame feature parameters C_660nm and C_940nm are denoted AC_660nm and AC_940nm, respectively. The server can measure the oxygen saturation (SpO2) (760) by substituting the four variables (AC_660nm, AC_940nm, DC_660nm, DC_940nm) into Equation 11 below.

[Equation 11]

The constants A and B can be determined from oxygen saturation values measured with actual equipment; A and B are chosen to minimize Equation 12 below, that is, by applying the least squares method between the R value and the actual oxygen saturation measurements.

[Equation 12]

Here, Oximeter_i denotes the oxygen saturation value measured with the actual equipment, and R_i can be determined using Equation 13 below.

[Equation 13]
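A sketch of the oxygen-saturation pipeline described in this section. It assumes Equation 11 has the conventional pulse-oximetry ratio-of-ratios form SpO2 = A - B * (AC_660/DC_660) / (AC_940/DC_940); the weights and calibration constants must be fitted against reference oximeter readings as described above, and the function name and arguments are illustrative:

```python
import numpy as np

def spo2_from_roi(frames_rgb, roi, w_rgb, t_rgb, a_const, b_const):
    """Feature parameters C_660 and C_940 as weighted RGB means of the ROI;
    DC = mean over frames, AC = standard deviation over frames.
    The final line assumes the conventional ratio-of-ratios form of Equation 11."""
    y0, y1, x0, x1 = roi
    # Per-frame mean(R), mean(G), mean(B) over the region of interest
    means = np.array([[f[y0:y1, x0:x1, c].mean() for c in range(3)]
                      for f in frames_rgb])
    c660 = means @ np.asarray(w_rgb)   # Equation 9 (weighted combination)
    c940 = means @ np.asarray(t_rgb)   # Equation 10 (weighted combination)
    ratio = (c660.std() / c660.mean()) / (c940.std() / c940.mean())
    return a_const - b_const * ratio
```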

6. Estimation of body temperature

Body temperature is an important factor in determining a person's current health status. For example, if a person is exposed to the outdoor environment for a long time, hypothermia due to a drop in body temperature may occur. Hypothermia generally refers to a deep body temperature below 35 °C and may lead to death if the hypothermic state persists. As described above, the body temperature of a person included in the image can be estimated using information collected by a thermal imaging camera mounted on the drone.

FIG. 10 is an example of a process 800 for determining whether a person is alive using images collected by the drone. The server detects the skin region in the image and sets the region of interest (810), then estimates biometric information from the color values of the region of interest according to the methods described above (820).

In general, whether a person has died is judged by checking whether the heart has stopped. Therefore, the pulse wave, pulse, blood pressure, and other measurements that can confirm cardiac arrest can be used; when cardiac arrest occurs, the pulse wave and pulse cannot be measured. The server therefore sets reference values that account for the error of the biometric measurement process, and compares the measured biometric values with the reference values to determine survival (830). For example, the server may determine that the person in the image is dead if the measured biometric values are below the reference values (850), and that the person is alive if the measured values exceed the reference values (840).

FIG. 11 is an example of information serving as a reference for generating rescue information representing a person's health condition. FIG. 11(a) is an example of a table that can be used to determine whether a person included in an image is alive. For example, the server may determine that the person is dead if the measured pulse wave is less than A, the pulse is less than B, or the blood pressure is less than C; alternatively, it may determine that the person is dead only if the pulse wave is less than A, the pulse is less than B, and the blood pressure is less than C.
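A minimal sketch of the survival check of FIG. 11(a), using the stricter of the two combination rules above (the person is treated as dead only if all heart-related measurements fall below their references); the reference values A, B, and C are supplied by the caller:

```python
def is_alive(pulse_wave, pulse, blood_pressure, ref):
    """Alive if any heart-related measurement reaches its reference value;
    dead only when pulse wave < A, pulse < B, and blood pressure < C."""
    return (pulse_wave >= ref["A"] or
            pulse >= ref["B"] or
            blood_pressure >= ref["C"])
```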

FIG. 11(b) is an example of forming a reference value from the measured biometric information. Item (1) of FIG. 11(b) shows the case in which the pulse wave v1, pulse v2, blood pressure v3, respiratory rate v4, and oxygen saturation v5 are all measured. The server may generate the rescue information using each of the measured values individually, or it may use as the reference value the result of combining the measured values shown in FIG. 11(b) with a predetermined function.

Item (2) of FIG. 11(b) is an example in which the pulse v2 and the blood pressure v3 are measured. The server can generate the reference value by summing the pulse v2 and blood pressure v3 values, or by assigning a predetermined weight to each measured value.

Item (3) of FIG. 11(b) is an example in which the blood pressure v3, the respiratory rate v4, and the oxygen saturation v5 are measured. The server may generate a reference value by combining the blood pressure v3, the respiratory rate v4, and the oxygen saturation v5 according to a fixed rule.

The server may compare the reference value with a predetermined threshold to determine the urgency of the person included in the image. The threshold may vary depending on the type of biometric information measured and environmental factors at the disaster site.
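A sketch of forming the reference value from whichever measurements are available, as in items (1) to (3) of FIG. 11(b); the weighted-sum form and the example weights are illustrative, since the patent leaves the specific combining function open:

```python
def severity_score(measurements, weights):
    """Reference value as a weighted sum of the available biometric measurements."""
    return sum(weights[name] * value for name, value in measurements.items())

# Example corresponding to item (2) of FIG. 11(b): only pulse (v2) and blood
# pressure (v3) were measured; the weights are arbitrary illustrative values.
score = severity_score({"pulse": 72.0, "blood_pressure": 118.0},
                       {"pulse": 0.5, "blood_pressure": 0.5})
```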

The present embodiments and the accompanying drawings merely illustrate a part of the technical idea included in the technology described above. It will be appreciated that modifications and specific embodiments that a person skilled in the art can readily derive within the scope of the technical idea contained in the specification and drawings are included within the scope of the foregoing description.

100: Lifesaving system using a drone
110: Drone
115: Control device
130: Analysis server
150: Rescue server
300: Lifesaving system using a drone
310: Drone
315: Control device
350: Rescue server
500: Lifesaving system using a drone
510: Drone
515: Control device
550: Rescue server

Claims (19)

A method for generating rescue information using images collected by a drone, comprising:
a server receiving images collected by the drone;
the server detecting a region of interest within a human skin region in the images;
the server estimating biometric information of the person using the color values of the region of interest; and
the server generating rescue information analyzing a health state of the person based on the biometric information,
wherein the rescue information includes whether the person is alive and, if alive, the severity of the person's health condition,
wherein the biometric information includes at least one of a pulse wave, pulse, blood pressure, respiratory rate, oxygen saturation, and body temperature, and
wherein the server estimates the pulse wave by counting, for every frame of the images, the number of pixels in the region of interest whose red color value is greater than a threshold value.
The method according to claim 1,
further comprising: a drone control device receiving the image from the drone; and
the drone control device transferring the image to the server.
The method according to claim 1,
further comprising the server receiving the location at which the drone captured the image and adding the location to the rescue information.
The method according to claim 1,
further comprising the server delivering the rescue information to a server managed by a rescue center for the rescue of the person.
The method according to claim 1,
further comprising the server transmitting the rescue information to the drone.
delete
The method according to claim 1,
wherein the server calculates an average value of the brightness values of the region of interest for every frame of the image, converts the average value signal extracted from consecutive frames of a predetermined length into the frequency domain, determines peak points having a magnitude equal to or greater than a threshold among the frequency components, and estimates the pulse using the time interval between the peak points.
The method according to claim 1,
wherein the region of interest comprises two regions of interest, and
wherein the server calculates an average value of the brightness values of each of the two regions of interest for every frame of the image, converts the average value signals extracted from consecutive frames of a predetermined length into the frequency domain, determines for each region of interest a peak point having a magnitude equal to or greater than a threshold among the frequency components, estimates a pulse wave propagation time based on the time difference between the peak points of the two regions of interest, and estimates the blood pressure using the pulse wave propagation time.
The method according to claim 1,
wherein the server calculates an average value of the brightness values of the region of interest for every frame of the image, converts the average value signal extracted from consecutive frames of a predetermined length into the frequency domain, and estimates the respiratory rate using the frequency having the largest magnitude within a predetermined frequency range.
The method according to claim 1,
wherein the server calculates an average value of each of the R, G, and B color values of the region of interest over a plurality of frames of the image and estimates the oxygen saturation using two parameters that assign different weights to the average values.
A method of generating rescue information using an image collected by a drone, the method comprising:
a drone collecting an image including a skin region of a person;
the drone, or a control device that receives the image from the drone, detecting a region of interest within the skin region of the image;
the drone or the control device estimating biometric information of the person using color values of the region of interest; and
the drone or the control device generating rescue information by analyzing a health condition of the person based on the biometric information,
wherein the rescue information includes whether the person is alive and, if alive, the severity of the person's health condition,
wherein the biometric information includes at least one of a pulse wave, a pulse, a blood pressure, a respiratory rate, an oxygen saturation, and a body temperature, and
wherein the drone or the control device counts the number of pixels whose red color value is equal to or greater than a threshold in the region of interest for every frame of the image and estimates the pulse wave from the counts.
The method of claim 11,
further comprising the drone or the control device transmitting the location at which the image was captured and the rescue information to a remote server.
delete
The method of claim 11,
wherein the drone or the control device calculates an average value of the brightness values of the region of interest for every frame of the image, converts the average value signal extracted from consecutive frames of a predetermined length into the frequency domain, determines peak points having a magnitude equal to or greater than a threshold among the frequency components, and estimates the pulse using the time interval between the peak points.
The method of claim 11,
wherein the region of interest comprises two regions of interest, and
wherein the drone or the control device calculates an average value of the brightness values of each of the two regions of interest for every frame of the image, converts the average value signals extracted from consecutive frames of a predetermined length into the frequency domain, determines for each region of interest a peak point having a magnitude equal to or greater than a threshold among the frequency components, estimates a pulse wave propagation time based on the time difference between the peak points of the two regions of interest, and estimates the blood pressure using the pulse wave propagation time.
The method of claim 11,
wherein the drone or the control device calculates an average value of the brightness values of the region of interest for every frame of the image, converts the average value signal extracted from consecutive frames of a predetermined length into the frequency domain, and estimates the respiratory rate using the frequency having the largest magnitude in a range extending up to 0.33 Hz.
The method of claim 11,
wherein the drone or the control device calculates an average value of each of the R, G, and B color values of the region of interest over a plurality of frames of the image and estimates the oxygen saturation using two parameters that assign different weights to the average values.
The method of claim 11,
further comprising the drone receiving the rescue information from a server in order to deliver rescue supplies to the person.
The method of claim 11,
further comprising the drone delivering rescue supplies to the person based on the rescue information.
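As a supplement to the frequency-domain estimation recited in the claims above, the following is a minimal sketch of one way the per-frame brightness averages of a region of interest could be converted into pulse and respiratory-rate estimates; the frame rate, window length, and frequency bands are assumed values, and NumPy's FFT is used as one possible frequency-domain transform.

import numpy as np

def estimate_pulse_bpm(roi_brightness_means, fps=30.0, band=(0.7, 3.0)):
    # roi_brightness_means: one average brightness value per frame, taken from
    # consecutive frames of a predetermined length. fps and the search band (Hz)
    # are assumed values, not taken from the claims.
    x = np.asarray(roi_brightness_means, dtype=float)
    x = x - x.mean()                          # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))         # magnitudes of the frequency components
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak_freq = freqs[mask][np.argmax(spectrum[mask])]
    return peak_freq * 60.0                   # beats per minute

def estimate_respiration_bpm(roi_brightness_means, fps=30.0, f_max=0.33):
    # Respiratory rate from the largest frequency component at or below 0.33 Hz;
    # the lower bound of the band is an assumption.
    x = np.asarray(roi_brightness_means, dtype=float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    mask = (freqs > 0.05) & (freqs <= f_max)
    return freqs[mask][np.argmax(spectrum[mask])] * 60.0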
KR1020160016966A 2016-02-15 2016-02-15 Rescue information generating method using drone collecting image at disaster scene KR101888677B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160016966A KR101888677B1 (en) 2016-02-15 2016-02-15 Rescue information generating method using drone collecting image at disaster scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020160016966A KR101888677B1 (en) 2016-02-15 2016-02-15 Rescue information generating method using drone collecting image at disaster scene

Publications (2)

Publication Number Publication Date
KR20170095505A KR20170095505A (en) 2017-08-23
KR101888677B1 true KR101888677B1 (en) 2018-09-20

Family

ID=59759330

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160016966A KR101888677B1 (en) 2016-02-15 2016-02-15 Rescue information generating method using drone collecting image at disaster scene

Country Status (1)

Country Link
KR (1) KR101888677B1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2750880B2 (en) * 2018-09-27 2021-05-11 Univ Malaga Fixing device, system and method controllable by means of a mechanical arm
KR102013145B1 (en) * 2019-04-05 2019-08-27 대한민국 Method for providing information gathering and sharing service using drone at diaster-affected site based on cooperation of citizen and government
KR102300348B1 (en) * 2021-04-13 2021-09-09 주식회사 프로펠 Drone for sharing field situation based on real-time image analysis, and control method thereof
KR102668778B1 (en) * 2021-06-04 2024-05-23 국립금오공과대학교 산학협력단 System of detecting injured based on uav
KR102369351B1 (en) * 2021-12-14 2022-03-02 (주)에스엔디글로벌 Operating method of smart drone for crime prevention
CN115089150A (en) * 2022-05-30 2022-09-23 合肥工业大学 Pulse wave detection method and device based on unmanned aerial vehicle, electronic equipment and storage medium

Also Published As

Publication number Publication date
KR20170095505A (en) 2017-08-23

Similar Documents

Publication Publication Date Title
KR101888677B1 (en) Rescue information generating method using drone collecting image at disaster scene
JP6727599B2 (en) Biometric information display device, biometric information display method, and biometric information display program
KR101738278B1 (en) Emotion recognition method based on image
KR101996996B1 (en) Method And Apparatus For Measuring Bio-Signal Using Infrared Image
CN108135487A (en) For obtaining the equipment, system and method for the vital sign information of object
US10004410B2 (en) System and methods for measuring physiological parameters
KR101777738B1 (en) Estimating method for blood pressure using video
US9443289B2 (en) Compensating for motion induced artifacts in a physiological signal extracted from multiple videos
US9504426B2 (en) Using an adaptive band-pass filter to compensate for motion induced artifacts in a physiological signal extracted from video
US9436984B2 (en) Compensating for motion induced artifacts in a physiological signal extracted from a single video
US20160278644A1 (en) Contact-less blood pressure measurement
US10984914B2 (en) CPR assistance device and a method for determining patient chest compression depth
Rahman et al. Non-contact physiological parameters extraction using facial video considering illumination, motion, movement and vibration
WO2016012469A9 (en) Unobtrusive skin tissue hydration determining device and related method
CN104519951A (en) Rescue services activation
KR101752560B1 (en) Oxygen saturation measuring method using image and computer readable storage medium of recording oxygen saturation measuring method using image
EP3826539B1 (en) Device, system and method for detection of pulse of a subject
Patil et al. A camera-based pulse transit time estimation approach towards non-intrusive blood pressure monitoring
JP7161812B1 (en) Consciousness state analysis device and program, and observation system
Zarándy et al. Multi-Level Optimization for Enabling Life Critical Visual Inspections of Infants in Resource Limited Environment
CN116994746B (en) Intelligent AED first aid auxiliary method, system, device and readable storage medium
US20230293113A1 (en) System and method of estimating vital signs of user using artificial intelligence
KR102360697B1 (en) Contactless vital-sign measuring system
Islam et al. Extracting heart rate variability: a summary of camera based Photoplethysmograph
Mitsuhashi et al. Two-band infrared video-based measurement for non-contact pulse wave detection on face without visible lighting

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
AMND Amendment
E601 Decision to refuse application
AMND Amendment
X701 Decision to grant (after re-examination)
GRNT Written decision to grant