CN112651388A - Disaster area vital signal detection and positioning method based on unmanned aerial vehicle - Google Patents


Info

Publication number
CN112651388A
CN112651388A (application CN202110075179.7A)
Authority
CN
China
Prior art keywords
signal
video
motion
motion signal
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110075179.7A
Other languages
Chinese (zh)
Other versions
CN112651388B (en)
Inventor
杨学志
张龙
沈晶
吴克伟
孔瑞
杨平安
梁帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology
Priority to CN202110075179.7A
Publication of CN112651388A
Application granted
Publication of CN112651388B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00: Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12: Classification; Matching

Abstract

The invention provides a disaster area vital signal detection and positioning method based on an unmanned aerial vehicle, comprising the following steps: step S1, video acquisition and image stabilization using an unmanned aerial vehicle; step S2, extracting motion signals from the processed video; step S3, analyzing the frequency characteristics of the motion signals and extracting their peak frequency sequences; step S4, designing a signal classifier that judges whether the peak frequency remains approximately constant over time and whether it falls within the normal respiration rate range of the search-and-rescue target; step S5, vital signal detection and localization: respiratory motion signals identified by the designed classifier are mapped onto a predetermined frame of the stabilized video image to obtain a distribution map of the respiratory motion signals, i.e., the positions of potential survivors. The invention effectively improves the accuracy of vital signal detection and positioning in disaster areas and is robust to background motion.

Description

Disaster area vital signal detection and positioning method based on unmanned aerial vehicle
Technical Field
The invention relates to the technical field of computer vision, and in particular to an unmanned aerial vehicle-based vital signal detection and positioning method that resists background motion interference in disaster scenes.
Background
Natural disasters (such as fires, earthquakes, and debris flows) threaten human life, making post-disaster rescue critically important. Timely information about survivors' positions and vital signs supports comprehensive deployment of rescue tasks and allows more lives to be saved quickly. However, the disaster area environment is unknown and may contain hazards such as toxic gases, harmful substances, radiation, and extreme temperatures, which challenges rescue operations.
To address these challenges, rescue robots have been used to assist disaster rescue; for example, they can enter narrow gaps that people cannot reach and help rescue teams detect survivors. However, such robots generally require manual operation, and under time pressure operators may suffer cognitive and physical fatigue. Semi-autonomous and fully autonomous detection robots have since appeared, but they require substantial technical support, including path planning, autonomous navigation, task allocation and decision-making, and victim identification. In addition, terrain makes it difficult for ground robots to reach disaster sites across mountains, rivers, and lakes, which may delay rescue.
The development of unmanned aerial vehicle technology provides a new solution for disaster rescue. A drone is not hindered by complex terrain, can fly quickly to a disaster site, and can transmit on-site images and video back in real time, so rescue workers can quickly assess the situation and deploy rescue tasks in time. However, at a disaster site, survivors may be completely occluded by dust, debris, and other objects, making it impossible to determine from appearance alone whether survivors are present. In February 2020, researchers at the University of South Australia published a method that addresses this problem by detecting respiratory motion in drone video through image and signal processing, extracting the respiration rate and determining survivors' locations. The method can detect occluded or unoccluded survivors lying on the ground at a disaster site.
The specific steps of that study were: first, stabilize the drone video to reduce camera shake to some extent; then divide the initially stabilized video into equal-sized blocks and further stabilize each block; apply motion magnification to each video block to enhance the breathing signal; then difference and average each amplified block to obtain a 1-dimensional signal as the respiratory signal and compute the respiration rate; finally, map the detected respiration rate back onto the original video image to obtain a distribution map of the respiration signal, i.e., the positions of potential survivors.
However, in a disaster scene video shot by a drone, background motions such as trees and grass moving in the wind exist alongside the respiratory motion of survivors. The University of South Australia method presupposes that only respiratory motion exists in the scene and is therefore suitable only for scenes without background motion. When background motion is present, the method cannot distinguish it from respiratory motion, and directly averaging the two clearly cannot guarantee that a correct respiratory signal is extracted; this causes respiration rate detection failures and localization errors, preventing accurate and efficient rescue.
Disclosure of Invention
Aiming at the poor resistance to background motion interference of existing unmanned aerial vehicle-based vital signal detection and positioning methods, the invention provides a rescue-assisting method based on frequency characteristic analysis of motion signals. By analyzing how the frequency of motion signals in the drone video changes over time, a classifier is designed to distinguish respiratory motion signals from background motion signals. The method can detect the vital signs and positions of survivors occluded by foreign objects at a disaster site without being affected by other background motion in the environment, achieving higher accuracy in vital signal detection and positioning.
In order to achieve the purpose, the technical scheme provided by the invention is as follows:
the method for detecting and positioning the vital signals of the disaster area resisting background motion interference comprises the following steps:
and step S1, video acquisition and image stabilization.
A disaster scene video is shot with a DJI drone (model: DJI Mavic Air 2). The shooting parameters are set as follows: video resolution, e.g., 1920 × 1080; frame rate 60 fps; duration 20 seconds; shooting height 3-4 meters; video format MP4. Next, the resolution of the drone video is reduced, e.g., to 960 × 540, to increase processing speed. The video is then stabilized to eliminate drone shake.
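For illustration, the resolution-reduction step can be sketched in Python with NumPy. The function name and the naive every-second-pixel strategy are assumptions for this sketch; production code would normally use an area-interpolation resize.

```python
import numpy as np

def downsample_half(frames: np.ndarray) -> np.ndarray:
    """Halve the spatial resolution of a video (T, H, W, C), e.g.
    1920x1080 -> 960x540, by keeping every second pixel.

    A naive stand-in for proper area-interpolation resizing; it is
    enough to speed up the downstream per-pixel signal processing."""
    return frames[:, ::2, ::2]

# toy stand-in for a 1080p drone clip (10 frames only)
video = np.zeros((10, 1080, 1920, 3), dtype=np.uint8)
small = downsample_half(video)
print(small.shape)  # (10, 540, 960, 3)
```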
In step S2, a motion signal is extracted.
The video is converted from the RGB color space to the YIQ color space, and the Y-channel (luminance) data is extracted and recorded as the "Y video". Let the luminance signal at time t and coordinate x in the Y video be I(x; t); the motion signal at coordinate x is then approximately represented as B(x; t) = I(x; t) - I(x; 0).
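The Y-channel extraction and the frame-difference motion signal can be sketched as follows. The luma weights are the standard ones of the RGB-to-YIQ transform; the function names are illustrative.

```python
import numpy as np

# Luma (Y) row of the standard RGB -> YIQ transform matrix
Y_WEIGHTS = np.array([0.299, 0.587, 0.114])

def y_video(rgb_frames: np.ndarray) -> np.ndarray:
    """Convert an RGB video (T, H, W, 3) to its luminance 'Y video' (T, H, W)."""
    return rgb_frames @ Y_WEIGHTS

def motion_signal(y: np.ndarray) -> np.ndarray:
    """B(x; t) = I(x; t) - I(x; 0): per-pixel deviation from the first frame."""
    return y - y[0]

rgb = np.random.rand(5, 4, 4, 3)   # 5 toy frames
B = motion_signal(y_video(rgb))
```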
Step S3, analyzing the frequency characteristics of the motion signal.
First, time-frequency analysis is performed on the extracted motion signal B(x; t), i.e., its short-time Fourier transform is computed, and the frequency peak in each sub-signal spectrum is recorded, yielding a peak frequency sequence that varies over time, denoted {PF_i^x}, 1 ≤ i ≤ N, where N represents the number of sub-signals of B(x; t);
then, the standard deviation std(x) of the peak frequency sequence is solved. Respiratory motion is a stable periodic signal whose peak frequency remains essentially constant over time, so std(x) is small and close to 0; by contrast, trees and grass in a disaster scene sway aperiodically in the wind, so their peak frequency varies and std(x) is large. Thresholding std(x) therefore separates periodic signals from aperiodic background motion to some extent.
In addition, it is determined whether the peak frequency meets the breathing-band requirement. The breathing rate of an adult is generally between 0.2 and 0.33 Hz, i.e., 12 to 20 breaths per minute, so only a motion signal satisfying this requirement can be a breathing signal. Checking whether every peak frequency in the sequence falls within the breathing band therefore further distinguishes breathing motion from background motion.
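The time-frequency analysis above can be sketched with SciPy. The window length and the synthetic 0.3 Hz test signal are assumptions; the patent does not specify STFT parameters.

```python
import numpy as np
from scipy.signal import stft

FS = 60.0  # video frame rate (60 fps, per the shooting parameters)

def peak_frequency_sequence(b: np.ndarray, fs: float = FS,
                            nperseg: int = 600) -> np.ndarray:
    """Short-time Fourier transform of a 1-D motion signal b(t); return
    the frequency of the spectral peak of each windowed sub-signal,
    i.e. the time-varying peak frequency sequence {PF_i}."""
    freqs, _, Z = stft(b, fs=fs, nperseg=nperseg)
    return freqs[np.abs(Z).argmax(axis=0)]

# stable breathing-like signal at 0.3 Hz over a 20 s clip
t = np.arange(0, 20, 1 / FS)
pf = peak_frequency_sequence(np.sin(2 * np.pi * 0.3 * t))
# for a stable periodic signal the sequence is near-constant (std close to 0)
```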
And step S4, designing a signal classifier.
By analyzing the standard deviation of the peak frequency sequence of the motion signal and whether the breathing-band requirement is met, a signal classifier is designed to distinguish respiratory motion from background motion, expressed as:
LM(x) = 1, if std(x) is close to 0 and 0.2 Hz ≤ PF_i^x ≤ 0.33 Hz for all i; LM(x) = 0, otherwise (1)
When LM(x) = 1, the signal at coordinate x is a respiratory motion signal; when LM(x) = 0, the signal at coordinate x is a background motion signal;
and step S5, detecting and positioning vital signals.
First, the detection result LM(x) of the signal classifier is mapped onto a predetermined video frame to obtain the vital signal localization map LMLS(x), expressed as follows:
LMLS(x) = LM(x) ⊙ IMG(x) (2)
where ⊙ denotes the element-wise (Hadamard) product, and IMG(x) represents the predetermined frame image of the stabilized video.
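The mapping is a per-pixel mask, sketched below with toy sizes (a real LM map would have the video's spatial resolution):

```python
import numpy as np

def localization_map(lm_map: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """LMLS(x) = LM(x) (element-wise) IMG(x): keeps only the pixels of the
    predetermined frame where breathing was detected."""
    return lm_map * frame

lm_map = np.zeros((4, 4))
lm_map[1:3, 1:3] = 1                # toy detection result (2x2 region)
frame = np.full((4, 4), 200.0)      # toy predetermined (first) frame
lmls = localization_map(lm_map, frame)
```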
Further, the coordinates x' at which LM(x) = 1 are looked up, and the motion signals B(x'; t) at the corresponding coordinate positions are extracted and averaged, yielding the respiration signal:
s(t) = BPF(t) * [(1/M) Σ_{j=1}^{M} B(x'_j; t)] (3)
where M represents the total number of x' coordinates, * denotes convolution, and BPF(t) is an ideal band-pass filter with passband 0.2-0.33 Hz.
Compared with the prior art, the disaster area vital signal detection and positioning method resisting background motion interference has the following beneficial effects:
(1) by researching and analyzing the frequency characteristics of motion signals, the method designs a signal classifier that distinguishes respiratory motion from background motion in drone video;
(2) compared with the University of South Australia method and other existing work, the method effectively improves the accuracy of vital signal detection and positioning in disaster areas and is more robust to background motion, helping to realize rapid, efficient, and accurate life detection over a wide area.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and together with the description serve to explain the principles of the invention, without limiting it.
FIG. 1 is a schematic flow chart of the method for detecting and positioning the vital signals of the disaster area with background motion interference resistance according to the present invention;
FIG. 2 is a schematic diagram of the frequency characteristic analysis and detection results of the signal classifier of the motion signal according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a result of detecting and positioning vital signals according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following detailed description of the embodiments of the present invention will be made with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the scope of the invention. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
With reference to fig. 1, the invention provides a method for detecting and positioning a disaster area vital signal resisting background motion interference, which mainly comprises the following steps: video image stabilization, motion signal extraction, frequency characteristic analysis, signal classifier design, and vital signal detection and positioning. Specifically, the constituent steps are set forth as follows:
step S1: and (5) video acquisition and image stabilization.
In this step, according to an embodiment of the present invention, it is preferably implemented as:
1a) A DJI drone (model: DJI Mavic Air 2) is used to shoot video of the disaster site. The shooting parameters are set as follows: video resolution 1920 × 1080, frame rate 60 fps, duration 20 seconds, shooting height 3-4 m, video format MP4. Next, the resolution of the drone video is reduced to 960 × 540 to increase processing speed.
1b) The video is stabilized to eliminate drone shake.
In step S2, a motion signal is extracted.
In this step, according to an embodiment of the present invention, specifically, the following steps are performed:
2a) the video is subjected to color space conversion, the video is converted from an RGB space to a YIQ space, and video data of a Y channel (luminance channel) is extracted and recorded as a "Y video".
2b) Let the luminance signal at time t and coordinate x in the Y video be I(x; t); the motion signal at coordinate x is then approximately represented as B(x; t) = I(x; t) - I(x; 0).
Step S3, analyzing the frequency characteristics of the motion signal.
In this step, according to an embodiment of the present invention, specifically, the following steps are performed:
3a) With reference to fig. 2, time-frequency analysis is first performed on the extracted motion signal B(x; t), i.e., its short-time Fourier transform is computed, and the frequency peak in each sub-signal spectrum is recorded, yielding the time-varying peak frequency sequence {PF_i^x}, 1 ≤ i ≤ N, where N represents the number of sub-signals of B(x; t);
3b) The mean μ_x and standard deviation std(x) of the peak frequency sequence are solved, i.e.
μ_x = (1/N) Σ_{i=1}^{N} PF_i^x,  std(x) = sqrt((1/N) Σ_{i=1}^{N} (PF_i^x - μ_x)^2)
3c) It is determined whether each peak frequency PF_i^x meets the breathing-band requirement: if PF_i^x lies between 0.2 and 0.33 Hz, the point x is considered a potential respiration signal point.
Regarding the peak frequency judgment: respiratory motion is a stable periodic signal whose peak frequency remains essentially constant over time, so the standard deviation of its peak frequency sequence is small and close to 0; by contrast, trees and grass in a disaster scene sway aperiodically in the wind, so their peak frequency varies and the standard deviation is large. Thresholding this value therefore separates periodic signals from aperiodic background motion to some extent.
In addition, it is determined whether the peak frequency meets the breathing-band requirement. The breathing rate of an adult is generally between 0.2 and 0.33 Hz, i.e., 12 to 20 breaths per minute, so only a motion signal satisfying this requirement can be a breathing signal. Checking whether every peak frequency in the sequence falls within the breathing band further distinguishes breathing motion from background motion.
And step S4, designing a signal classifier.
With reference to fig. 2, by analyzing the standard deviation of the peak frequency sequence of the motion signal and determining whether the breathing-band requirement is satisfied, a signal classifier for distinguishing respiratory motion from background motion is designed, represented as:
LM(x) = 1, if std(x) is close to 0 and 0.2 Hz ≤ PF_i^x ≤ 0.33 Hz for all i; LM(x) = 0, otherwise (3)
and step S5, detecting and positioning vital signals.
With reference to fig. 3, in this step, according to an embodiment of the present invention, specifically:
5a) The detection result LM(x) of the signal classifier is mapped onto a predetermined video frame to obtain the vital signal localization map LMLS(x), expressed as follows:
LMLS(x) = LM(x) ⊙ IMG(x) (4)
where ⊙ denotes the element-wise (Hadamard) product, and IMG(x) represents the predetermined frame image of the stabilized video.
5b) The coordinates x' at which LM(x) = 1 are looked up, and the motion signals B(x'; t) at the corresponding coordinate positions are extracted and averaged to obtain the respiration signal, expressed as follows:
s(t) = BPF(t) * [(1/M) Σ_{j=1}^{M} B(x'_j; t)] (5)
where M represents the total number of x' coordinates, * denotes convolution, and BPF(t) is an ideal band-pass filter with passband 0.2-0.33 Hz.
In addition, the predetermined frame is a first frame image of the stabilized video.
The invention creatively introduces motion-frequency analysis to distinguish background motion from respiratory motion, improving the precision of unmanned aerial vehicle-based life detection; it is therefore applicable to more scenes, with higher accuracy, higher search-and-rescue efficiency, and lower cost.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that are within the spirit and principle of the present invention are intended to be included therein. Therefore, the scope of the present invention should be determined by the following claims.

Claims (8)

1. A disaster area vital signal detection and positioning method based on an unmanned aerial vehicle is characterized by comprising the following steps:
step S1, using an unmanned aerial vehicle to carry out video acquisition and image stabilization;
step S2, extracting motion signals of the processed video;
step S3, analyzing the frequency characteristics of the motion signal, and extracting the peak frequency sequence of the motion signal;
step S4, designing a signal classifier that judges whether the peak frequency remains approximately constant over time and whether it falls within the normal respiration rate range of the search-and-rescue target;
step S5, vital signal detection and localization: distinguishing respiratory motion signals with the designed signal classifier, and mapping them onto a predetermined frame of the stabilized video image to obtain a distribution map of the respiratory motion signals, i.e., the positions of potential survivors.
2. The method according to claim 1, wherein the step S1 includes:
(11) shooting a disaster scene video with the unmanned aerial vehicle, with parameters set as follows: video resolution 1920 × 1080, frame rate 60 fps, duration 20 seconds, shooting height 3-4 m, video format MP4;
(12) reducing the resolution of the drone video to 960 × 540 to increase processing speed;
(13) stabilizing the video to eliminate drone shake.
3. The method according to claim 1, wherein the step S2 includes: converting the video from an RGB space to a YIQ space, extracting Y-channel video data, and recording it as a Y video; letting the luminance signal at time t and coordinate x in the Y video be I(x; t), the motion signal at coordinate x is approximately represented as B(x; t) = I(x; t) - I(x; 0).
4. The method according to claim 3, wherein the step S3 includes: (31) performing time-frequency analysis on the extracted motion signal B(x; t), solving its short-time Fourier transform, and recording the frequency peak in each sub-signal spectrum to obtain a time-varying peak frequency sequence, denoted {PF_i^x}, 1 ≤ i ≤ N, wherein N represents the number of B(x; t) sub-signals; (32) solving the standard deviation std(x) of the peak frequency sequence.
5. The method according to any one of claims 1 to 4, wherein the step S4 includes: by analyzing the standard deviation of the peak frequency sequence of the motion signal and whether the peak frequency meets the breathing-band requirement, designing a signal classifier for distinguishing respiratory motion from background motion, represented as follows:
LM(x) = 1, if std(x) is close to 0 and 0.2 Hz ≤ PF_i^x ≤ 0.33 Hz for all i; LM(x) = 0, otherwise (1)
wherein when LM(x) = 1, the signal at coordinate x is a respiratory motion signal; when LM(x) = 0, the signal at coordinate x is a background motion signal.
6. The method according to claim 5, wherein the step S5 includes: mapping the detection result LM (x) of the signal classifier to a predetermined video frame to obtain a positioning map LMLS (x) of the vital signal, which is expressed as follows:
LMLS(x) = LM(x) ⊙ IMG(x) (2)
wherein ⊙ denotes the element-wise (Hadamard) product, and IMG(x) represents a predetermined frame image of the stabilized video.
7. The method according to claim 6, wherein the step S5 further comprises:
the respiratory signal is obtained by looking up the coordinates x' at which LM(x) = 1, extracting the motion signals B(x'; t) at the corresponding coordinate positions, and taking their average, expressed as follows:
s(t) = BPF(t) * [(1/M) Σ_{j=1}^{M} B(x'_j; t)] (3)
wherein M represents the total number of x' coordinates, * denotes convolution, and BPF(t) is an ideal band-pass filter with passband 0.2-0.33 Hz.
8. The method of claim 1, wherein the predetermined frame is a first frame image of the stabilized video.
CN202110075179.7A 2021-01-20 2021-01-20 Disaster area vital signal detection and positioning method based on unmanned aerial vehicle Active CN112651388B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110075179.7A CN112651388B (en) 2021-01-20 2021-01-20 Disaster area vital signal detection and positioning method based on unmanned aerial vehicle


Publications (2)

Publication Number Publication Date
CN112651388A (en) 2021-04-13
CN112651388B (en) 2022-04-26

Family

ID=75371116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110075179.7A Active CN112651388B (en) 2021-01-20 2021-01-20 Disaster area vital signal detection and positioning method based on unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN112651388B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100249611A1 (en) * 2009-03-26 2010-09-30 Edan Instruments. Inc. Respiratory Signal Processing Method
CN104173051A (en) * 2013-05-28 2014-12-03 天津点康科技有限公司 Automatic noncontact respiration assessing system and assessing method
EP2960862A1 (en) * 2014-06-24 2015-12-30 Vicarious Perception Technologies B.V. A method for stabilizing vital sign measurements using parametric facial appearance models via remote sensors
CN106821347A (en) * 2016-12-20 2017-06-13 中国人民解放军第三军医大学 A kind of life detection radar breathing of FMCW broadbands and heartbeat signal extraction algorithm
CN107831491A (en) * 2017-10-10 2018-03-23 广州杰赛科技股份有限公司 Vital signs detection method and system
CN109507653A (en) * 2018-10-22 2019-03-22 中国人民解放军第四军医大学 A method of multi-information perception bioradar system and its acquisition target information based on UWB
CN109875529A (en) * 2019-01-23 2019-06-14 北京邮电大学 A kind of vital sign detection method and system based on ULTRA-WIDEBAND RADAR
CN110200607A (en) * 2019-05-14 2019-09-06 南京理工大学 Method for eliminating body motion influence in vital sign detection based on optical flow method and LMS algorithm
CN110327036A (en) * 2019-07-24 2019-10-15 东南大学 The method of breath signal and respiratory rate is extracted from wearable ECG
CN110544259A (en) * 2019-09-04 2019-12-06 合肥工业大学 method for detecting disguised human body target under complex background based on computer vision
WO2020004721A1 (en) * 2018-06-27 2020-01-02 유메인주식회사 Method for measuring vital information by using ultra-wideband impulse radar signal
CN110909717A (en) * 2019-12-09 2020-03-24 南京理工大学 Moving object vital sign detection method


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DINGLIANG WANG et al.: "Photoplethysmography based stratification of blood pressure using multi information fusion artificial neural network", 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) *
傅正龙 et al.: "A reliable motion detection scheme and its application" (in Chinese), Digital TV and Digital Video *
杨昭 et al.: "Heart rate estimation from face video resistant to motion interference" (in Chinese), Journal of Electronics & Information Technology *
霍亮 et al.: "Video-based respiration rate detection for day and night environments" (in Chinese), Journal of Image and Graphics *

Also Published As

Publication number Publication date
CN112651388B (en) 2022-04-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant