CN112248032B - Life body feature detection and identification method for rescue robot

Life body feature detection and identification method for rescue robot

Info

Publication number
CN112248032B
CN112248032B · CN202011101395.6A
Authority
CN
China
Prior art keywords
target point
rescue robot
rescue
robot
detection
Prior art date
Legal status
Active
Application number
CN202011101395.6A
Other languages
Chinese (zh)
Other versions
CN112248032A (en)
Inventor
蔡磊
王效朋
徐涛
罗锦
唐文飞
Current Assignee
Henan Institute of Science and Technology
Original Assignee
Henan Institute of Science and Technology
Priority date
Filing date
Publication date
Application filed by Henan Institute of Science and Technology filed Critical Henan Institute of Science and Technology
Priority to CN202011101395.6A
Publication of CN112248032A
Application granted
Publication of CN112248032B
Legal status: Active

Classifications

    • B25J19/02: Sensing devices (accessories fitted to manipulators, e.g. for monitoring, for viewing; safety devices)
    • B25J19/021: Optical sensing devices
    • B25J19/022: Optical sensing devices using lasers
    • B25J19/026: Acoustical sensing devices
    • B25J11/00: Manipulators not otherwise provided for
    • B25J9/1664: Programme controls characterised by motion, path, trajectory planning
    • B25J9/1666: Avoiding collision or forbidden zones (planning)
    • B25J9/1676: Avoiding collision or forbidden zones (safety, monitoring, diagnostic)

Abstract

The invention provides a life body feature detection and identification method for a rescue robot, addressing the technical problem that existing rescue robots fail in complex environments. The method comprises the following steps: first, an O-shaped detection area is constructed with the position of the rescue robot as the starting point, and the robot's obstacle avoidance system is started; second, a carbon dioxide detector performs a rescue scan of the O-shaped detection area to obtain a target point set; then, an infrared detector performs target detection on the target point set and determines the target points to be rescued; finally, a radar life detector performs third-stage identification of each target point to be rescued and determines the specific position of the trapped person, and the controller of the rescue robot sends a distress signal so that rescue workers are notified in time. By combining multiple detectors, the invention improves identification accuracy, shortens rescue time, and gives the rescue robot the capability to detect human vital signs efficiently and accurately.

Description

Life body feature detection and identification method for rescue robot
Technical Field
The invention relates to the technical field of robot engineering, and in particular to a life body feature detection and identification method for a rescue robot.
Background
Since the start of the 21st century, artificial intelligence, big data and machine learning have taken center stage, and robots, as the carriers of 'AI+', have become favorites of the era. They are applied in fields such as the military, agriculture, education and industry, completing tasks such as transporting goods, inspecting agricultural products and guiding children to study efficiently. Robots for high-risk industries also draw wide attention: when a disaster occurs, rescue teams may be unable to enter the site because of the geographic environment and thus miss the best window for rescue. Compared with human rescue teams, rescue robots adapt better to the environment, which is one reason they have become an important development direction. However, most rescue robots carry only a camera, a thermal imager and a communication system; in some special environments, for example when the target is enclosed by glass, a detector can be isolated from the target, its accuracy is severely degraded, and the rescue fails. Because most accident areas are complex and glassware is common, most rescue robots now face a new round of upgrading. Identification methods based on multi-detector fusion from the life sciences open a new door for vital-sign detection by rescue robots: multi-detector fusion improves both the speed and the accuracy of vital-sign detection and has good prospects in the rescue field.
Disclosure of Invention
Aiming at the technical problem that existing rescue robots fail in complex environments, the invention provides a life body feature detection and identification method for a rescue robot, so that the rescue robot works more efficiently in complex rescue environments and gains fast, effective rescue capability.
The technical scheme of the invention is realized as follows:
a rescue robot life body feature detection and identification method comprises the following steps:
Step one: importing the landform features of the disaster area into the rescue robot, constructing an O-shaped detection area with the current position of the rescue robot as the starting point, and starting the obstacle avoidance system of the rescue robot;
Step two: performing a first-stage rescue scan of the O-shaped detection area with the carbon dioxide detector of the rescue robot to obtain a target point set;
Step three: performing second-stage target detection on the target point set from step two with the infrared detector of the rescue robot, and determining the target points to be rescued;
Step four: performing third-stage identification of the target point to be rescued from step three with the radar life detector, determining the specific position of the trapped person, sending a distress signal through the controller of the rescue robot, and notifying rescue workers in time;
Step five: taking the position reached in step four as the new starting point, reconstructing an O-shaped detection area, and returning to step two to search for the next target point.
The radius R of the O-shaped detection area is 2 times the detection radius R_1 of the carbon dioxide detector.
The working method of the obstacle avoidance system of the rescue robot is as follows:
S1.1, scanning with the two-dimensional laser radar to obtain a temporary grid map, marking obstacles in the grid map with distinct pixel values, and planning an obstacle-avoiding path for the rescue robot with an artificial potential field algorithm;
S1.2, starting the ultrasonic detector, detecting the relative distance between a frontal obstacle and the rescue robot in real time; when the relative distance is less than 20 cm, the ultrasonic detector triggers an emergency state and the rescue robot begins to move backward or sideways, realizing the automatic obstacle avoidance function.
The method of performing the first-stage rescue scan of the O-shaped detection area with the carbon dioxide detector of the rescue robot to obtain the target point set is as follows:
S2.1, starting the rescue mode of the rescue robot, using the carbon dioxide detector to scan the O-shaped detection area over a semicircular range of radius R_1 = 20 m, and determining all target points according to the measured carbon dioxide concentration;
S2.2, calculating the attraction between each target point and the rescue robot, sorting the target points by attraction from largest to smallest, and storing the target points and their corresponding angle information in the controller to obtain the target point set.
The attraction between a target point and the rescue robot is calculated as:

U_f(x_i) = (1/2) · ζ · ρ²(p_1, p_i,goal)

where ζ is the attraction gain, ρ(p_1, p_i,goal) is the concentration difference between the position p_1 of the rescue robot and the target point p_i,goal, and U_f(x_i) is the attraction between target point x_i and the rescue robot.
The method of performing second-stage target detection on the target point set from step two with the infrared detector of the rescue robot and determining the target points to be rescued is as follows:
S3.1, when a target point x_i in the target point set enters the detection range of the infrared detector, detecting with the infrared detector to obtain the infrared radiation energy of target point x_i, where i = 1, 2, … denotes the i-th target point;
S3.2, when the infrared radiation energy of target point x_i is higher than that of its surroundings, and the error between the infrared radiation energy of x_i and the human-body infrared radiation energy range is less than 5%, judging that a living human is located at target point x_i and taking x_i as a target point to be rescued;
S3.3, when infrared radiation energy exists around target point x_i but none exists at x_i itself, adjusting the angle of the rescue robot and detecting x_i again; if 3 consecutive detections find no infrared radiation energy at x_i while infrared radiation exists around it, taking x_i as a target point to be rescued;
S3.4, when the error between the infrared radiation energy of target point x_i and the human-body infrared radiation energy range exceeds 5%, deleting x_i from the target point set, letting i = i + 1, and going to step S3.1.
The method of identifying the target point to be rescued from step three with the radar life detector in the third stage and determining the specific position of the trapped person is as follows: the radar life detector first detects the target point horizontally to obtain the target's specific position relative to the rescue robot and uploads that position to the controller; the rescue robot reads the position from the controller and moves near the target point using the SLAM navigation function of the laser radar; the radar life detector then detects the target point vertically, measuring the vertical distance between the person at the target point and the rescue robot, and determining the precise position of the trapped person.
The detection range of the infrared detector is a semicircular region of radius R_2, where R_2 < R_1.
The human-body infrared radiation wavelength range is 3–50 μm, of which the 8–14 μm band accounts for 46% of the total human-body infrared radiation energy.
Beneficial effects of this technical scheme:
(1) The rescue robot combines the infrared detector, the carbon dioxide detector and the radar life detector for staged detection, giving it the capability to detect human vital signs efficiently and accurately. When the rescue robot reaches the disaster area, the carbon dioxide detector starts first and performs a wide-range scan of the changeable surroundings; after a first target point is found, the infrared detector makes a secondary judgment, and the radar life detector performs precise vital-sign identification and positioning. Combined with its path planning and obstacle avoidance functions, the rescue robot can move through the rescue area autonomously.
(2) The human vital-sign detection and identification method resides in the rescue robot itself, and detection and identification run simultaneously, so they are integrated in one platform. Combining several detectors shortens the rescue time required when only the radar life detector is used and corrects the inaccuracy of using only the carbon dioxide and infrared detectors. During a rescue task the robot automatically starts carbon dioxide detection, and each time a target is detected it starts the next detection stage until radar life detection finishes. After detection, the robot completes the specific rescue task using its path planning and obstacle avoidance capability.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a schematic of the carbon dioxide detection of the present invention.
FIG. 3 is a comparative table of carbon dioxide concentrations for the present invention.
Fig. 4 is a schematic diagram of the detection of the infrared detector of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art based on the embodiments of the present invention without inventive step, are within the scope of the present invention.
As shown in FIG. 1, an embodiment of the invention provides a life body feature detection and identification method for a rescue robot, with the following specific steps:
Step one: importing the landform features of the disaster area into the rescue robot, constructing an O-shaped detection area with the current position of the rescue robot as the starting point, and starting the obstacle avoidance system of the rescue robot;
the radius R of the O-shaped detection area is the detection radius R of the carbon dioxide detector 1 2 times of (i.e. R is 2R 1 Wherein, R is 40 m.
The working method of the obstacle avoidance system of the rescue robot is as follows:
S1.1, scanning with the two-dimensional laser radar to obtain a temporary grid map, marking obstacles in the grid map with distinct pixel values, and planning an obstacle-avoiding path for the rescue robot with an artificial potential field algorithm;
S1.2, starting the ultrasonic detector, detecting the relative distance between a frontal obstacle and the rescue robot in real time; when the relative distance is less than 20 cm, the ultrasonic detector triggers an emergency state and the rescue robot begins to move backward or sideways, realizing the automatic obstacle avoidance function.
Combining the ultrasonic detector with the two-dimensional laser radar's obstacle avoidance allows local obstacles to be avoided and protects the body of the rescue robot from being struck by objects if the disaster area collapses a second time.
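As a minimal sketch of these two layers (Python; only the 20 cm ultrasonic threshold comes from the text above, while the repulsive gain ETA, the influence distance D0 and the linear attraction toward the goal are assumed illustration values):

```python
import numpy as np

ETA = 100.0            # repulsive gain (assumed)
D0 = 2.0               # obstacle influence distance in metres (assumed)
EMERGENCY_DIST = 0.20  # 20 cm ultrasonic emergency threshold from S1.2

def repulsive_force(robot_xy, obstacle_xy):
    """Artificial-potential-field repulsion: gradient of
    U_rep = 0.5*ETA*(1/d - 1/D0)**2 for d < D0, zero otherwise."""
    diff = robot_xy - obstacle_xy
    d = np.linalg.norm(diff)
    if d == 0.0 or d >= D0:
        return np.zeros(2)
    return ETA * (1.0 / d - 1.0 / D0) * (1.0 / d**2) * (diff / d)

def plan_step(robot_xy, goal_xy, obstacle_list, ultrasonic_range_m):
    """One control step: the ultrasonic emergency reflex overrides the
    potential-field plan, as in S1.2; otherwise follow the net force."""
    if ultrasonic_range_m < EMERGENCY_DIST:
        return np.array([-1.0, 0.0])  # back away from the frontal obstacle
    attract = goal_xy - robot_xy      # simple linear attraction (assumed form)
    repel = sum((repulsive_force(robot_xy, o) for o in obstacle_list),
                np.zeros(2))
    force = attract + repel
    return force / (np.linalg.norm(force) + 1e-9)  # unit step direction
```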
Step two: performing the first-stage rescue scan of the O-shaped detection area with the carbon dioxide detector of the rescue robot to obtain the target point set. The carbon dioxide detector senses carbon dioxide gas in the disaster environment by the infrared absorption principle. Its detection time is short, but its accuracy is limited, and during a rescue scan it cannot distinguish whether the aerobic respiration comes from trapped people, animals or plants. Carbon dioxide detection is therefore used as the first step of human feature detection and identification.
The specific method comprises the following steps:
s2.1, starting a rescue mode of the rescue robot, namely starting a coarse detection mode, and using a carbon dioxide detector to set the radius as R 1 Detecting an O-shaped detection area in a semicircular range of 20m, and determining all target points according to the measured carbon dioxide concentration; as shown in fig. 2, the schematic diagram of detecting carbon dioxide in this embodiment shows two target points, which are a target point one and a target point two.
FIG. 3 shows the curves produced by the carbon dioxide detector when carbon dioxide is present: when the rescue robot measures the carbon dioxide concentration, the concentrations at different rescue points can be compared. For example, for the two rescue points detected in FIG. 2, the rescue robot plots corresponding curves from the carbon dioxide data (the concentration comparison table shown in FIG. 3), from which it is evident that the concentration at target point two is higher.
S2.2, calculating the attraction between each target point and the rescue robot, sorting the target points by attraction from largest to smallest, and storing the target points and their corresponding angle information in the controller to obtain the target point set.
In this embodiment, target point one and target point two are given, so the attraction between each of them and the rescue robot is calculated with the following formula:
U_f(x_i) = (1/2) · ζ · ρ²(p_1, p_i,goal)

where ζ is the attraction gain, ρ(p_1, p_i,goal) is the concentration difference between the position p_1 of the rescue robot and the target point p_i,goal, and U_f(x_i) is the attraction between target point x_i and the rescue robot.
Comparing the attractions, the attraction of target point one is smaller than that of target point two. The rescue robot therefore transmits the angle information of the higher-concentration point to the controller and approaches target point two using the two-dimensional laser radar's obstacle avoidance; at the same time, the direction information of target point one is recorded and transmitted to the controller, to be explored by the next rescue robot or after the life body features at target point two have been determined. The sorting is sketched below.
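A minimal illustration of S2.2 (Python; ζ and the ambient concentration at the robot are assumed values, and the quadratic form of U_f follows the formula above):

```python
from dataclasses import dataclass

ZETA = 1.0               # attraction gain ζ (assumed value)
AMBIENT_CO2_PPM = 400.0  # concentration at the robot's position p_1 (assumed)

@dataclass
class TargetPoint:
    angle_deg: float     # bearing of the point, stored with it in the controller
    co2_ppm: float       # carbon dioxide concentration measured at the point

def attraction(t: TargetPoint) -> float:
    """U_f(x_i) = 0.5 * ζ * ρ²(p_1, p_i,goal), ρ = concentration difference."""
    rho = t.co2_ppm - AMBIENT_CO2_PPM
    return 0.5 * ZETA * rho ** 2

def build_target_set(points: list) -> list:
    """Sort target points by attraction, largest first (step S2.2)."""
    return sorted(points, key=attraction, reverse=True)

# Target point two (higher concentration) sorts ahead of target point one:
targets = build_target_set([TargetPoint(30.0, 450.0), TargetPoint(-60.0, 520.0)])
```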
Step three: performing second-stage target detection on the target point set from step two with the infrared detector of the rescue robot, and determining the target points to be rescued. After the first-stage detection, the controller issues the second-stage detection instruction and the infrared detector starts working. When a target object enters the detector's range, the infrared life detector judges human vital signs from differences in infrared radiation wavelength and energy. When the infrared detector looks through glass, however, the result is inaccurate, and it may even produce zero or near-zero response to a trapped person. Therefore, while the carbon dioxide detector guides the rescue robot toward a target point, the infrared sensor may show a distortion phenomenon, and the rescue robot must analyze the possibly erroneous data returned by the infrared sensor. The specific method is as follows:
s3.1, when the target point x in the target point set i When the object enters the detection range of the infrared detector, the infrared radiation wavelength of the infrared detector is used for detecting to obtain a target point x i Wherein i ═ 1,2, … denotes the ith target point; the detection range of the infrared detector is R 2 10m is a semicircular area of radius, and R 2 <R 1
S3.2, when the infrared radiation energy of target point x_i is higher than that of its surroundings, a secondary analysis of the infrared radiation wavelength is performed: the human-body infrared radiation wavelength range is 3–50 μm, of which the 8–14 μm band accounts for 46% of the total human-body infrared radiation energy. The infrared radiation energy of target point x_i is compared with the human-body radiation range; when the error between the two is less than 5%, it is judged that a living human (i.e., a trapped person) is located at target point x_i, and x_i is taken as a target point to be rescued. The infrared detector accurately calibrates the target point with a corner-based sub-pixel feature extraction method, and the controller issues the third-stage identification command to the radar life detector.
S3.3, when infrared radiation energy exists around target point x_i but none exists at x_i itself, it is judged that a glass object at x_i may be blocking the thermal infrared detection, so the angle of the rescue robot is adjusted and x_i is detected again. If 3 consecutive detections still find no infrared radiation energy at x_i (i.e., all three results show the distortion phenomenon), the robot continues to advance toward the azimuth obtained from the carbon dioxide detection, starts the radar life detector to search, and takes x_i as a target point to be rescued.
S3.4, when the error between the infrared radiation energy of target point x_i and the human-body infrared radiation energy range exceeds 5%, that is, when the infrared sensor has scanned the target range without distortion and found no human vital signs, target point x_i is deleted from the target point set, i = i + 1, and the method returns to step S3.1.
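The branching of S3.1–S3.4 condenses into a small decision function (Python; the 5% tolerance and the 3 consecutive re-detections come from the steps above, while the noise floor and the energy units are assumptions):

```python
TOLERANCE = 0.05    # 5% error threshold from S3.2/S3.4
MAX_RETRIES = 3     # consecutive re-detections before trusting S3.3
NOISE_FLOOR = 0.02  # below this reading the point counts as "no radiation" (assumed)

def classify_target(target_energy, surround_energy, human_ref_energy, retries):
    """Second-stage verdict for one target point x_i.
    Returns 'rescue', 'delete', or 'redetect' (adjust angle and scan again)."""
    if target_energy < NOISE_FLOOR:            # nothing at the point itself...
        if surround_energy >= NOISE_FLOOR:     # ...but radiation around it:
            # S3.3 - possible glass occlusion; retry, then hand to the radar
            return "rescue" if retries >= MAX_RETRIES else "redetect"
        return "delete"                        # nothing anywhere near the point
    error = abs(target_energy - human_ref_energy) / human_ref_energy
    if target_energy > surround_energy and error < TOLERANCE:
        return "rescue"                        # S3.2 - living human judged present
    if error > TOLERANCE:
        return "delete"                        # S3.4 - remove x_i from the set
    return "redetect"
```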
Step four: performing third-stage identification of the target point to be rescued from step three with the radar life detector, determining the specific position of the trapped person; the controller of the rescue robot then sends a distress signal and notifies rescue workers in time.
and the radar life detector receives the identification command of the controller and analyzes the target points of the infrared detector and the carbon dioxide detector. The radar life detection instrument firstly carries out horizontal detection on a target point to be rescued to obtain a specific position of the rescue robot on the target point to be rescued, then the specific position is uploaded to the controller, and the rescue robot reads the specific position in the controller and then moves to the position near the target point to be rescued according to the slam navigation function of the laser radar; and secondly, the radar life detection instrument vertically detects a target point to be rescued, detects the vertical distance between a person at the target point to be rescued and the rescue robot, and determines the accurate position of the rescuer.
After the precise position of the trapped person is determined, the controller transmits a distress signal, notifies rescue workers, and shares the detected position information on the network in real time. The radar life detector then begins analyzing the trapped person's behavior, detecting movement, breathing and other conditions, and transmits this information to the controller so that rescue workers can plan the rescue steps more systematically.
The radar life detector achieves accurate positioning, analysis and identification of the detected target: the ultra-low-frequency waves it emits can even detect the condition of the human heart, but its detection takes relatively long, and the longer a person remains trapped after a disaster, the greater the threat to life, so rescue time must not be wasted. For this reason the carbon dioxide detector and the infrared detector are placed before the radar life detector, and deep identification is performed only after an accurate target has been found and confirmed, which greatly shortens the rescue time and improves rescue efficiency.
Step five: taking the position reached in step four as the new starting point, reconstructing an O-shaped detection area, and returning to step two to search for the next target point.
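Schematically, the five steps form the following control loop (Python; the robot interface, i.e. build_o_area, scan_co2, infrared_check, radar_locate, send_distress and move_to_next_start, is hypothetical, and only the control flow mirrors steps one to five):

```python
def rescue_cycle(robot):
    """One long-running pass of the five-step method."""
    while True:
        # Step one: O-shaped area at the current position, R = 2 * R_1
        area = robot.build_o_area(radius=2 * robot.co2_radius)
        # Step two: first-stage CO2 scan, targets sorted by attraction
        targets = robot.scan_co2(area)
        for target in targets:
            # Step three: second-stage infrared verdict
            if robot.infrared_check(target) != "rescue":
                continue
            # Step four: third-stage radar identification and distress call
            position = robot.radar_locate(target)
            robot.controller.send_distress(position)
        # Step five: move on, rebuild the area, search for the next target
        robot.move_to_next_start()
```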
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A rescue robot life body feature detection and identification method, characterized by comprising the following steps:
step one: importing the landform features of the disaster area into the rescue robot, constructing an O-shaped detection area with the current position of the rescue robot as the starting point, and starting the obstacle avoidance system of the rescue robot;
step two: performing a first-stage rescue scan of the O-shaped detection area with the carbon dioxide detector of the rescue robot to obtain a target point set;
step three: performing second-stage target detection on the target point set from step two with the infrared detector of the rescue robot, and determining the target points to be rescued;
step four: performing third-stage identification of the target point to be rescued from step three with the radar life detector, determining the specific position of the trapped person, sending a distress signal through the controller of the rescue robot, and notifying rescue workers in time;
step five: taking the position reached in step four as the new starting point, reconstructing an O-shaped detection area, and returning to step two to search for the next target point.
2. The rescue robot life body feature detection and identification method according to claim 1, wherein the radius R of the O-shaped detection area is 2 times the detection radius R_1 of the carbon dioxide detector.
3. The rescue robot life body feature detection and identification method according to claim 1, wherein the working method of the obstacle avoidance system of the rescue robot is as follows:
S1.1, scanning with the two-dimensional laser radar to obtain a temporary grid map, marking obstacles in the grid map with distinct pixel values, and planning an obstacle-avoiding path for the rescue robot with an artificial potential field algorithm;
S1.2, starting the ultrasonic detector, detecting the relative distance between a frontal obstacle and the rescue robot in real time; when the relative distance is less than 20 cm, the ultrasonic detector triggers an emergency state and the rescue robot begins to move backward or sideways, realizing the automatic obstacle avoidance function.
4. The rescue robot life body feature detection and identification method according to claim 1, wherein performing the first-stage rescue scan of the O-shaped detection area with the carbon dioxide detector of the rescue robot to obtain the target point set comprises:
S2.1, starting the rescue mode of the rescue robot, and using the carbon dioxide detector to scan the O-shaped detection area over a semicircular range of radius R_1 = 20 m, determining all target points according to the measured carbon dioxide concentration;
S2.2, calculating the attraction between each target point and the rescue robot, sorting the target points by attraction from largest to smallest, and storing the target points and their corresponding angle information in the controller to obtain the target point set.
5. The rescue robot life body feature detection and identification method according to claim 4, wherein the attraction between a target point and the rescue robot is calculated as:

U_f(x_i) = (1/2) · ζ · ρ²(p_1, p_i,goal)

where ζ is the attraction gain, ρ(p_1, p_i,goal) is the concentration difference between the position p_1 of the rescue robot and the target point p_i,goal, and U_f(x_i) is the attraction between target point x_i and the rescue robot.
6. The rescue robot life body feature detection and identification method according to claim 1 or 4, wherein performing second-stage target detection on the target point set from step two with the infrared detector of the rescue robot and determining the target points to be rescued comprises:
S3.1, when a target point x_i in the target point set enters the detection range of the infrared detector, detecting with the infrared detector to obtain the infrared radiation energy of target point x_i, where i = 1, 2, … denotes the i-th target point;
S3.2, when the infrared radiation energy of target point x_i is higher than that of its surroundings, and the error between the infrared radiation energy of x_i and the human-body infrared radiation energy range is less than 5%, judging that a living human is located at target point x_i and taking x_i as a target point to be rescued;
S3.3, when infrared radiation energy exists around target point x_i but none exists at x_i itself, adjusting the angle of the rescue robot and detecting x_i again; if 3 consecutive detections find no infrared radiation energy at x_i while infrared radiation exists around it, taking x_i as a target point to be rescued;
S3.4, when the error between the infrared radiation energy of target point x_i and the human-body infrared radiation energy range exceeds 5%, deleting x_i from the target point set, letting i = i + 1, and going to step S3.1.
7. The rescue robot life body feature detection and identification method according to claim 1, wherein performing third-stage identification of the target point to be rescued from step three with the radar life detector and determining the specific position of the trapped person comprises: the radar life detector first detects the target point horizontally to obtain the target's specific position relative to the rescue robot and uploads that position to the controller; the rescue robot reads the position from the controller and moves near the target point using the SLAM navigation function of the laser radar; the radar life detector then detects the target point vertically, measuring the vertical distance between the person at the target point and the rescue robot, and determining the precise position of the trapped person.
8. The rescue robot life body feature detection and identification method according to claim 6, wherein the detection range of the infrared detector is a semicircular region of radius R_2, and R_2 < R_1.
9. The rescue robot life body feature detection and identification method according to claim 6, wherein the human-body infrared radiation wavelength range is 3–50 μm, of which the 8–14 μm band accounts for 46% of the total human-body infrared radiation energy.
CN202011101395.6A 2020-10-15 2020-10-15 Life body feature detection and identification method for rescue robot Active CN112248032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011101395.6A CN112248032B (en) 2020-10-15 2020-10-15 Life body feature detection and identification method for rescue robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011101395.6A CN112248032B (en) 2020-10-15 2020-10-15 Life body feature detection and identification method for rescue robot

Publications (2)

Publication Number Publication Date
CN112248032A CN112248032A (en) 2021-01-22
CN112248032B (en) 2022-07-26

Family

ID=74242603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011101395.6A Active CN112248032B (en) 2020-10-15 2020-10-15 Life body feature detection and identification method for rescue robot

Country Status (1)

Country Link
CN (1) CN112248032B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2403085A1 (en) * 1999-03-16 2000-09-21 Timothy John Brooking Tagging system and method
CN202684924U (en) * 2012-06-25 2013-01-23 防灾科技学院 Intelligent remote control system for mechanical arm and rescue robot using intelligent remote control system
CN103147789A (en) * 2013-03-07 2013-06-12 中国矿业大学 System and method for controlling underground coal mine rescue robot
CN206048216U (en) * 2016-09-06 2017-03-29 广东工业大学 A kind of domestic monitoring robot and its system
CN106514661A (en) * 2016-10-28 2017-03-22 天津城建大学 Underground fire disaster patrolling robot system
CN210829379U (en) * 2019-11-04 2020-06-23 开滦(集团)有限责任公司电信分公司 Emergency system
CN111353687A (en) * 2020-01-16 2020-06-30 黑龙江科技大学 Coal mine emergency rescue command information management system and method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shang Hong et al., "Overview of the development of earthquake search-and-rescue robot equipment," China Emergency Rescue, No. 3, 20 May 2018, pp. 38-45. *

Also Published As

Publication number Publication date
CN112248032A (en) 2021-01-22

Similar Documents

Publication Publication Date Title
Correa et al. Mobile robots navigation in indoor environments using kinect sensor
Dames et al. Autonomous localization of an unknown number of targets without data association using teams of mobile sensors
CN102288176B (en) Coal mine disaster relief robot navigation system based on information integration and method
CN202216696U (en) Coal mine disaster relief robot navigation device based on information integration
Winkvist et al. Towards an autonomous indoor aerial inspection vehicle
CN111070180A (en) Post-disaster rescue channel detection robot based on ROS
CN111753694B (en) Unmanned vehicle target searching system and method
Jin et al. A robust autonomous following method for mobile robots in dynamic environments
CN113029169A (en) Air-ground cooperative search and rescue system and method based on three-dimensional map and autonomous navigation
Ehlers et al. Map management approach for SLAM in large-scale indoor and outdoor areas
WO2020085142A1 (en) Measurement apparatus and measurement system
Zhang et al. Mobile sentry robot for laboratory safety inspection based on machine vision and infrared thermal imaging detection
Nickerson et al. An autonomous mobile robot for known industrial environments
CN112248032B (en) Life body feature detection and identification method for rescue robot
CN108107916A (en) A kind of Intelligent unattended machine hunting system
CN112286190A (en) Security patrol early warning method and system
Morra et al. Visual control through narrow passages for an omnidirectional wheeled robot
Hahn et al. Heat mapping for improved victim detection
Alhmiedat et al. A Systematic Approach for Exploring Underground Environment Using LiDAR-Based System.
CN107544504B (en) Disaster area rescue robot autonomous detection system and method for complex environment
Gustafson et al. Swarm technology for search and rescue through multi-sensor multi-viewpoint target identification
Bostelman et al. 3D range imaging for urban search and rescue robotics research
Nieuwenhuisen et al. Autonomous MAV navigation in complex GNSS-denied 3D environments
Jeon et al. Indoor/outdoor transition recognition based on door detection
Ho-Won et al. Using hybrid algorithms of human detection technique for detecting indoor disaster victims

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant