CN115205714A - Fire point positioning method and device, electronic equipment and storage medium - Google Patents

Fire point positioning method and device, electronic equipment and storage medium

Info

Publication number
CN115205714A
CN115205714A (application CN202210692336.3A)
Authority
CN
China
Prior art keywords
target
flame
image
determining
point
Prior art date
Legal status
Pending
Application number
CN202210692336.3A
Other languages
Chinese (zh)
Inventor
陈友明 (Chen Youming)
Current Assignee
Sichuan Honghe Communication Group Co ltd
Original Assignee
Sichuan Honghe Communication Group Co ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Honghe Communication Group Co ltd
Priority to CN202210692336.3A
Publication of CN115205714A
Status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/02 — Neural networks; G06N3/08 — Learning methods
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/82 — Recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V20/17 — Terrestrial scenes taken from planes or by drones
    • G06V20/41 — Higher-level, semantic clustering, classification or understanding of video scenes
    • G06V20/46 — Extracting features or characteristics from the video content
    • G06V2201/07 — Target detection


Abstract

The invention relates to a fire point positioning method and device, an electronic device and a storage medium, and belongs to the technical field of positioning. In the invention, a first detection image is used to acquire a focal length parameter, and the flame target is focused based on that parameter so that a target area containing the flame target can be accurately obtained; a thermal-induction infrared emitter is then used to identify the fire point in the target area and perform infrared ranging on it. This improves the accuracy and efficiency of fire point positioning and effectively reduces manual intervention in the fire point positioning process.

Description

Fire point positioning method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of positioning technologies, and in particular, to a method and an apparatus for positioning a fire point, an electronic device, and a storage medium.
Background
Existing fire point positioning methods mainly determine the position of a heat-source fire point by detecting a fixed heat source, which makes accurate positioning of the fire point difficult to achieve: the detected heat-source information must be transmitted to a back end, where the position and distance of the fire point are estimated manually by eye before the fire point can be reliably located and follow-up responses such as fire extinguishing can take place.
Based on this analysis, existing fire point positioning methods suffer from low positioning efficiency, low positioning accuracy and a high degree of human intervention.
Disclosure of Invention
The invention provides a fire point positioning method and device, an electronic device and a storage medium, aiming to remedy the low positioning efficiency, low positioning accuracy and high degree of human intervention of existing fire point positioning methods.
In a first aspect, to solve the above technical problem, the present invention provides a fire point locating method, including:
acquiring a first detection image taking the position of the flame target as the image center position;
determining a focal length parameter corresponding to the flame target based on the first detection image;
determining a target area where the flame target is located based on the focal length parameter;
the location of the fire point in the target area is determined using a thermally-induced infrared emitter.
The invention has the beneficial effects that a focal length parameter is acquired using the first detection image and the flame target is focused based on that parameter, so that a target area containing the flame target is accurately obtained; a thermal-induction infrared emitter then identifies the fire point in the target area and measures its distance. This improves the accuracy and efficiency of fire point positioning and effectively reduces manual intervention in the fire point positioning process.
Further, the determining the focal length parameter corresponding to the flame target based on the first detection image includes:
acquiring the image size of a first detection image;
determining the size of an anchor frame of a target anchor frame corresponding to the flame target based on a pre-trained flame target detection model and a first detection image;
determining the size ratio of the flame target in the first detection image based on the size of the anchor frame and the image size of the first detection image;
and determining a focal length parameter corresponding to the flame target based on the size ratio.
The beneficial effect of this improvement is that a focal length parameter suited to the flame target can be reasonably determined from the size ratio of the flame target in the first detection image.
Further, the flame target detection model is obtained by training in the following way:
collecting videos corresponding to different scenes respectively;
collecting images containing flame targets from each video to obtain an effective data set;
performing data enhancement processing on the effective data set to obtain an expanded data set;
and performing iterative training on the initial flame target detection model based on the extended data set until the loss function value of the initial flame target detection model meets a preset training end condition, and determining the initial flame target detection model at the end of training as the flame target detection model, wherein the model structure of the initial flame target detection model is a Yolov5x model structure.
The beneficial effect of this improvement is that a data set for model training is built from videos of different scenes and then data-enhanced, which effectively improves the broad applicability of the flame target detection model.
Further, the above determining of the location of the fire point in the target area using a thermal-induction infrared emitter includes:
acquiring a thermal sensing image of the target area using the thermal-induction infrared emitter, wherein pixel points at different positions in the thermal sensing image correspond to different deflection angles of the emitter;
acquiring a target point in the thermal sensing image that meets a set condition, wherein the set condition is that the gray value of the point equals the gray value of a target pixel point and the gray value of every pixel point in a set peripheral region around the target pixel point is greater than a set gray value, the target pixel point being the pixel point with the maximum gray value in the thermal sensing image;
performing laser ranging on the target point multiple times to obtain multiple target distances between the target point and the thermal-induction infrared emitter;
based on the respective target distances, a location of a fire point in the target area is determined.
The beneficial effect of this improvement is that the fire point in the target area can be quickly found using the gray values in the thermal sensing image, and ranging the fire point multiple times further improves the accuracy of locating it.
Further, the determining a location of a fire point in the target area based on the respective target distances includes:
determining the average value of each target distance, and taking the average value as the fire point distance;
acquiring a target deflection angle of the thermal induction infrared transmitter corresponding to the target point;
determining a first coordinate of the fire point under a self coordinate system based on the pre-established self coordinate system, the fire point distance and the target deflection angle;
converting the first coordinate into a second coordinate under a world coordinate system, and acquiring longitude and latitude data and altitude coordinate data corresponding to the second coordinate to obtain the position of a fire point in a target area;
the target deflection angle comprises a horizontal plane rotation included angle, a side plane rotation included angle and a vertical rotation included angle, the original point of a self coordinate system is the position of the thermal induction infrared transmitter, and the first coordinate is determined by the following formula:
x_h = 0 + l · sin(α_y) · cos(α_z)
y_h = 0 + l · sin(α_x) · cos(α_z)
z_h = 0 + l · sin(α_x) · cos(α_y)
wherein x_h, y_h and z_h are the coordinate values of the first coordinate along the x, y and z axes of the self coordinate system, l represents the fire point distance, α_x represents the horizontal-plane rotation angle, α_y the side-plane rotation angle, and α_z the vertical rotation angle.
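Under the assumption that the three angles are supplied in radians, the first-coordinate formulas above can be sketched directly:

```python
import math

def fire_point_coordinate(l, alpha_x, alpha_y, alpha_z):
    """First coordinate of the fire point in the emitter's own coordinate
    system, per the formulas above; angles are assumed to be in radians."""
    x_h = l * math.sin(alpha_y) * math.cos(alpha_z)
    y_h = l * math.sin(alpha_x) * math.cos(alpha_z)
    z_h = l * math.sin(alpha_x) * math.cos(alpha_y)
    return x_h, y_h, z_h
```

The `0 +` origin term drops out because the self coordinate system is centered on the emitter itself.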
The beneficial effect of this improvement is that the first coordinate of the fire point in the self coordinate system can be rapidly calculated from the target deflection angle and the fire point distance, and the longitude, latitude and altitude of the fire point are then determined by coordinate conversion based on the first coordinate, thereby positioning the fire point.
Further, the determining the target area where the flame target is located based on the focal length parameter includes:
based on the focal length parameter, randomly acquiring a set number of video frames for the flame target within a set time length to obtain a plurality of target video frames;
for each target video frame, detecting whether the target video frame is a flame image or not based on a trained flame target detection model, wherein the flame image is an image containing a flame target;
if the number of the detected flame images is larger than the set value, determining a target frame corresponding to the flame target in the flame images by using the flame target detection model and the flame images, and determining an area corresponding to the target frame as a target area where the flame target is located;
and if the number of the detected flame images is not more than a set value, acquiring the collected images of different areas until the first detection image with the position of the flame target as the image center position is acquired again.
The beneficial effect of this improvement is that performing flame target identification multiple times on the video frames reduces the misjudgment rate of the flame target and ensures the validity of the subsequently acquired fire point position.
Further, the acquiring of the first detection image with the position of the flame target as the image center position includes:
acquiring a second detection image containing a flame target;
acquiring the central point position of a flame target in a second detection image to obtain a first central point position;
acquiring the central point position of a second detection image to obtain a second central point position;
detecting whether the position of the first center point is the same as the position of the second center point;
if the position of the first central point is the same as that of the second central point, determining the second detection image as the first detection image;
and if the first central point position is different from the second central point position, acquiring a first shooting angle corresponding to the second detection image, determining a rotation angle of the acquisition equipment based on distance information between the first central point position and the second central point position, determining a second shooting angle corresponding to the first detection image based on the first shooting angle and the rotation angle, and performing image acquisition on the flame target according to the second shooting angle to obtain the first detection image.
The beneficial effect of this improvement is that acquiring a first detection image centered on the flame target ensures the completeness of the flame target contained in the first detection image, which facilitates the subsequent positioning of the fire point.
In a second aspect, the present invention provides a fire point locating device, comprising:
the acquisition module is used for acquiring a first detection image taking the position of the flame target as the image center position;
the first processing module is used for determining a focal length parameter corresponding to the flame target based on the first detection image;
the second processing module is used for determining a target area where the flame target is located based on the focal length parameter;
and the positioning module is used for determining the position of the fire point in the target area by utilizing the thermal induction infrared transmitter.
In a third aspect, the present invention provides a computer readable storage medium having stored therein instructions which, when run on a terminal device, cause the terminal device to perform all or part of the steps of the fire point locating method as in the first aspect.
In a fourth aspect, the present invention provides an electronic device comprising a memory, a processor and a program stored on the memory and running on the processor, the processor implementing all or part of the steps of the fire point locating method according to the first aspect when executing the program.
Drawings
Fig. 1 is a schematic flowchart of a fire point locating method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an ignition point positioning apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following examples are further illustrative and supplementary to the present invention and do not limit the present invention in any way.
A fire point locating method according to an embodiment of the present invention is described below with reference to the accompanying drawings.
Referring to fig. 1, the present invention provides a fire point locating method, including steps S1 to S4, wherein:
in step S1, a first detection image having the position of the flame target as the image center position is acquired.
In this embodiment, the first detection image is a visible-light image, which may be captured by a camera mounted on an unmanned aerial vehicle (UAV).
Optionally, in an embodiment, the obtaining of the first detection image with the position of the flame target as the image center position includes:
acquiring a second detection image containing a flame target;
acquiring the central point position of a flame target in a second detection image to obtain a first central point position;
acquiring the central point position of a second detection image to obtain a second central point position;
detecting whether the position of the first center point is the same as the position of the second center point;
if the position of the first central point is the same as that of the second central point, determining the second detection image as the first detection image;
and if the first central point position is different from the second central point position, acquiring a first shooting angle corresponding to the second detection image, determining a rotation angle of the acquisition equipment based on distance information between the first central point position and the second central point position, determining a second shooting angle corresponding to the first detection image based on the first shooting angle and the rotation angle, and performing image acquisition on the flame target according to the second shooting angle to obtain the first detection image.
Exemplarily, a self coordinate system comprising a vertical Z axis and a horizontal X axis is set with the unmanned aerial vehicle as the coordinate origin, so that the UAV acquires detection images in real time during flight and performs flame target detection on them. When a flame target is detected (i.e., a second detection image is acquired), the pixel separation distances (P_z, P_x) between the first center point position and the second center point position in the vertical (Z-axis) and horizontal (X-axis) directions are acquired, together with the rotation angles (α_z1, α_x1) of the UAV camera relative to the Z axis and the X axis at the moment the second detection image was acquired; these rotation angles (α_z1, α_x1) constitute the first shooting angle corresponding to the second detection image.
Based on mapping linear-relation coefficients (k_z, k_x) generated from early-stage statistics, the angle by which the UAV camera should rotate (i.e., the rotation angle of the acquisition equipment) is determined. The first shooting angle and the rotation angle are superposed to obtain the second shooting angle. Keeping the other camera parameters unchanged, the camera is rotated to the second shooting angle and a detection image is acquired, yielding a first detection image with the position of the flame target as the image center position.
As a possible implementation, the second shooting angle (α_z2, α_x2) is determined by the following formula:

α_z2 = α_z1 + k_z · P_z
α_x2 = α_x1 + k_x · P_x

wherein (α_z1, α_x1) denotes the first shooting angle, (k_z, k_x) the mapping linear-relation coefficients, and (P_z, P_x) the pixel separation distances between the first and second center point positions.
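A hedged sketch of this angle-correction step; the coefficient values used below are arbitrary, and the exact form of the patent's formula (given only as an image) is an assumption based on the surrounding symbol definitions:

```python
def second_shooting_angle(first_angle, pixel_offset, coeff):
    """Superpose a linearly-mapped pixel offset on the first shooting angle.
    first_angle  = (alpha_z1, alpha_x1): camera angles when the second image was taken
    pixel_offset = (P_z, P_x): center-point separation in pixels
    coeff        = (k_z, k_x): mapping linear-relation coefficients (assumed values)
    """
    alpha_z1, alpha_x1 = first_angle
    p_z, p_x = pixel_offset
    k_z, k_x = coeff
    rotation = (k_z * p_z, k_x * p_x)  # angle the camera should rotate
    return (alpha_z1 + rotation[0], alpha_x1 + rotation[1])
```

The camera is then rotated to the returned angle, other parameters unchanged, before the first detection image is captured.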
In step S2, based on the first detection image, a focal length parameter corresponding to the flame target is determined.
It can be understood that the focal length parameter corresponding to the flame target is used to enable the flame target to have a better imaging effect, for example, a clear imaging image of the flame target can be obtained by performing focus shooting on the flame target based on the focal length parameter, so as to improve the identification precision of the flame target.
Optionally, in an embodiment, the implementation of determining the focal length parameter corresponding to the flame target based on the first detection image includes:
acquiring the image size of a first detection image;
determining the size of an anchor frame of a target anchor frame corresponding to the flame target based on the pre-trained flame target detection model and the first detection image;
determining the size ratio of the flame target in the first detection image based on the size of the anchor frame and the image size of the first detection image;
and determining a focal length parameter corresponding to the flame target based on the size ratio.
Illustratively, the focal length parameter corresponding to the flame target is determined by an inequality that compares the size proportion of the flame target against preset thresholds:

[Formula figure: threshold tests on P selecting 2x zoom, 4x zoom or 6x zoom]

wherein P represents the size proportion of the flame target in the first detection image, and 2x zoom, 4x zoom and 6x zoom represent the types of focal length parameter.
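A sketch of this selection logic; the threshold values below are assumptions, since the patent's actual cut-offs appear only in a figure:

```python
def select_zoom(anchor_w, anchor_h, img_w, img_h, thresholds=(0.25, 0.05)):
    """Compute the size proportion P of the flame target and pick one of the
    2x/4x/6x zoom levels named in the text; threshold values are assumptions."""
    p = (anchor_w * anchor_h) / (img_w * img_h)
    if p >= thresholds[0]:    # target already fills much of the frame: least zoom
        zoom = 2
    elif p >= thresholds[1]:
        zoom = 4
    else:                     # tiny target: zoom in the most
        zoom = 6
    return p, zoom
```

A smaller proportion leads to a larger zoom, so that the flame target is imaged at a usable size.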
Optionally, in an embodiment, the flame target detection model is trained by:
collecting videos corresponding to different scenes respectively;
collecting images containing flame targets from each video to obtain an effective data set;
carrying out data enhancement processing on the effective data set to obtain an expanded data set;
and performing iterative training on the initial flame target detection model based on the extended data set until the loss function value of the initial flame target detection model meets a preset training ending condition, and determining the initial flame target detection model at the end of training as the flame target detection model, wherein the model structure of the initial flame target detection model is a Yolov5x model structure.
In this embodiment, an initial flame target detection model with the Yolov5x model structure is established using deep learning and trained to detect flame targets, so that flame targets in the visible-light images captured by the camera can be identified.
Illustratively, over fifty hours of flame video are captured for each of a plurality of scenes, such that each flame video covers time periods such as midday, night, evening and dawn. The flame videos may contain scenes such as forests, grasslands, roads, squares and residential areas, and each may cover target acquisition distances from 10 m to 5000 m, the target acquisition distance being the distance between the flame target and the acquisition device. The times, places and target acquisition distances in the flame videos are approximately evenly distributed to ensure the diversity and balance of the data sources. The same set number of flame images is screened from each flame video, giving an effective data set of two hundred thousand flame images.
To improve the fitting ability of the later-stage model, enable it to adapt as far as possible to the color rendering of cameras from different manufacturers, and cope with a certain degree of damaged or distorted camera data, scene enhancement processing (data enhancement processing) may be performed on the effective data set.
The scene enhancement processing includes modes such as Gaussian blur, mean blur, maximum or minimum blur, median blur, bilateral blur, noise interference, fog effect, raindrop effect, grayscale conversion, color transformation, slight random distortion and brightness adjustment; the two hundred thousand flame images are proportionally expanded to more than eight hundred thousand flame images, yielding the extended data set. This enriches data diversity and improves the broad applicability of the model.
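A toy, pure-Python illustration of two of the listed enhancements (brightness adjustment and noise interference) applied to a row-major grayscale image; a real pipeline would use an image library, and the parameter ranges here are arbitrary:

```python
import random

def augment_brightness_noise(gray_image, seed=0):
    """Shift overall brightness and add small per-pixel noise, clamping to
    the valid 0-255 gray range; the shift/noise magnitudes are assumptions."""
    rng = random.Random(seed)
    delta = rng.randint(-30, 30)  # global brightness shift
    out = []
    for row in gray_image:
        out.append([min(255, max(0, px + delta + rng.randint(-5, 5)))
                    for px in row])
    return out
```

Applying several such transforms per source image is how the effective data set is proportionally expanded into the extended data set.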
The extended data set can be divided into a training set and a test set in a ratio of 8:2. The training set is input to the initial flame target detection model for training; with a batch size of 256 and more than 15000 epochs (an epoch being one pass over all samples in the training set), the model converges, i.e., the loss function value meets the preset training end condition. Tests on the test set show that the model performs well and can be used directly for flame target detection; the test results are as follows:
rate of accuracy Rate of accuracy Recall rate
94.9% 96.3% 95.6%
As a possible implementation, the flame type corresponding to the flame target is determined from the size proportion of the flame target in the first detection image by an inequality:

[Formula figure: threshold tests on P selecting the flame type]

wherein P represents the size proportion of the flame target in the first detection image, and large fire, medium fire, small fire and spark represent the flame types.
In step S3, a target region where the flame target is located is determined based on the focal distance parameter.
In this embodiment, with the position of the camera kept unchanged, the obtained focal length parameter is used to capture an image with a better imaging effect in which the flame target is at the center of the image, and the target area where the flame target is located is determined based on that captured image.
Optionally, in an embodiment, the determining, based on the focal distance parameter, the target area where the flame target is located includes:
based on the focal length parameter, randomly acquiring a set number of video frames for the flame target within a set time length to obtain a plurality of target video frames;
for each target video frame, detecting whether the target video frame is a flame image or not based on a trained flame target detection model, wherein the flame image is an image containing a flame target;
if the number of the detected flame images is larger than a set value, determining a target frame corresponding to the flame target in the flame images by using the flame target detection model and the flame images, and determining an area corresponding to the target frame as a target area where the flame target is located;
and if the number of the detected flame images is not more than a set value, acquiring the collected images of different areas until the first detection image with the position of the flame target as the image center position is acquired again.
Illustratively, keeping the position of the camera unchanged, video of the flame target is captured continuously for 2 seconds at the obtained focal length parameter, and 30 frames are randomly extracted from the captured video as target video frames. Flame target detection is performed on these 30 frames: if more than 80% of the target video frames successfully identify the flame target, the target frame corresponding to the flame target is acquired from any one of them; otherwise, the system returns to the normal scanning mode, i.e., the UAV continues moving and acquires images in real time for flame target detection, and only when a flame target is detected again does the UAV stop moving and begin acquiring a first detection image.
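The multi-frame confirmation step reduces to a simple vote; a minimal sketch using the 80% ratio from the example above:

```python
def confirm_flame(frame_is_flame, ratio=0.8):
    """Accept the target area only when more than `ratio` of the sampled
    target video frames were classified as flame images."""
    hits = sum(1 for flag in frame_is_flame if flag)
    return hits > ratio * len(frame_is_flame)
```

With 30 sampled frames, at least 25 positive detections are needed, since 24 is not strictly greater than 0.8 × 30.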
In step S4, the location of the fire point in the target area is determined using thermally induced infrared emitters.
It is understood that the fire and its surrounding area are high temperature areas, and the fire is the highest temperature point in the high temperature area, so the temperature characteristics of each point in the target area can be obtained by thermal induction, etc. to find the fire.
In the embodiment, the thermal induction infrared emitter is used for emitting heat source induction infrared rays for multiple times in the direction of the target area so as to detect a high-temperature area in the target area, and the distance between a fire point in the high-temperature area and the thermal induction infrared emitter is determined so as to determine the spatial position of the fire point.
Optionally, in an embodiment, the determining the location of the fire point in the target area using a thermally-induced infrared emitter includes:
acquiring a thermal sensing image of a target area by using a thermal sensing infrared emitter, wherein deflection angles of the thermal sensing infrared emitters corresponding to pixel points at different positions in the thermal sensing image are different;
acquiring target points meeting set conditions in the thermal induction image, wherein the set conditions are that the gray value of each pixel point is equal to the gray value of a target pixel point in the thermal induction image, and the gray value of each pixel point in a set peripheral region corresponding to the target pixel point is greater than the set gray value, wherein the target pixel point is the pixel point corresponding to the maximum gray value in the gray values of the pixel points in the thermal induction image;
carrying out laser ranging on the target point for multiple times to obtain multiple target distances between the target point and the thermal induction infrared transmitter;
based on the respective target distances, a location of a fire point in the target area is determined.
Exemplarily, the thermal-induction infrared emitter on the UAV emits infrared rays and synchronously receives the visible red light; linear detection is performed on the red light to find the end point position. If the end point position is not within the target area, the deflection angle of the emitter is adjusted according to the end point position, the center point position of the target area and the current angle of the emitted infrared rays (i.e., the current deflection angle of the emitter), so that the end point of the infrared rays falls inside the target area. The emitter then scans the framed target area transversely or longitudinally at its minimum deflection step and performs out-of-bounds detection on each position point in the area: if a point is out of bounds (i.e., the end point of the infrared ray lies outside the target area), the scan jumps directly to the next row/column position point in the target area; otherwise, the gray value of the position point and the corresponding deflection angle of the emitter are acquired and stored in memory, the deflection angle comprising the angles (α_x, α_y, α_z) of the emitter relative to the X, Y and Z axes of the self coordinate system.
A target pixel point with the maximum gray value in the thermal induction image is acquired; this pixel point corresponds to the position point with the highest temperature in the target area. If the gray values of the pixels in the surrounding area of the target pixel point (for example, a 4×4-pixel area) are all greater than 250, the target pixel point is determined as the target point, and high-sensitivity laser ranging is performed on it to obtain the target distance l = v × T / 2 between the target point and the thermal induction infrared transmitter, where v is the speed of the infrared ray and T is its round-trip time. Laser ranging is performed on the target point 3 consecutive times, and the mean of the obtained target distances is taken as the fire point distance. If no target point satisfying the set condition can be found in the thermal induction image, it is determined that there is no fire point in the target area, and the system returns to the environment scanning state, that is, the unmanned aerial vehicle continues flying and detecting flame targets in real time.
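A minimal sketch of the target-point test and the time-of-flight averaging described above. The neighborhood is parameterized because the exact placement of the "4×4-pixel area" around the target pixel is not fully specified; function names are illustrative:

```python
def find_target_point(gray, threshold=250, window=1):
    """Return the coordinates of the max-gray pixel if every other pixel in
    its surrounding window exceeds `threshold`, else None.
    `gray` is a 2-D list of gray values."""
    h, w = len(gray), len(gray[0])
    ti, tj = max(((i, j) for i in range(h) for j in range(w)),
                 key=lambda p: gray[p[0]][p[1]])
    for i in range(max(0, ti - window), min(h, ti + window + 1)):
        for j in range(max(0, tj - window), min(w, tj + window + 1)):
            if (i, j) != (ti, tj) and gray[i][j] <= threshold:
                return None  # a neighbor fails the set condition
    return ti, tj

def fire_distance(round_trip_times, v=3.0e8):
    """Average several time-of-flight ranges: l = v * T / 2 for each shot."""
    return sum(v * t / 2 for t in round_trip_times) / len(round_trip_times)
```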
Optionally, in an embodiment, the determining the location of the fire point in the target area based on the respective target distances includes:
determining the average value of each target distance, and taking the average value as the fire point distance;
acquiring a target deflection angle of the thermal induction infrared transmitter corresponding to the target point;
determining a first coordinate of the fire point under the self coordinate system based on the pre-established self coordinate system, the fire point distance and the target deflection angle;
converting the first coordinate into a second coordinate under a world coordinate system, and acquiring longitude and latitude data and altitude coordinate data corresponding to the second coordinate to obtain the position of a fire point in a target area;
the target deflection angle comprises a horizontal plane rotation included angle, a side plane rotation included angle and a vertical rotation included angle, the origin of the self coordinate system is the position of the thermal induction infrared transmitter, and the first coordinate is determined by the following formulas:

x_h = 0 + l·sin α_y·cos α_z

y_h = 0 + l·sin α_x·cos α_z

z_h = 0 + l·sin α_x·cos α_y

wherein x_h, y_h and z_h are the coordinate values of the first coordinate along the x, y and z axes of the self coordinate system, l represents the fire point distance, α_x represents the horizontal plane rotation included angle, α_y represents the side plane rotation included angle, and α_z represents the vertical rotation included angle.
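The three formulas can be exercised numerically with a short Python sketch; the angles are assumed to be in radians (the embodiment does not state units), and the function name is illustrative:

```python
import math

def first_coordinate(l, alpha_x, alpha_y, alpha_z):
    """First coordinate of the fire point in the transmitter's own frame
    (origin at the transmitter), applying the patent's formulas verbatim."""
    x_h = 0 + l * math.sin(alpha_y) * math.cos(alpha_z)
    y_h = 0 + l * math.sin(alpha_x) * math.cos(alpha_z)
    z_h = 0 + l * math.sin(alpha_x) * math.cos(alpha_y)
    return x_h, y_h, z_h
```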
In computer graphics, a coordinate conversion formula for converting a local coordinate system into a world coordinate system may be used to convert the first coordinate in the self coordinate system into the second coordinate in the world coordinate system. The conversion can be written as:

v_Global = R · v_Local + T

wherein v_Local denotes the coordinates in the local coordinate system (the self coordinate system), v_Global denotes the coordinates in the world coordinate system, R denotes the rotation matrix, T denotes the translation matrix, (p_x, p_y, p_z) denotes the origin of the local coordinate system, and (r_x, r_y, r_z) denotes the unit normal vector of the r-axis of the local coordinate system.
That is, for the same object, its coordinates in the world coordinate system are equal to the origin coordinates of the local coordinate system plus its local coordinates projected onto the unit vectors of the local axes (the r-axis can be understood as the x-axis of the local coordinate system).
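The local-to-world conversion can be sketched with plain Python lists, no external libraries; `rotation` is the 3×3 rotation matrix and `origin` the translation, and the function name is illustrative:

```python
def local_to_world(v_local, rotation, origin):
    """Convert a point from the local (self) coordinate system into the
    world coordinate system: v_global = R @ v_local + T."""
    return tuple(
        sum(rotation[i][k] * v_local[k] for k in range(3)) + origin[i]
        for i in range(3)
    )
```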
Finally, the second coordinate (x, y, z) is converted into longitude and latitude data (N, W) and altitude data H through the conversion formulas between the national 1980 (Xi'an 80) and WGS84 coordinate systems, yielding the fire point coordinates (N, W, H).
The fire point positioning method provided by this embodiment combines GIS (geographic information system) position information, CV (computer vision) target detection, sensor technology and a coordinate system conversion algorithm to achieve accurate 3D positioning of the fire point, providing technical support for subsequent intelligent operations.
By combining target detection from CV technology with the joint action of a heat source sensor (the thermal induction infrared transmitter) and video frames, flame target detection and judgment are achieved, the identification precision is greatly improved, and false alarms are reduced. After the fire point is determined, the heat source sensor performs return-distance positioning to measure the distance from the current infrared-emitting device; combined with GIS map information, the accurate 3D position of the fire point is finally calculated. Subsequently, the unmanned aerial vehicle can fly to the specified target position and extinguish the fire accurately on arrival, and it can automatically sample the 3D position of the fire point in real time during flight to update and correct its route. This solves the problem that, because the unmanned aerial vehicle cannot accurately locate the fire point, the airborne fire extinguishing equipment cannot accurately deliver extinguishing rounds, causing extinguishing failure and greatly increasing the possibility of the fire spreading out of control.
As shown in fig. 2, an embodiment of the present invention provides a fire point positioning apparatus, including:
an obtaining module 20, configured to obtain a first detection image with a position of the flame target as an image center position;
the first processing module 30 is configured to determine a focal length parameter corresponding to the flame target based on the first detection image;
the second processing module 40 is used for determining a target area where the flame target is located based on the focal length parameter;
a locating module 50 for determining the location of the fire point in the target area using thermally-induced infrared emitters.
Optionally, the first processing module 30 is specifically configured to obtain an image size of the first detection image; determining the size of an anchor frame of a target anchor frame corresponding to the flame target based on the pre-trained flame target detection model and the first detection image; determining the size ratio of the flame target in the first detection image based on the size of the anchor frame and the image size of the first detection image; and determining a focal length parameter corresponding to the flame target based on the size ratio.
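As an illustration of the size-ratio step above, the sketch below computes the fraction of the frame occupied by the anchor box and maps it to a focal setting through a threshold table. The table values and function names are invented for illustration only, since the embodiment does not disclose the exact ratio-to-focal mapping:

```python
def size_ratio(anchor_wh, image_wh):
    """Fraction of the detection image covered by the flame's anchor box."""
    return (anchor_wh[0] * anchor_wh[1]) / (image_wh[0] * image_wh[1])

def focal_parameter(ratio, table=((0.01, 8.0), (0.05, 4.0), (1.0, 2.0))):
    """Map the size ratio to a zoom factor: a small flame in the frame gets
    a longer focal length. The (ratio-limit, focal) table is illustrative."""
    for limit, focal in table:
        if ratio <= limit:
            return focal
    return table[-1][1]
```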
Optionally, the first processing module 30 is further configured to acquire videos corresponding to different scenes; collecting images containing flame targets from each video to obtain an effective data set; performing data enhancement processing on the effective data set to obtain an expanded data set; and performing iterative training on the initial flame target detection model based on the extended data set until the loss function value of the initial flame target detection model meets a preset training ending condition, and determining the initial flame target detection model at the end of training as the flame target detection model, wherein the model structure of the initial flame target detection model is a Yolov5x model structure.
Optionally, the positioning module 50 is specifically configured to: acquire a thermal induction image of the target area by using a thermal induction infrared transmitter, where the deflection angles of the transmitter corresponding to pixel points at different positions in the thermal induction image differ; acquire a target point satisfying a set condition in the thermal induction image, wherein the set condition is that the gray value of the point is equal to the gray value of a target pixel point in the thermal induction image and the gray value of each pixel point in a set surrounding region corresponding to the target pixel point is greater than a set gray value, the target pixel point being the pixel point corresponding to the maximum gray value in the thermal induction image; perform laser ranging on the target point multiple times to obtain multiple target distances between the target point and the transmitter; and determine the position of the fire point in the target area based on the target distances.
Optionally, the positioning module 50 is further configured to: determine the mean value of the target distances and take the mean value as the fire point distance; acquire the target deflection angle of the thermal induction infrared transmitter corresponding to the target point; determine a first coordinate of the fire point in a pre-established self coordinate system based on the self coordinate system, the fire point distance and the target deflection angle; and convert the first coordinate into a second coordinate in the world coordinate system, and acquire the longitude and latitude data and altitude data corresponding to the second coordinate to obtain the position of the fire point in the target area. The target deflection angle comprises a horizontal plane rotation included angle, a side plane rotation included angle and a vertical rotation included angle, the origin of the self coordinate system is the position of the thermal induction infrared transmitter, and the first coordinate is determined by the following formulas:

x_h = 0 + l·sin α_y·cos α_z

y_h = 0 + l·sin α_x·cos α_z

z_h = 0 + l·sin α_x·cos α_y

wherein x_h, y_h and z_h are the coordinate values of the first coordinate along the x, y and z axes of the self coordinate system, l represents the fire point distance, α_x represents the horizontal plane rotation included angle, α_y represents the side plane rotation included angle, and α_z represents the vertical rotation included angle.
Optionally, the second processing module 40 is specifically configured to randomly acquire a set number of video frames for the flame target within a set time length based on the focal length parameter, so as to obtain a plurality of target video frames; for each target video frame, detecting whether the target video frame is a flame image or not based on the trained flame target detection model, wherein the flame image is an image containing a flame target; if the number of the detected flame images is larger than a set value, determining a target frame corresponding to the flame target in the flame images by using the flame target detection model and the flame images, and determining an area corresponding to the target frame as a target area where the flame target is located; and if the number of the detected flame images is not more than a set value, acquiring the collected images of different areas until the first detection image with the position of the flame target as the image center position is acquired again.
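The frame-sampling check in the paragraph above amounts to a majority-style vote over sampled frames; a minimal sketch, where the detector callback and threshold are placeholders for the trained flame target detection model and the embodiment's set value:

```python
def confirm_flame(frames, detect, min_count):
    """Run the detector on each sampled video frame; the flame target is
    confirmed only if the number of flame images exceeds `min_count`."""
    flame_images = [f for f in frames if detect(f)]
    return len(flame_images) > min_count, flame_images
```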
Optionally, the obtaining module 20 is specifically configured to obtain a second detection image containing a flame target; acquiring the central point position of a flame target in a second detection image to obtain a first central point position; acquiring the central point position of a second detection image to obtain a second central point position; detecting whether the position of the first center point is the same as the position of the second center point; if the position of the first central point is the same as that of the second central point, determining the second detection image as the first detection image; and if the first central point position is different from the second central point position, acquiring a first shooting angle corresponding to the second detection image, determining a rotation angle of the acquisition equipment based on distance information between the first central point position and the second central point position, determining a second shooting angle corresponding to the first detection image based on the first shooting angle and the rotation angle, and performing image acquisition on the flame target according to the second shooting angle to obtain the first detection image.
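The recentering step above can be sketched as converting the pixel offset between the two center point positions into a pan/tilt rotation. The linear `pixels_per_degree` calibration is an assumption for illustration, as the embodiment only states that the rotation angle is determined from the distance information:

```python
def rotation_angle(flame_center, image_center, pixels_per_degree):
    """Pan/tilt adjustment (in degrees) that would move the flame's center
    point onto the image center, under a linear pixel-to-angle calibration."""
    dx = flame_center[0] - image_center[0]
    dy = flame_center[1] - image_center[1]
    return dx / pixels_per_degree, dy / pixels_per_degree
```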
An embodiment of the present invention provides a computer-readable storage medium, which stores instructions that, when executed on a terminal device, cause the terminal device to perform the steps of the fire point locating method in any one of the above embodiments.
As shown in fig. 3, an electronic device 500 according to an embodiment of the present invention includes a memory 510, a processor 520, and a program 530 stored in the memory 510 and running on the processor 520, where the processor 520 executes the program 530 to implement the steps of the fire point locating method according to any one of the embodiments.
The electronic device 500 may be a computer, a mobile phone, etc.; correspondingly, the program 530 is computer software or a mobile phone app. For the parameters and steps of the electronic device 500 according to the present invention, reference may be made to the embodiments of the fire point locating method above, which are not repeated here.
As will be appreciated by one skilled in the art, the present invention may be embodied as a system, method or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining hardware and software, which may be referred to herein generally as a "circuit," "module" or "system." Furthermore, in some embodiments, the invention may also take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied therein.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are exemplary and not to be construed as limiting the present invention, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A fire point locating method, comprising:
acquiring a first detection image taking the position of the flame target as the image center position;
determining a focal length parameter corresponding to the flame target based on the first detection image;
determining a target area where the flame target is located based on the focal length parameter;
determining a location of a fire point in the target area using a thermally-induced infrared emitter.
2. The method of claim 1, wherein determining the focal length parameter corresponding to the flame target based on the first detected image comprises:
acquiring the image size of the first detection image;
determining the size of an anchor frame of a target anchor frame corresponding to the flame target based on a pre-trained flame target detection model and the first detection image;
determining a size ratio of the flame target in the first detection image based on the anchor frame size and an image size of the first detection image;
and determining a focal length parameter corresponding to the flame target based on the size ratio.
3. The method of claim 2, wherein the flame target detection model is trained by:
collecting videos corresponding to different scenes respectively;
collecting images containing flame targets from each video to obtain an effective data set;
performing data enhancement processing on the effective data set to obtain an expanded data set;
and performing iterative training on an initial flame target detection model based on the extended data set until a loss function value of the initial flame target detection model meets a preset training ending condition, and determining the initial flame target detection model at the end of training as the flame target detection model, wherein the model structure of the initial flame target detection model is a Yolov5x model structure.
4. The method of claim 1, wherein said determining a location of a fire point in the target area using a thermally-induced infrared emitter comprises:
acquiring a thermal induction image of the target area by using a thermal induction infrared emitter, wherein deflection angles of the thermal induction infrared emitters corresponding to pixel points at different positions in the thermal induction image are different;
acquiring a target point satisfying a set condition in the thermal induction image, wherein the set condition is that the gray value of the point is equal to the gray value of a target pixel point in the thermal induction image and the gray value of each pixel point in a set surrounding area corresponding to the target pixel point is greater than a set gray value, the target pixel point being the pixel point corresponding to the maximum gray value among the gray values of the pixel points in the thermal induction image;
carrying out laser ranging on the target point for multiple times to obtain multiple target distances between the target point and the thermal induction infrared transmitter;
based on each of the target distances, a location of a fire point in the target area is determined.
5. The method of claim 4, wherein said determining a location of a fire point in said target area based on each of said target distances comprises:
determining the mean value of each target distance, and taking the mean value as the fire point distance;
acquiring a target deflection angle of the thermal induction infrared transmitter corresponding to the target point;
determining a first coordinate of the fire point under a self coordinate system based on the pre-established self coordinate system, the fire point distance and the target deflection angle;
converting the first coordinate into a second coordinate in a world coordinate system, and acquiring longitude and latitude data and altitude coordinate data corresponding to the second coordinate to obtain the position of a fire point in the target area;
the target deflection angle comprises a horizontal plane rotation included angle, a side plane rotation included angle and a vertical rotation included angle, the origin of the self coordinate system is the position of the thermal induction infrared transmitter, and the first coordinate is determined by the following formulas:

x_h = 0 + l·sin α_y·cos α_z

y_h = 0 + l·sin α_x·cos α_z

z_h = 0 + l·sin α_x·cos α_y

wherein x_h, y_h and z_h are the coordinate values of the first coordinate along the x, y and z axes of the self coordinate system, l represents the fire point distance, α_x represents the horizontal plane rotation included angle, α_y represents the side plane rotation included angle, and α_z represents the vertical rotation included angle.
6. The method of any one of claims 1 to 5, wherein determining a target area in which a flame target is located based on the focal distance parameter comprises:
based on the focal length parameter, randomly acquiring a set number of video frames for the flame target within a set time length to obtain a plurality of target video frames;
for each target video frame, detecting whether the target video frame is a flame image or not based on a trained flame target detection model, wherein the flame image is an image containing a flame target;
if the number of the detected flame images is larger than a set value, determining a target frame corresponding to a flame target in the flame images by using the flame target detection model and the flame images, and determining an area corresponding to the target frame as a target area where the flame target is located;
and if the number of the detected flame images is not more than the set value, acquiring the collected images of different areas until the first detection image with the position of the flame target as the image center position is acquired again.
7. The method according to any one of claims 1 to 5, wherein the acquiring a first detection image with a position of the flame target as an image center position includes:
acquiring a second detection image containing a flame target;
acquiring the central point position of the flame target in the second detection image to obtain a first central point position;
acquiring the central point position of the second detection image to obtain a second central point position;
detecting whether the first center point position and the second center point position are the same;
if the first center point position is the same as the second center point position, determining the second detection image as the first detection image;
if the first central point position is different from the second central point position, acquiring a first shooting angle corresponding to the second detection image, determining a rotation angle of the acquisition equipment based on distance information between the first central point position and the second central point position, determining a second shooting angle corresponding to the first detection image based on the first shooting angle and the rotation angle, and performing image acquisition on the flame target according to the second shooting angle to obtain the first detection image.
8. A fire point locating device, comprising:
the acquisition module is used for acquiring a first detection image taking the position of the flame target as an image center position;
the first processing module is used for determining a focal length parameter corresponding to the flame target based on the first detection image;
the second processing module is used for determining a target area where the flame target is located based on the focal length parameter;
and the positioning module is used for determining the position of the fire point in the target area by utilizing a thermal induction infrared transmitter.
9. A computer-readable storage medium having stored therein instructions which, when run on a terminal device, cause the terminal device to perform the steps of the fire localization method according to any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor and a program stored on the memory and running on the processor, wherein the processor when executing the program performs the steps of the fire localization method of any of claims 1-7.
CN202210692336.3A 2022-06-17 2022-06-17 Fire point positioning method and device, electronic equipment and storage medium Pending CN115205714A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210692336.3A CN115205714A (en) 2022-06-17 2022-06-17 Fire point positioning method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115205714A true CN115205714A (en) 2022-10-18

Family

ID=83576252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210692336.3A Pending CN115205714A (en) 2022-06-17 2022-06-17 Fire point positioning method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115205714A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237199A (en) * 2023-11-15 2023-12-15 中国科学院长春光学精密机械与物理研究所 Method for generating simulation GMTI radar image based on unmanned aerial vehicle aerial photography
CN117237199B (en) * 2023-11-15 2024-01-26 中国科学院长春光学精密机械与物理研究所 Method for generating simulation GMTI radar image based on unmanned aerial vehicle aerial photography

Similar Documents

Publication Publication Date Title
CN107707810B (en) Thermal infrared imager-based heat source tracking method, device and system
US10341647B2 (en) Method for calibrating a camera and calibration system
CN110142785A (en) A kind of crusing robot visual servo method based on target detection
CN111982291A (en) Fire point positioning method, device and system based on unmanned aerial vehicle
CN108416285A (en) Rifle ball linkage surveillance method, apparatus and computer readable storage medium
EP3620774B1 (en) Method and apparatus for monitoring plant health state
CN107886670A (en) Forest zone initial fire disaster quickly identifies and localization method, storage medium, electronic equipment
US20100318201A1 (en) Method and system for detecting effect of lighting device
CN110360877B (en) Intelligent auxiliary system and method for shooting training
CN107295230A (en) A kind of miniature object movement detection device and method based on thermal infrared imager
CN109816702A (en) A kind of multiple target tracking device and method
CN113299035A (en) Fire identification method and system based on artificial intelligence and binocular vision
EP3761629B1 (en) Information processing device, autonomous mobile body, information processing method, and program
CN115205714A (en) Fire point positioning method and device, electronic equipment and storage medium
CN114495416A (en) Fire monitoring method and device based on unmanned aerial vehicle and terminal equipment
CN103971479B (en) Forest fires localization method based on camera calibration technology
CN112257554A (en) Forest fire recognition method, system, program and storage medium
US10713527B2 (en) Optics based multi-dimensional target and multiple object detection and tracking method
CN113762161A (en) Intelligent obstacle monitoring method and system
CN109460077B (en) Automatic tracking method, automatic tracking equipment and automatic tracking system
CN114998771B (en) Display method and system for enhancing visual field of aircraft, aircraft and storage medium
CN116755104A (en) Method and equipment for positioning object based on three points and two lines
US20230415786A1 (en) System and method for localization of anomalous phenomena in assets
CN112556655B (en) Forestry fire prevention monocular positioning method and system
US20230045287A1 (en) A method and system for generating a colored tridimensional map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination