CN116206223A - Fire detection method and system based on unmanned aerial vehicle edge calculation - Google Patents

Fire detection method and system based on unmanned aerial vehicle edge calculation

Info

Publication number
CN116206223A
CN116206223A (application CN202310148429.4A)
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
target
fire
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310148429.4A
Other languages
Chinese (zh)
Inventor
赵冬冬
柴晓晰
陈赢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN202310148429.4A priority Critical patent/CN116206223A/en
Publication of CN116206223A publication Critical patent/CN116206223A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/52Scale-space analysis, e.g. wavelet analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention provides a fire detection method and system based on unmanned aerial vehicle edge calculation, belonging to the technical field of target detection and comprising the following steps: collecting image data of a target detection area and flight state data of the unmanned aerial vehicle; and inputting the target detection area image data and the unmanned aerial vehicle flight state data into a pre-trained fire detection model to obtain a fire ignition point detection result. The fire detection model is obtained by constructing an edge calculation deep learning model framework based on the Yolov3 algorithm and training the framework with target detection area image samples and classification labels. According to the invention, the unmanned aerial vehicle shoots detection images of the target area, and the Yolov3 algorithm performs deep learning detection at the edge, so that the position of the fire ignition point can be detected rapidly and accurately; the method is both intelligent and efficient.

Description

Fire detection method and system based on unmanned aerial vehicle edge calculation
Technical Field
The invention relates to the technical field of target detection, in particular to a fire detection method and system based on unmanned aerial vehicle edge calculation.
Background
In fire control, finding the ignition point in time at an early stage is critical, especially in complex environments such as forests. If the position of the ignition point can be detected quickly and in time, the fire can be controlled at the budding stage, which is of great significance for protecting forest economy and property.
At present, in forest fire prevention and control, low-altitude unmanned aerial vehicles are commonly used to inspect forests, avoiding the inconvenience of manual inspection in a forest environment. Meanwhile, with the rapid development of deep learning in image recognition, many methods apply deep learning to image target detection. There are currently two main deep-learning-based forest fire detection approaches. The first uses manual features of flame or smoke, such as color, texture and shape, and then trains a BP neural network or SVM; the trained model detects the video frame by frame to determine whether smoke or flame is present. The second adopts CNN features and deep learning: the CNN feature extraction layer comprises a plurality of shared convolution kernels, can easily process high-dimensional data, and extracts features automatically without additional manual features. However, because they depend too much on manually acquired image features, both approaches suffer from low processing efficiency and complex computation, and are of limited practicality in forest fire detection scenarios with high requirements on real-time performance and accuracy.
Therefore, in order to meet the requirement of detecting the fire point of forest fires, a new fire detection method is needed.
Disclosure of Invention
The invention provides a fire detection method and a fire detection system based on unmanned aerial vehicle edge calculation, which are used for solving the defects of complex detection algorithm and low processing efficiency in forest fire ignition point detection in the prior art.
In a first aspect, the present invention provides a fire detection method based on unmanned aerial vehicle edge calculation, including:
collecting image data of a target detection area and flight state data of the unmanned aerial vehicle;
inputting the target detection area image data and the unmanned aerial vehicle flight state data into a pre-trained fire detection model to obtain a fire ignition point detection result;
the fire detection model is obtained by constructing an edge calculation deep learning model framework based on a Yolov3 algorithm and training the edge calculation deep learning model framework by adopting a target detection area image sample and a classification label.
According to the fire detection method based on unmanned aerial vehicle edge calculation, the collecting of target detection area image data and unmanned aerial vehicle flight state data comprises the following steps:
collecting a plurality of real-time images to be detected of a target detection area;
and acquiring three-dimensional space position information of the unmanned aerial vehicle when shooting images.
According to the fire detection method based on unmanned aerial vehicle edge calculation, the fire detection model is obtained through the following steps:
acquiring an image sample of the target detection area;
constructing a Yolov3 model comprising a Backbone part, a Neck part and a Yolo Head part, and acquiring the classification label;
inputting the target detection area image sample to the Backbone part for feature extraction and downsampling to obtain an image downsampling feature map;
inputting the image downsampling feature images to the Neck part for upsampling to obtain feature images with different scales;
and inputting the feature images with different scales into a Yolo Head part to calculate loss, so as to obtain the fire detection model.
According to the fire detection method based on unmanned aerial vehicle edge calculation provided by the invention, the acquisition of the target detection area image sample comprises the following steps:
and acquiring a plurality of infrared image samples of the target detection area at different shooting angles, with and without fire points, at different shooting heights and with different degrees of occlusion.
According to the fire detection method based on unmanned aerial vehicle edge calculation provided by the invention, the inputting of the target detection area image sample into the Backbone part for feature extraction and downsampling to obtain an image downsampling feature map comprises the following steps:
determining that the Backbone part is a Darknet-53 deep convolutional neural network comprising 52 convolutional layers and 1 fully-connected layer, wherein the Darknet-53 deep convolutional neural network further comprises a plurality of residual modules, and convolutional layers with a preset convolution kernel size and a preset convolution stride are arranged between the residual modules;
and inputting the target detection area image sample into the Darknet-53 deep convolutional neural network to perform multi-scale downsampling, and obtaining the image downsampling characteristic map.
According to the fire detection method based on unmanned aerial vehicle edge calculation provided by the invention, the image downsampling feature map is input to the Neck part for upsampling to obtain feature maps with different scales, and the method comprises the following steps:
determining the Neck part as a path aggregation network;
inputting the image downsampling feature images into the path aggregation network to respectively obtain a first scale feature image, a second scale feature image and a third scale feature image, wherein the depths of the first scale feature image, the second scale feature image and the third scale feature image are the same, and the side lengths of the first scale feature image, the second scale feature image and the third scale feature image are sequentially doubled.
According to the fire detection method based on unmanned aerial vehicle edge calculation, the method for obtaining the fire detection model by inputting the feature images with different scales into a Yolo Head part for loss calculation comprises the following steps:
calculating the target confidence loss of all samples in the feature maps of different scales by adopting a binary cross entropy loss, to obtain the probability that a target exists in the target detection frame;
calculating the target class loss of the positive samples in the feature maps of different scales by adopting a binary cross entropy loss, to obtain the probability that each target class exists in the target detection frame;
calculating the target localization loss of the positive samples in the feature maps of different scales by adopting a sum-of-squared-errors loss, to obtain the localization agreement between the target in the detection frame and the ground truth;
and correcting the Yolov3 model by integrating the probability that a target exists in the detection frame, the probability that each target class exists in the detection frame, and the localization agreement between target and ground truth, to obtain the fire detection model.
In a second aspect, the present invention also provides a fire detection system based on unmanned aerial vehicle edge calculation, including:
the acquisition module is used for acquiring image data of a target detection area and flight state data of the unmanned aerial vehicle;
the detection module is used for inputting the target detection area image data and the unmanned aerial vehicle flight state data into a pre-trained fire detection model to obtain a fire ignition point detection result;
the fire detection model is obtained by constructing an edge calculation deep learning model framework based on a Yolov3 algorithm and training the edge calculation deep learning model framework by adopting a target detection area image sample and a classification label.
In a third aspect, the present invention also provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the fire detection method based on unmanned aerial vehicle edge calculation as described in any one of the above when executing the program.
In a fourth aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a fire detection method based on unmanned aerial vehicle edge calculation as described in any of the above.
In a fifth aspect, the invention also provides a computer program product comprising a computer program which, when executed by a processor, implements a fire detection method based on unmanned aerial vehicle edge calculation as described in any of the above.
According to the fire detection method and system based on unmanned aerial vehicle edge calculation provided by the invention, the unmanned aerial vehicle shoots detection images of the target area, and the Yolov3 algorithm performs deep learning detection at the edge, so that the position of the fire ignition point can be detected rapidly and accurately; the method is both intelligent and efficient.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a fire detection method based on unmanned aerial vehicle edge calculation;
FIG. 2 is a second flow chart of the fire detection method based on unmanned aerial vehicle edge calculation provided by the invention;
FIG. 3 is a block diagram of Darknet-53 provided by the present invention;
FIG. 4 is a schematic view of an infrared image containing a fire source provided by the present invention;
FIG. 5 is a schematic view of an infrared image provided by the present invention that does not contain a fire source;
FIG. 6 is a schematic diagram of the target detection effect provided by the present invention;
fig. 7 is a schematic structural diagram of a fire detection system based on unmanned aerial vehicle edge calculation provided by the invention;
fig. 8 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to overcome the defects of the existing forest fire ignition point detection technology, an unmanned aerial vehicle is introduced to collect images of the target detection area. However, because of the vehicle's payload constraints, the performance of its onboard processor is weaker than that of a ground platform, so the computational load it carries cannot be too large, and the computing capability available to a deep learning algorithm on the edge device is limited. The embodiments of the invention therefore coordinate the edge device's computing capability with the deep learning workload, achieving deep learning target detection while ensuring the edge device retains sufficient computing headroom, and detecting whether a fire ignition point exists in the pictures shot by the unmanned aerial vehicle's remote sensing camera. This forms a forest fire detection system with the unmanned aerial vehicle edge device as its computing center.
Fig. 1 is one of flow diagrams of a fire detection method based on unmanned aerial vehicle edge calculation according to an embodiment of the present invention, as shown in fig. 1, including:
step 100: collecting image data of a target detection area and flight state data of the unmanned aerial vehicle;
step 200: inputting the target detection area image data and the unmanned aerial vehicle flight state data into a pre-trained fire detection model to obtain a fire ignition point detection result;
the fire detection model is obtained by constructing an edge calculation deep learning model framework based on a Yolov3 algorithm and training the edge calculation deep learning model framework by adopting a target detection area image sample and a classification label.
Specifically, in order to enable the unmanned aerial vehicle to autonomously perceive and detect forest fires, the embodiment of the invention constructs an edge calculation deep learning model for detecting forest fire ignition points, namely the fire detection model. The fire detection model adopts the Yolov3 algorithm: as shown in fig. 2, target area images shot in real time by the unmanned aerial vehicle's remote sensing camera are input and detected by the Yolov3 target detection algorithm. The Yolov3 model comprises a Backbone part based on the Darknet-53 convolutional neural network structure, a Neck part based on the path aggregation network PANet, and three Yolo Head parts, and produces the final detection output, namely the ignition point position information.
The fire detection model is obtained by training the constructed base model on target detection area image samples and the classification labels obtained by annotating existing samples. The target detection area image data to be processed, combined with the unmanned aerial vehicle flight state data, is input into the trained fire detection model, which outputs the final fire ignition point detection result (a minimal sketch of this detection flow follows).
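For illustration only, the detection stage described above can be organized as a short loop on the onboard computer. The following Python sketch assumes a PyTorch deployment; the helper names capture_frame, read_uav_pose and georeference, and the model file name fire_yolov3.pt, are hypothetical placeholders rather than part of the invention:

import torch

# Minimal sketch of the onboard detection loop (assumed deployment;
# capture_frame / read_uav_pose / georeference are hypothetical helpers).
model = torch.jit.load("fire_yolov3.pt").eval()   # pre-trained fire detection model

def detect_once(capture_frame, read_uav_pose, georeference):
    image = capture_frame()        # infrared frame from the gimbal camera, H x W x 3
    pose = read_uav_pose()         # UAV three-dimensional position at shooting time
    x = torch.from_numpy(image).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        boxes = model(x)           # decoded ignition point detection boxes
    # combine the image-plane detections with the flight state data
    # to correct and localize the detected ignition points
    return [georeference(box, pose) for box in boxes]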
According to the invention, the unmanned aerial vehicle shoots detection images of the target area, and the Yolov3 algorithm performs deep learning detection at the edge, so that the position of the fire ignition point can be detected rapidly and accurately; the method is both intelligent and efficient.
Based on the above embodiment, the collecting the image data of the target detection area and the flight status data of the unmanned aerial vehicle includes:
collecting a plurality of real-time images to be detected of a target detection area;
and acquiring three-dimensional space position information of the unmanned aerial vehicle when shooting images.
Specifically, for the target data to be processed: in the detection stage, image data shot in real time by the remote sensing camera on the unmanned aerial vehicle's gimbal is received, and at the same time the vehicle's real-time flight state data, namely its three-dimensional spatial position when shooting the image data, is collected; together these serve as the input data of the subsequent model.
It can be understood that the embodiment of the invention combines the unmanned aerial vehicle's position data with the information of the acquired image data, so that the detection result is corrected and localized by the position information, yielding a more accurate detection result.
Based on the above embodiment, the fire detection model is obtained by:
acquiring an image sample of the target detection area;
constructing a Yolov3 model comprising a Backbone part, a Neck part and a Yolo Head part, and acquiring the classification label;
inputting the target detection area image sample to the Backbone part for feature extraction and downsampling to obtain an image downsampling feature map;
inputting the image downsampling feature images to the Neck part for upsampling to obtain feature images with different scales;
and inputting the feature images with different scales into a Yolo Head part to calculate loss, so as to obtain the fire detection model.
Wherein, the obtaining the target detection area image sample includes:
and acquiring a plurality of infrared image samples of the target detection area at different shooting angles, with and without fire points, at different shooting heights and with different degrees of occlusion.
The step of inputting the target detection area image sample to the Backbone part for feature extraction and downsampling to obtain an image downsampling feature map comprises the following steps:
determining that the Backbone part is a Darknet-53 deep convolutional neural network comprising 52 convolutional layers and 1 fully-connected layer, wherein the Darknet-53 deep convolutional neural network further comprises a plurality of residual modules, and convolutional layers with a preset convolution kernel size and a preset convolution stride are arranged between the residual modules;
and inputting the target detection area image sample into the Darknet-53 deep convolutional neural network to perform multi-scale downsampling, and obtaining the image downsampling characteristic map.
The step of inputting the image downsampling feature map to the Neck part for upsampling to obtain feature maps of different scales comprises the following steps:
determining the Neck part as a path aggregation network;
inputting the image downsampling feature images into the path aggregation network to respectively obtain a first scale feature image, a second scale feature image and a third scale feature image, wherein the depths of the first scale feature image, the second scale feature image and the third scale feature image are the same, and the side lengths of the first scale feature image, the second scale feature image and the third scale feature image are sequentially doubled.
The step of inputting the feature maps with different scales to a Yolo Head part for loss calculation to obtain the fire detection model comprises the following steps:
calculating the target confidence loss of all samples in the feature maps of different scales by adopting a binary cross entropy loss, to obtain the probability that a target exists in the target detection frame;
calculating the target class loss of the positive samples in the feature maps of different scales by adopting a binary cross entropy loss, to obtain the probability that each target class exists in the target detection frame;
calculating the target localization loss of the positive samples in the feature maps of different scales by adopting a sum-of-squared-errors loss, to obtain the localization agreement between the target in the detection frame and the ground truth;
and correcting the Yolov3 model by integrating the probability that a target exists in the detection frame, the probability that each target class exists in the detection frame, and the localization agreement between target and ground truth, to obtain the fire detection model.
Specifically, the fire detection model training of the embodiment of the invention comprises three main steps.
To make the model output more accurate, the embodiment of the invention collects in the forest-fire target detection area a plurality of infrared image samples at different shooting angles, with and without fire points, at different shooting heights and with different degrees of occlusion, covering as far as possible the various situations of an actual fire detection scene, so that the training result is more comprehensive and accurate.
In the first step, after a certain number of target detection area image samples are obtained, they are input into the Yolov3 model; the data first passes through the Backbone part of the Yolov3 model, which performs feature extraction and multi-scale downsampling on the image to obtain the downsampled feature maps of the image.
The Backbone part of the Yolov3 model employs the Darknet-53 deep convolutional neural network, which decomposes the image by downsampling it multiple times to obtain feature maps of different scales. The structure of Darknet-53 is shown in fig. 3; the numeral 53 denotes the total of 52 convolutional layers plus the final fully-connected (Connected) layer of the whole Backbone. Darknet-53 stacks a plurality of residual modules (Res units), each comprising two convolutional layers, which perform additional computation on the image features and add the result back to the input features so as to preserve the high-level features of the image. The residual modules are separated by convolutional layers with kernel_size of 3×3 and stride of 2, whose main function is downsampling, so as to capture complex features at different scales in the image. The stride is the sampling interval of the convolution kernel as it moves across the input feature map; it is set to reduce the number of input parameters and the amount of computation.
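For concreteness, a minimal PyTorch sketch of one such residual module and of the kernel_size 3×3, stride 2 downsampling convolution separating the modules is given below; the channel widths are illustrative assumptions, not values fixed by the invention:

import torch
import torch.nn as nn

class ConvBNAct(nn.Module):
    # Convolution + batch normalization + LeakyReLU, the basic Darknet-53 unit.
    def __init__(self, c_in, c_out, k, s):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, stride=s, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ResUnit(nn.Module):
    # Residual module with two convolutional layers (1x1 then 3x3); the result
    # is added back to the input features to preserve the image's features.
    def __init__(self, c):
        super().__init__()
        self.conv1 = ConvBNAct(c, c // 2, k=1, s=1)
        self.conv2 = ConvBNAct(c // 2, c, k=3, s=1)

    def forward(self, x):
        return x + self.conv2(self.conv1(x))

# Residual stages are separated by a kernel_size=3, stride=2 convolution whose
# role is downsampling: it halves the spatial side length of the feature map.
downsample = ConvBNAct(128, 256, k=3, s=2)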
In the second step, the downsampled feature maps output by the Backbone part are connected to the Neck part of Yolov3, which outputs a plurality of feature maps with different scales and different receptive fields.
The Neck part mainly consists of a PANet (path aggregation network), which synthesizes features of different scales by constructing a multi-path feature map aggregation network, thereby improving detection precision. PANet is divided into two parts: a feature pyramid (Feature Pyramid) and a path aggregation module (Path Aggregation Module). The feature pyramid extracts features of different sizes; the features of each layer are independent, and feature size is reduced from top to bottom by downsampling layers. The path aggregation module synthesizes features of different sizes from bottom to top through convolution and upsampling layers, and improves the expressive power of the features through residual convolution to obtain finer feature information.
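The two aggregation directions can be sketched as follows; this is a schematic assumption about how the fusion is wired (uniform channel width, nearest-neighbor upsampling), not the invention's exact layer list:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PANetNeck(nn.Module):
    # Schematic PANet: a top-down feature pyramid followed by a bottom-up
    # path aggregation pass; channel width is an illustrative assumption.
    def __init__(self, c3, c4, c5, width=256):
        super().__init__()
        self.l3 = nn.Conv2d(c3, width, 1)  # lateral 1x1 convs align depths
        self.l4 = nn.Conv2d(c4, width, 1)
        self.l5 = nn.Conv2d(c5, width, 1)
        self.d3 = nn.Conv2d(width, width, 3, stride=2, padding=1)  # bottom-up
        self.d4 = nn.Conv2d(width, width, 3, stride=2, padding=1)

    def forward(self, x3, x4, x5):  # backbone maps, shallow/large to deep/small
        # top-down: upsample deeper features and merge them into shallower ones
        p5 = self.l5(x5)
        p4 = self.l4(x4) + F.interpolate(p5, scale_factor=2, mode="nearest")
        p3 = self.l3(x3) + F.interpolate(p4, scale_factor=2, mode="nearest")
        # bottom-up: re-inject fine localization cues into the deeper maps
        n3 = p3
        n4 = p4 + self.d3(n3)
        n5 = p5 + self.d4(n4)
        return n3, n4, n5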
In the embodiment of the invention, Yolov3 outputs 3 feature maps of different scales, predicts object bounding boxes on each of them by feature fusion, and thus detects targets of different sizes with multiple scales; finer grid cells (the grid cells record the positions of objects) can detect finer objects. The 3 multi-scale feature maps are:
(1) First scale feature map y1: the scale-1 feature map is convolved to directly obtain bounding box information;
(2) Second scale feature map y2: the convolution output of scale 1 is upsampled, added to the scale-2 feature map, and bounding box information is output through convolution; the side length of the whole feature map is doubled relative to scale 1;
(3) Third scale feature map y3: processed on the same principle as the second scale feature map y2.
Here, the depth of the three scale feature maps is 255, and their side lengths are in the ratio 13:26:52 (a sketch of these output shapes is given below).
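As a quick arithmetic check of these shapes (assuming the standard Yolov3 input resolution of 416×416, which the text does not state explicitly): strides of 32, 16 and 8 give side lengths 13, 26 and 52, and a depth of 255 corresponds to 3 anchors × (80 classes + 4 box coordinates + 1 confidence); a single fire class would instead give a depth of 3 × (1 + 5) = 18:

# Sketch of the three Yolov3 output shapes for an assumed 416x416 input.
# The 80-class head yields the depth 255 quoted in the text; a one-class
# fire head would yield 3 * (1 + 5) = 18 instead.
input_side = 416
for stride in (32, 16, 8):
    side = input_side // stride      # 13, 26, 52: side length doubles per scale
    depth = 3 * (80 + 4 + 1)         # anchors * (classes + box + confidence)
    print(f"stride {stride:2d}: {side} x {side} x {depth}")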
In the third step, the feature maps of the three scales are passed to the final Yolo Head part; the raw network output is passed to the loss layer to calculate the loss, and the result is output and corrected to obtain the final prediction. The Yolo Head is a decoder whose main structure is a Conv+BN+Act module followed by a classification layer convolved with kernel_size = 1×1 (a minimal sketch follows).
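A minimal sketch of this decoder structure, with illustrative channel counts, might look as follows:

import torch.nn as nn

class YoloHead(nn.Module):
    # Per-scale decoder: a 3x3 Conv+BN+Act module followed by a 1x1
    # convolution classification layer mapping to the prediction depth.
    def __init__(self, c_in, pred_depth=255):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(c_in, 2 * c_in, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(2 * c_in),
            nn.LeakyReLU(0.1),
        )
        self.classify = nn.Conv2d(2 * c_in, pred_depth, kernel_size=1)

    def forward(self, x):
        return self.classify(self.conv(x))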
Note that the loss function of Yolov3 is divided into three parts: the target confidence loss, the target class loss, and the target localization loss.
(1) The target confidence loss can be understood as measuring the probability that a target is present within a predicted rectangular box; it is a binary cross entropy loss (Binary Cross Entropy):

L_conf(o, c) = −Σ_i [ o_i · ln(sigmoid(c_i)) + (1 − o_i) · ln(1 − sigmoid(c_i)) ]

where L_conf denotes the target confidence loss function; o indicates the presence or absence of a target in a prediction box, with o_i indicating whether a target really exists in predicted bounding box i (0 for absent, 1 for present); c denotes the predicted values, with c_i the prediction for target box i; and sigmoid(c_i) is the sigmoid probability that a target exists in predicted rectangular box i.
(2) The target class loss also uses a binary cross entropy loss, and only positive samples incur it; in the target detection task, a positive sample generally refers to an image region containing a target:

L_cla(o, c) = −Σ_i Σ_j [ o_ij · ln(sigmoid(c_ij)) + (1 − o_ij) · ln(1 − sigmoid(c_ij)) ]

where L_cla denotes the target class loss function; o_ij indicates whether a target of class j really exists in predicted bounding box i (0 for absent, 1 for present); c_ij is the prediction for class j within bounding box i; and sigmoid(c_ij) is the sigmoid probability that an object of class j exists in predicted bounding box i.
(3) The target localization loss adopts a sum-of-squared-errors loss, and only positive samples incur it:

L_loc(t, g) = Σ_i Σ_{m ∈ {x, y, w, h}} (t_i^m − g_i^m)²

where L_loc denotes the target localization loss function; x, y, w and h are the center-point coordinates and the width and height of a box; and t and g denote the predicted target box and the ground-truth bounding box respectively, so that t_i^x, t_i^y, t_i^w, t_i^h are the center coordinates and size values of predicted target box i, and g_i^x, g_i^y, g_i^w, g_i^h are the center-point coordinates and size information of the corresponding ground-truth bounding box.
In summary, combining the three loss terms above, the overall loss function L of Yolov3 is expressed as:

L = L_conf + L_cla + L_loc
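Put together, the three terms can be sketched in PyTorch as follows; the obj_mask bookkeeping marking which predictions truly contain a target is an assumption about the training pipeline, not a detail fixed by the text:

import torch
import torch.nn.functional as F

def yolov3_loss(pred_conf, pred_cls, pred_box, obj_mask, tgt_cls, tgt_box):
    # Total loss L = L_conf + L_cla + L_loc as described above.
    # pred_conf / pred_cls are raw logits; obj_mask is 1.0 where a target
    # really exists; class and localization terms use positive samples only.
    l_conf = F.binary_cross_entropy_with_logits(pred_conf, obj_mask, reduction="sum")
    pos = obj_mask.bool()
    l_cla = F.binary_cross_entropy_with_logits(pred_cls[pos], tgt_cls[pos], reduction="sum")
    l_loc = ((pred_box[pos] - tgt_box[pos]) ** 2).sum()  # squared error over x, y, w, h
    return l_conf + l_cla + l_loc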
the method is characterized in that a forest park in a certain place is taken as an actual test place, and algorithm actual measurement is carried out by adopting test equipment comprising a Xinjiang unmanned aerial vehicle (model: M300), an onboard camera (model: H20T) and an onboard computer (model: manifold 2-G).
In order to accurately test the accuracy of the detection deep neural network, a data set for evaluating the model was captured on site in the forest park. The specific acquisition process of the test data comprises the following steps:
1) 3-5 simulated charcoal-stove fire points are set up artificially, covering fire points in an open area, partially covered by trees, and completely covered by trees (note: to comply with relevant laws and regulations, a real fire scene cannot be arranged on site, so a charcoal stove is adopted as the simulated fire source; its high temperature gives it a high similarity to a real ignition point);
2) For each simulated fire point, the unmanned aerial vehicle carrying the onboard computer flies at a height of about 30 m above the fire point and shoots infrared images of it from arbitrary angles; the captured infrared test images are stored uniformly on the camera disk, and after shooting is completed they are transferred to a local computer to build the test data set.
In total, 671 valid test images were acquired; the whole test set comprises infrared images at different shooting angles, with and without fire points, at different shooting heights and with different degrees of occlusion. Fig. 4 shows a schematic infrared image containing a fire source, and fig. 5 a schematic infrared image containing no fire source. Each test image is then manually labeled to form the final test data set, and model training is performed to obtain an accurate fire detection model.
Further, the labeled test images are input into the fire ignition point detection deep learning neural network, and the output result for each test image is recorded. The number of correct detections is counted by comparing the true label of each test image with the model output. Assuming the number of correctly detected images is n and the total number of images is m, the final accuracy of the detection neural network is calculated according to the following formula:
Accuracy = n / m × 100%
the higher the accuracy is, the better the detection effect of the detection module is.
The efficiency of the Yolov3 algorithm adopted by the invention allows the computation to be carried out on the unmanned aerial vehicle's edge equipment. In the test experiment stage, the final accuracy of the detection deep neural network was 90% by statistics, reaching the expected target; the experimental result is shown in fig. 6, where the value 0.9996 is the probability of the identified fire. The forest fire is thus successfully detected on the unmanned aerial vehicle edge device using the vehicle's own edge computing capability: fire information can be detected automatically without transmitting images to a central server, which gives full play to the utility of the unmanned aerial vehicle, shortens the communication distance, and lays a good foundation for later prediction of fire spread and for rescue and emergency actions.
The fire detection system based on unmanned aerial vehicle edge calculation provided by the invention is described below, and the fire detection system based on unmanned aerial vehicle edge calculation described below and the fire detection method based on unmanned aerial vehicle edge calculation described above can be correspondingly referred to each other.
Fig. 7 is a schematic structural diagram of a fire detection system based on unmanned aerial vehicle edge calculation according to an embodiment of the present invention, as shown in fig. 7, including: an acquisition module 71 and a detection module 72, wherein:
the acquisition module 71 is used for acquiring image data of a target detection area and flight state data of the unmanned aerial vehicle; the detection module 72 is configured to input the target detection area image data and the unmanned aerial vehicle flight status data into a pre-trained fire detection model, so as to obtain a fire ignition detection result;
the fire detection model is obtained by constructing an edge calculation deep learning model framework based on a Yolov3 algorithm and training the edge calculation deep learning model framework by adopting a target detection area image sample and a classification label.
Fig. 8 illustrates a physical structure diagram of an electronic device, as shown in fig. 8, which may include: processor 810, communication interface (Communications Interface) 820, memory 830, and communication bus 840, wherein processor 810, communication interface 820, memory 830 accomplish communication with each other through communication bus 840. The processor 810 may invoke logic instructions in the memory 830 to perform a fire detection method based on unmanned aerial vehicle edge calculations, the method comprising: collecting image data of a target detection area and flight state data of the unmanned aerial vehicle; inputting the target detection area image data and the unmanned aerial vehicle flight state data into a pre-trained fire detection model to obtain a fire ignition point detection result; the fire detection model is obtained by constructing an edge calculation deep learning model framework based on a Yolov3 algorithm and training the edge calculation deep learning model framework by adopting a target detection area image sample and a classification label.
Further, the logic instructions in the memory 830 may be implemented in the form of software functional units and, when sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part of it contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product, the computer program product including a computer program, the computer program being storable on a non-transitory computer readable storage medium, the computer program, when executed by a processor, being capable of executing the fire detection method based on unmanned aerial vehicle edge calculation provided by the above methods, the method comprising: collecting image data of a target detection area and flight state data of the unmanned aerial vehicle; inputting the target detection area image data and the unmanned aerial vehicle flight state data into a pre-trained fire detection model to obtain a fire ignition point detection result; the fire detection model is obtained by constructing an edge calculation deep learning model framework based on a Yolov3 algorithm and training the edge calculation deep learning model framework by adopting a target detection area image sample and a classification label.
In yet another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the fire detection method based on unmanned aerial vehicle edge calculation provided by the above methods, the method comprising: collecting image data of a target detection area and flight state data of the unmanned aerial vehicle; inputting the target detection area image data and the unmanned aerial vehicle flight state data into a pre-trained fire detection model to obtain a fire ignition point detection result; the fire detection model being obtained by constructing an edge calculation deep learning model framework based on the Yolov3 algorithm and training the framework with target detection area image samples and classification labels. The apparatus embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. The fire detection method based on unmanned aerial vehicle edge calculation is characterized by comprising the following steps of:
collecting image data of a target detection area and flight state data of the unmanned aerial vehicle;
inputting the target detection area image data and the unmanned aerial vehicle flight state data into a pre-trained fire detection model to obtain a fire ignition point detection result;
the fire detection model is obtained by constructing an edge calculation deep learning model framework based on a Yolov3 algorithm and training the edge calculation deep learning model framework by adopting a target detection area image sample and a classification label.
2. The unmanned aerial vehicle edge calculation-based fire detection method of claim 1, wherein the acquiring target detection area image data and unmanned aerial vehicle flight status data comprises:
collecting a plurality of real-time images to be detected of a target detection area;
and acquiring three-dimensional space position information of the unmanned aerial vehicle when shooting images.
3. The fire detection method based on unmanned aerial vehicle edge calculation according to claim 1, wherein the fire detection model is obtained by:
acquiring an image sample of the target detection area;
constructing a Yolov3 model comprising a Backbone part, a Neck part and a Yolo Head part, and acquiring the classification label;
inputting the target detection area image sample to the Backbone part for feature extraction and downsampling to obtain an image downsampling feature map;
inputting the image downsampling feature images to the Neck part for upsampling to obtain feature images with different scales;
and inputting the feature images with different scales into a Yolo Head part to calculate loss, so as to obtain the fire detection model.
4. The unmanned aerial vehicle edge calculation-based fire detection method of claim 3, wherein the obtaining the target detection area image samples comprises:
and acquiring a plurality of infrared image samples of the target detection area at different shooting angles, with and without fire points, at different shooting heights and with different degrees of occlusion.
5. The fire detection method based on unmanned aerial vehicle edge calculation of claim 3, wherein the inputting the target detection area image sample to the Backbone part for feature extraction and downsampling to obtain an image downsampling feature map comprises:
determining that the Backbone part is a Darknet-53 deep convolutional neural network comprising 52 convolutional layers and 1 fully-connected layer, wherein the Darknet-53 deep convolutional neural network further comprises a plurality of residual modules, and convolutional layers with a preset convolution kernel size and a preset convolution stride are arranged between the residual modules;
and inputting the target detection area image sample into the Darknet-53 deep convolutional neural network to perform multi-scale downsampling, and obtaining the image downsampling characteristic map.
6. The fire detection method based on unmanned aerial vehicle edge calculation of claim 3, wherein the inputting the image downsampling feature map to the Neck part for upsampling to obtain different scale feature maps comprises:
determining the Neck part as a path aggregation network;
inputting the image downsampling feature images into the path aggregation network to respectively obtain a first scale feature image, a second scale feature image and a third scale feature image, wherein the depths of the first scale feature image, the second scale feature image and the third scale feature image are the same, and the side lengths of the first scale feature image, the second scale feature image and the third scale feature image are sequentially doubled.
7. The fire detection method based on unmanned aerial vehicle edge calculation of claim 3, wherein the inputting the different scale feature maps to the Yolo Head part for loss calculation, to obtain the fire detection model, comprises:
calculating the target confidence loss of all samples in the feature maps of different scales by adopting a binary cross entropy loss, to obtain the probability that a target exists in the target detection frame;
calculating the target class loss of the positive samples in the feature maps of different scales by adopting a binary cross entropy loss, to obtain the probability that each target class exists in the target detection frame;
calculating the target localization loss of the positive samples in the feature maps of different scales by adopting a sum-of-squared-errors loss, to obtain the localization agreement between the target in the detection frame and the ground truth;
and correcting the Yolov3 model by integrating the probability that a target exists in the detection frame, the probability that each target class exists in the detection frame, and the localization agreement between target and ground truth, to obtain the fire detection model.
8. Fire detection system based on unmanned aerial vehicle edge calculation, characterized by comprising:
the acquisition module is used for acquiring image data of a target detection area and flight state data of the unmanned aerial vehicle;
the detection module is used for inputting the target detection area image data and the unmanned aerial vehicle flight state data into a pre-trained fire detection model to obtain a fire ignition point detection result;
the fire detection model is obtained by constructing an edge calculation deep learning model framework based on a Yolov3 algorithm and training the edge calculation deep learning model framework by adopting a target detection area image sample and a classification label.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the unmanned aerial vehicle edge calculation-based fire detection method of any of claims 1 to 7 when the program is executed.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the unmanned aerial vehicle edge calculation-based fire detection method of any of claims 1 to 7.
CN202310148429.4A 2023-02-20 2023-02-20 Fire detection method and system based on unmanned aerial vehicle edge calculation Pending CN116206223A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310148429.4A CN116206223A (en) 2023-02-20 2023-02-20 Fire detection method and system based on unmanned aerial vehicle edge calculation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310148429.4A CN116206223A (en) 2023-02-20 2023-02-20 Fire detection method and system based on unmanned aerial vehicle edge calculation

Publications (1)

Publication Number Publication Date
CN116206223A true CN116206223A (en) 2023-06-02

Family

ID=86514295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310148429.4A Pending CN116206223A (en) 2023-02-20 2023-02-20 Fire detection method and system based on unmanned aerial vehicle edge calculation

Country Status (1)

Country Link
CN (1) CN116206223A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116978207A (en) * 2023-09-20 2023-10-31 张家港江苏科技大学产业技术研究院 Multifunctional laboratory safety monitoring and early warning system
CN116978207B (en) * 2023-09-20 2023-12-01 张家港江苏科技大学产业技术研究院 Multifunctional laboratory safety monitoring and early warning system
CN117315028A (en) * 2023-10-12 2023-12-29 北京多维视通技术有限公司 Method, device, equipment and medium for positioning fire point of outdoor fire scene
CN117315028B (en) * 2023-10-12 2024-04-30 北京多维视通技术有限公司 Method, device, equipment and medium for positioning fire point of outdoor fire scene
CN117073671A (en) * 2023-10-13 2023-11-17 湖南光华防务科技集团有限公司 Fire scene positioning method and system based on unmanned aerial vehicle multi-point measurement
CN117073671B (en) * 2023-10-13 2023-12-22 湖南光华防务科技集团有限公司 Fire scene positioning method and system based on unmanned aerial vehicle multi-point measurement

Similar Documents

Publication Publication Date Title
CN110059558B (en) Orchard obstacle real-time detection method based on improved SSD network
CN113065558A (en) Lightweight small target detection method combined with attention mechanism
CN111080693A (en) Robot autonomous classification grabbing method based on YOLOv3
CN110889324A (en) Thermal infrared image target identification method based on YOLO V3 terminal-oriented guidance
Wang et al. A deep-learning-based sea search and rescue algorithm by UAV remote sensing
CN116206223A (en) Fire detection method and system based on unmanned aerial vehicle edge calculation
CN111582234B (en) Large-scale oil tea tree forest fruit intelligent detection and counting method based on UAV and deep learning
CN113076871B (en) Fish shoal automatic detection method based on target shielding compensation
CN111079739B (en) Multi-scale attention feature detection method
CN113850242B (en) Storage abnormal target detection method and system based on deep learning algorithm
CN112528974B (en) Distance measuring method and device, electronic equipment and readable storage medium
CN113962282A (en) Improved YOLOv5L + Deepsort-based real-time detection system and method for ship engine room fire
CN113537180B (en) Tree obstacle identification method and device, computer equipment and storage medium
CN113706480A (en) Point cloud 3D target detection method based on key point multi-scale feature fusion
CN114241296A (en) Method for detecting meteorite crater obstacle during lunar landing, storage medium and electronic device
CN114565842A (en) Unmanned aerial vehicle real-time target detection method and system based on Nvidia Jetson embedded hardware
CN110321867B (en) Shielded target detection method based on component constraint network
CN116385958A (en) Edge intelligent detection method for power grid inspection and monitoring
CN111062950A (en) Method, storage medium and equipment for multi-class forest scene image segmentation
CN117218545A (en) LBP feature and improved Yolov 5-based radar image detection method
CN114519819B (en) Remote sensing image target detection method based on global context awareness
CN115240168A (en) Perception result obtaining method and device, computer equipment and storage medium
Wu et al. Research on Asphalt Pavement Disease Detection Based on Improved YOLOv5s
CN111160219B (en) Object integrity evaluation method and device, electronic equipment and storage medium
CN112329550A (en) Weak supervision learning-based disaster-stricken building rapid positioning evaluation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination