CN112906481A - Method for realizing forest fire detection based on unmanned aerial vehicle - Google Patents

Method for realizing forest fire detection based on unmanned aerial vehicle

Info

Publication number
CN112906481A
CN112906481A CN202110094522.2A CN202110094522A
Authority
CN
China
Prior art keywords
fire
forest
detection
aerial vehicle
unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110094522.2A
Other languages
Chinese (zh)
Inventor
吴武勋
段洪琳
杜渐
宋建斌
张凯
陈卫强
吴云飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhaoshang Xinzhi Technology Co ltd
Original Assignee
Zhaoshang Xinzhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhaoshang Xinzhi Technology Co ltd filed Critical Zhaoshang Xinzhi Technology Co ltd
Priority to CN202110094522.2A priority Critical patent/CN112906481A/en
Publication of CN112906481A publication Critical patent/CN112906481A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Fire-Detection Mechanisms (AREA)
  • Image Analysis (AREA)

Abstract

One or more embodiments of the present disclosure provide a method for implementing forest fire detection based on an unmanned aerial vehicle, which classifies sample pictures of forests and flames, labels the regions where the targets are located in the sample pictures, trains a detection model and a classification model on the classified and labeled sample pictures, detects a forest image by using the trained models to obtain possible fire areas, re-identifies the possible fire areas, and finally obtains the position coordinates of the fire occurrence area.

Description

Method for realizing forest fire detection based on unmanned aerial vehicle
Technical Field
One or more embodiments of the present specification relate to the technical field of forest fire prevention and control, and in particular, to a method for implementing forest fire detection based on an unmanned aerial vehicle.
Background
According to the latest data published by the National Bureau of Statistics, the forest area of China has been increasing year by year, and the national forest coverage rate reached 23% as of 2018. However, the number of forest fires per year has also gradually increased in recent years; besides billions in economic losses, these fires cause casualties every year. If such incidents could be detected promptly at the early stage of a forest fire, both the economic losses and the safety accidents could be reduced.
With the continued deepening of research in the field of computer vision, the security field can also achieve unmanned supervision and early warning by drawing on these research results. In the field of forest fires, for example, where the forest area is large or the roads are rugged, safety hazards are easily missed. Fixed-point, scheduled cruising can be carried out by an unmanned aerial vehicle, the captured images and scenes can be analyzed in real time by computer vision techniques, possible fire points can be warned of early, and the position coordinates can be sent to notify security personnel to take corresponding fire-prevention measures, so that a fire can be stopped before it spreads.
Traditional forest fire detection first adopted infrared sensors. An infrared sensor has a simple circuit and is easy to manufacture, but it is heavily affected by illumination, which varies greatly inside a forest, so false detections may occur; moreover, it cannot measure distance and its response is relatively slow. After convolutional neural networks appeared, forest pictures could be collected by a camera and analyzed: the images are converted to grayscale according to their RGB components, and fire detection is performed by extracting the edge features and color features of flames. However, in areas where the color of the leaves is similar to that of fire, misidentification can occur, for example with maple forests, red plastic bags or small red animals. At present, target detection algorithms are commonly used. Early target detection algorithms acquired shallow image features through convolution, pooling and other elementary operations; limited by the hardware of the time, the networks could not be made deeper or more complex, so these methods have certain limitations and lower detection accuracy in a relatively complex application scene such as a forest.
Disclosure of Invention
In view of this, an object of one or more embodiments of the present specification is to provide a method for implementing forest fire detection based on an unmanned aerial vehicle, so as to solve the problem of low accuracy in detecting forest fires.
Based on the above purpose, one or more embodiments of the present specification provide a method for implementing forest fire detection based on an unmanned aerial vehicle, including the following steps:
collecting sample pictures of forests and flames by using an unmanned aerial vehicle, wherein the sample pictures comprise various forest backgrounds and pictures taken at various times and under various illumination conditions; and synthesizing a data set using pictures of various flames and smoke together with the collected pictures;
preprocessing and classifying the sample picture, marking the region where the target is located in the sample picture, and dividing the data set into a training set and a testing set;
performing detection model and classification model training on the classified sample pictures based on the labels to obtain available detection model parameters and classification model parameters;
collecting a forest image shot by an unmanned aerial vehicle;
detecting the collected forest image by using the trained detection model to obtain a possible fire area in the forest image;
and re-identifying the possible fire area by using the classification model, judging whether a fire has occurred, and calculating the position coordinates of the fire occurrence area.
Preferably, the training of the detection model and the classification model further comprises: and dividing the data set into a training set and a testing set according to the proportion of 8:2, training detection model parameters and classification model parameters by using samples in the training set, and testing the effects of the detection model and the classification model by using the samples in the testing set.
Preferably, after the forest image shot by the unmanned aerial vehicle is collected, the method further comprises preprocessing the collected forest image, and the preprocessing comprises the following steps:
generating the data set by applying random positioning, random resizing and stretching, Gaussian noise addition, random cropping and pasting, rotation, left-right flipping and other methods to the flame pictures, the dense smoke pictures and the collected pictures.
Preferably, the detecting the collected forest image by using the trained detection model to obtain a possible fire area in the forest image includes: the FasterRCNN algorithm is introduced to detect the collected forest images, including,
proposing suggestions for possible target positions in the images of the key frames to obtain candidate area images possibly containing targets;
extracting candidate region images possibly containing targets by adopting a proper feature model to obtain feature representation;
judging whether each candidate area contains a target of a specific type or not by means of a classifier and a GPU;
and obtaining a target detection frame as a possible fire area through frame position regression post-processing operation.
Preferably, the extracting of the candidate region image possibly containing the target by using the suitable feature model comprises:
cutting out possible candidate areas from the original image;
and identifying the cut images by adopting a resnet101 depth residual error network model to obtain characteristic representation.
Preferably, the judging, by means of a classifier and the GPU, of whether each candidate area contains a target of a specific type comprises:
calculating the category to which each feature representation belongs through a fully connected layer and a softmax function, and outputting a probability vector;
and the position offset of each feature representation is obtained by using bounding box regression again.
Preferably, the re-identifying of the detected possible fire area by using the classification model, the judging of whether a fire has occurred, and the calculating of the position coordinates of the fire area comprise:
setting a proper confidence threshold;
comparing the confidence coefficient of the target detection frame obtained by detection with a confidence coefficient threshold value, screening, and reserving the target detection frame with the confidence coefficient higher than the confidence coefficient threshold value;
traversing all the target detection frames, judging the IOU value between every two overlapped target detection frames, and rejecting the target detection frame with the larger IOU value;
and counting the target detection frames finally obtained in the forest image, taking them as fire occurrence areas, and calculating the position and extent of each fire occurrence area according to the coordinates of the unmanned aerial vehicle.
Preferably, the method further comprises:
when a fire is judged to occur, an early warning is given out and the position of a fire occurrence area is reported.
As can be seen from the above, the method for detecting forest fires based on an unmanned aerial vehicle provided in one or more embodiments of the present disclosure classifies sample pictures of forests and flames, labels the regions where the targets are located, trains a detection model and a classification model based on the classification and labeling results, detects the forest image by using the trained models to obtain possible fire areas, re-identifies the possible fire areas, and finally obtains the position coordinates of the fire occurrence areas.
Drawings
In order to more clearly illustrate one or more embodiments or prior art solutions of the present specification, the drawings that are needed in the description of the embodiments or prior art will be briefly described below, and it is obvious that the drawings in the following description are only one or more embodiments of the present specification, and that other drawings may be obtained by those skilled in the art without inventive effort from these drawings.
Fig. 1 is a schematic flow chart of a method for implementing forest fire detection based on an unmanned aerial vehicle according to one or more embodiments of the present disclosure;
FIG. 2 is a schematic diagram of a test model training process according to one or more embodiments of the present disclosure;
FIG. 3 is a schematic diagram illustrating a video key frame selection process according to one or more embodiments of the present disclosure;
FIG. 4 is a schematic diagram of a flame detection process according to one or more embodiments of the disclosure;
FIG. 5 is a schematic diagram of a flame matching process according to one or more embodiments of the disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, the present disclosure is further described in detail below with reference to specific embodiments.
It is to be noted that unless otherwise defined, technical or scientific terms used in one or more embodiments of the present specification should have the ordinary meaning as understood by those of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in one or more embodiments of the specification is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
For forest fire detection or monitoring, the following scheme is mainly adopted in the prior art:
a. segmenting a candidate flame region of a frame image by using an image processing technology to obtain flame blocks, inputting the flame blocks into a flame characteristic analysis model to obtain flame identification marks, and taking the candidate flame blocks with flames as preferred flame blocks; acquiring an optical flow histogram of any one of the preferred flame blocks of any one of the frame images, and acquiring entropy of the optical flow histogram; and acquiring a flame identification result based on the entropy of any one of the preferred flame blocks.
b. Collecting environmental parameters by adopting a sensor group; analyzing the environmental parameters through an artificial neural network, and obtaining a judgment result of whether a fire exists; if the judgment result is yes, an instant alarm is carried out, and if the judgment result is no, the operation is finished.
c. A CCD visual information acquisition module and a DP bus information transmission module are designed for the bottom control station, and the function of an intelligent multi-state visual fire detection system is added. The mean value of the fire detection images is used as the initial background, background images are acquired and updated in real time for different fire events, different thresholds are set according to the target characteristics on the binary image obtained by a DSP processor, and the type of fire is judged.
However, the above solution has the following disadvantages:
a. flame detection is performed only based on the morphological characteristics of flame, and therefore, erroneous recognition may occur.
b. The response of the sensor is slow and is affected by the light.
c. The equipment is relatively fixed and complicated to deploy, and is not as convenient as a cruising unmanned aerial vehicle.
To this end, this specification provides a method for realizing forest fire detection based on an unmanned aerial vehicle, including the following steps:
S101, collecting sample pictures of forests and flames by using an unmanned aerial vehicle, wherein the sample pictures comprise various forest backgrounds and pictures taken at various times and under various illumination conditions; synthesizing a data set using pictures of various flames and smoke together with the collected pictures;
for example, more than 2000 sample pictures are selected.
S102, preprocessing and classifying sample pictures, labeling areas where targets are located in the sample pictures, and dividing a data set into a training set and a testing set;
for example, areas and categories of pictures are labeled using the labelImg tool.
S103, training a detection model and a classification model on the classified sample pictures based on the labels to obtain available detection model parameters and classification model parameters;
S104, collecting a forest image shot by an unmanned aerial vehicle;
s105, detecting the collected forest image by using the trained detection model to obtain a possible fire area in the forest image;
s106, the possible fire area is detected, the classification model is used for re-identification, whether fire occurs is judged, and the position coordinate of the fire area is calculated.
The method for realizing forest fire detection based on the unmanned aerial vehicle provided by the embodiments of the present specification classifies forest and flame sample pictures, labels the classified sample pictures, trains a detection model and a classification model based on the classification and labeling results, detects forest images by using the trained models to obtain possible fire areas, re-identifies the possible fire areas, and finally obtains the position coordinates of the fire occurrence areas.
As an embodiment, performing detection model and classification model training further comprises: and dividing the data set into a training set and a testing set according to the proportion of 8:2, training detection model parameters and classification model parameters by using samples in the training set, and testing the effects of the detection model and the classification model by using the samples in the testing set.
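A minimal sketch of such an 8:2 split, assuming the samples are addressed by file path and shuffled with a fixed seed so that the split is reproducible:

import random

def split_dataset(sample_paths, train_ratio=0.8, seed=0):
    paths = list(sample_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_ratio)
    return paths[:cut], paths[cut:]   # training set, testing set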
In the process of labeling the region, since the target may appear at any position in the image and its size and aspect ratio are also uncertain, initially traversing the whole image with a sliding-window strategy requires setting different scales and different aspect ratios. Such an exhaustive strategy covers all possible positions of the target, which can improve detection accuracy.
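The sketch below illustrates that idea; the particular scales and width-to-height ratios are illustrative assumptions only.

def boxes_at(cx, cy, scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    # Candidate boxes of different scales and aspect ratios centered on (cx, cy);
    # each ratio is interpreted as width / height.
    boxes = []
    for s in scales:
        for r in ratios:
            w = s * r ** 0.5
            h = s / r ** 0.5
            boxes.append((cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0))
    return boxes

# Sliding the center (cx, cy) over the whole image enumerates all positions exhaustively.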
As an implementation manner, after acquiring the forest image shot by the unmanned aerial vehicle, the method further includes preprocessing the acquired forest image, including:
analyzing a video stream transmitted by the unmanned aerial vehicle, and analyzing video stream information into image frames;
and extracting key frames from the analyzed image frames.
Generally, there are three types of frames in the video stream transmitted by the drone: intra-coded frames (I frames), predictive-coded frames (P frames) and bidirectionally-coded frames (B frames). The compression ratio of an I frame is low and it serves as the basis for the P and B frames, so I frames are analyzed preferentially, followed by the P frames that contain more information. The extraction strategy is therefore to keep the I frames, which retain the most complete information, to select suitable P frames, and to discard the B frames; that is, the selected I and P frames are the ones passed on for further detection.
By extracting the key frames, the calculation amount of the system can be greatly reduced.
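As an illustration of this strategy, the following sketch keeps only the I and P frames of the transmitted stream. It assumes the PyAV library is used for decoding; the stream URL and the choice of library are assumptions, not part of this specification.

import av

def extract_key_frames(stream_url, keep_types=("I", "P")):
    container = av.open(stream_url)
    for frame in container.decode(video=0):
        # frame.pict_type names the coding type of the decoded frame (I, P, B, ...).
        if frame.pict_type is not None and frame.pict_type.name in keep_types:
            yield frame.to_image()   # PIL image handed on to the detector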
As an embodiment, detecting the acquired forest image by using a trained detection model, and obtaining a possible fire area in the forest image includes: the FasterRCNN algorithm is introduced to detect the collected forest images, including,
proposing suggestions for possible target positions in the images of the key frames to obtain candidate area images possibly containing targets;
extracting features from the candidate region images possibly containing targets by adopting a suitable feature model to obtain feature representations;
and judging whether each candidate region contains a specific type of target by means of a classifier and a GPU, wherein a region proposal network (RPN) is used for generating the candidate detection frames, and the classifier is a Softmax-based FasterRCNN model.
And obtaining a target detection frame as a possible fire area through frame position regression post-processing operation.
In this embodiment, the FasterRCNN algorithm, which currently performs among the best, is introduced for detection, so that the detection speed meets the real-time requirement while the accuracy of detection, namely flame detection, is improved.
For example, a resnet101 deep residual network may be adopted to extract the candidate region images that may contain targets and obtain the feature representations; the category to which each feature representation belongs is then calculated through a fully connected layer and a softmax function, and a probability vector cls_prob is output; at the same time, bounding box regression is used again to obtain the position offset bbox_pred of each feature representation, which is used to regress a more accurate target detection frame.
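For illustration, such a detector can be assembled with the torchvision library by pairing a resnet101 FPN backbone with the FasterRCNN head. This is a sketch under assumptions: the keyword for pretrained weights differs across torchvision versions, and the three-class setup (background, flame, smoke) is an example rather than something mandated by this specification.

import torch
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# Newer torchvision expects weights=..., older releases use pretrained=...
backbone = resnet_fpn_backbone(backbone_name="resnet101", weights=None)
model = FasterRCNN(backbone, num_classes=3)   # background + flame + smoke (assumed)
model.eval()

with torch.no_grad():
    image = torch.rand(3, 600, 800)           # stand-in for one decoded key frame
    outputs = model([image])[0]

# outputs["boxes"], outputs["labels"] and outputs["scores"] correspond to the
# regressed detection frames (bbox_pred), their categories and the softmax
# confidences (cls_prob) described above.

When targets are supplied in training mode, the same model returns the classification and box-regression losses, which is how the detection model parameters of S103 would be fitted.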
Sharing the convolutional features among region proposal, classification and regression increases the detection speed while guaranteeing detection accuracy. Other target detection algorithms, such as Inception v2, SSD and YOLO, may also be used in this example.
As an embodiment, re-identifying the detected possible fire area by using the classification model, judging whether a fire has occurred, and calculating the position coordinates of the fire area include:
setting a proper confidence coefficient threshold value c according to multiple experiments;
comparing the confidence coefficient of the target detection frame obtained by detection with a confidence coefficient threshold value, screening, and reserving the target detection frame with the confidence coefficient higher than the confidence coefficient threshold value;
traversing all the target detection frames, judging the IOU value between every two overlapped target detection frames, rejecting the target detection frames with larger IOU values, separating the detection frames in different areas as much as possible, and avoiding the repeated detection of the fire in one area;
and counting the target detection frames finally obtained in the forest image, taking them as fire occurrence areas, and calculating the position and extent of each fire occurrence area according to the coordinates of the unmanned aerial vehicle.
This process is flame matching; its purpose is to screen the detected fire areas and judge whether a fire has occurred. This step is the key of the analysis: the number of fire points is determined from the number of remaining fire detection frames, the amount of computation is small, and the matching effect is good.
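A minimal sketch of this screening follows; the threshold values are illustrative assumptions, and the pairwise suppression is written in the usual greedy form that keeps the higher-confidence frame of each heavily overlapping pair.

def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def screen_detections(boxes, scores, conf_thresh=0.6, iou_thresh=0.5):
    # 1. Confidence screening: keep frames whose confidence exceeds the threshold c.
    kept = [(b, s) for b, s in zip(boxes, scores) if s >= conf_thresh]
    # 2. Traverse the frames pairwise and suppress heavily overlapping duplicates.
    kept.sort(key=lambda bs: bs[1], reverse=True)
    result = []
    for box, score in kept:
        if all(iou(box, kept_box) < iou_thresh for kept_box, _ in result):
            result.append((box, score))
    return result   # each remaining frame is counted as one fire occurrence area

Each retained frame is counted as one fire point, and its center can then be mapped to ground coordinates from the unmanned aerial vehicle's position as described above.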
As an embodiment, the method further comprises:
when a fire is judged to occur, an early warning is given out and the position of a fire occurrence area is reported.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, is limited to these examples; within the spirit of the present disclosure, features from the above embodiments or from different embodiments may also be combined, steps may be implemented in any order, and there are many other variations of different aspects of one or more embodiments of the present description as described above, which are not provided in detail for the sake of brevity.
In addition, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown in the provided figures, for simplicity of illustration and discussion, and so as not to obscure one or more embodiments of the disclosure. Furthermore, devices may be shown in block diagram form in order to avoid obscuring the understanding of one or more embodiments of the present description, and this also takes into account the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which the one or more embodiments of the present description are to be implemented (i.e., specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that one or more embodiments of the disclosure can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative instead of restrictive.
It is intended that the one or more embodiments of the present specification embrace all such alternatives, modifications and variations as fall within the broad scope of the appended claims. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of one or more embodiments of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (8)

1. A method for realizing forest fire detection based on an unmanned aerial vehicle is characterized by comprising the following steps:
collecting sample pictures of forests and flames by using an unmanned aerial vehicle, wherein the sample pictures comprise a plurality of forest backgrounds and pictures taken at various times and under various illumination conditions; and synthesizing a data set using pictures of various flames and smoke together with the collected pictures;
preprocessing and classifying the sample picture, marking the region where the target is located in the sample picture, and dividing the data set into a training set and a testing set;
performing detection model and classification model training on the classified sample pictures based on the labels to obtain available detection model parameters and classification model parameters;
collecting a forest image shot by an unmanned aerial vehicle;
detecting the collected forest image by using the trained detection model to obtain a possible fire area in the forest image;
and re-identifying the possible fire area by using the classification model, judging whether a fire has occurred, and calculating the position coordinates of the fire occurrence area.
2. The method for implementing forest fire detection based on unmanned aerial vehicle as claimed in claim 1, wherein the training of the detection model and the classification model further comprises: and dividing the data set into a training set and a testing set according to the proportion of 8:2, training detection model parameters and classification model parameters by using samples in the training set, and testing the effects of the detection model and the classification model by using the samples in the testing set.
3. The method for realizing forest fire detection based on the unmanned aerial vehicle as claimed in claim 1, wherein after the forest image shot by the unmanned aerial vehicle is collected, the method further comprises preprocessing the collected forest image, comprising:
generating the data set by applying random positioning, random resizing and expansion, Gaussian noise addition, random cropping and pasting, rotation and left-right flipping to the flame pictures, the smoke pictures and the collected pictures.
4. The method for realizing forest fire detection based on the unmanned aerial vehicle as claimed in claim 3, wherein the detecting the collected forest image by using the trained detection model to obtain the possible fire area in the forest image comprises: the FasterRCNN algorithm is introduced to detect the collected forest images, including,
proposing suggestions for possible target positions in the images of the key frames to obtain candidate area images possibly containing targets;
extracting candidate region images possibly containing targets by adopting a proper feature model to obtain feature representation;
judging whether each candidate area contains a target of a specific type or not by means of a classifier and a GPU;
and obtaining a target detection frame as a possible fire area through frame position regression post-processing operation.
5. The method for realizing forest fire detection based on unmanned aerial vehicle as claimed in claim 4, wherein the extracting the candidate area image possibly containing the target by adopting a suitable feature model comprises:
cutting out possible candidate areas from the original image;
and identifying the cut images by adopting a resnet101 depth residual error network model to obtain characteristic representation.
6. The unmanned aerial vehicle-based forest fire detection method according to claim 4, wherein the judging, by means of a classifier and a GPU, of whether each candidate area contains a target of a specific type comprises:
calculating the category of each feature representation through a full connection layer and a softmax function, and outputting a probability vector;
and the position offset of each feature representation is obtained by using bounding box regression again.
7. The method for realizing forest fire detection based on unmanned aerial vehicle as claimed in claim 4, wherein the step of re-identifying the possible fire area by using the classification model and judging whether the fire occurs or not comprises the following steps:
setting a proper confidence threshold;
comparing the confidence coefficient of the target detection frame obtained by detection with a confidence coefficient threshold value, screening, and reserving the target detection frame with the confidence coefficient higher than the confidence coefficient threshold value;
traversing all the target detection frames, judging the IOU value between every two overlapped target detection frames, and rejecting the target detection frame with the larger IOU value;
and counting the target detection frames finally obtained in the forest image, taking them as fire occurrence areas, and calculating the position and extent of each fire occurrence area according to the coordinates of the unmanned aerial vehicle.
8. The method for implementing forest fire detection based on unmanned aerial vehicle as claimed in claim 1, wherein the method further comprises:
when a fire is judged to occur, an early warning is given out and the position of a fire occurrence area is reported.
CN202110094522.2A 2021-01-23 2021-01-23 Method for realizing forest fire detection based on unmanned aerial vehicle Pending CN112906481A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110094522.2A CN112906481A (en) 2021-01-23 2021-01-23 Method for realizing forest fire detection based on unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110094522.2A CN112906481A (en) 2021-01-23 2021-01-23 Method for realizing forest fire detection based on unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
CN112906481A true CN112906481A (en) 2021-06-04

Family

ID=76117320

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110094522.2A Pending CN112906481A (en) 2021-01-23 2021-01-23 Method for realizing forest fire detection based on unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN112906481A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113553985A (en) * 2021-08-02 2021-10-26 中再云图技术有限公司 High-altitude smoke detection and identification method based on artificial intelligence, storage device and server
CN113723300A (en) * 2021-08-31 2021-11-30 平安国际智慧城市科技股份有限公司 Artificial intelligence-based fire monitoring method and device and storage medium
CN113988222A (en) * 2021-11-29 2022-01-28 东北林业大学 Forest fire detection and identification method based on fast-RCNN
CN114037910A (en) * 2021-11-29 2022-02-11 东北林业大学 Unmanned aerial vehicle forest fire detecting system
CN115512307A (en) * 2022-11-23 2022-12-23 中国民用航空飞行学院 Wide-area space infrared multi-point real-time fire detection method and system and positioning method
CN115546672A (en) * 2022-11-30 2022-12-30 广州天地林业有限公司 Forest picture processing method and system based on image processing
CN116362944A (en) * 2023-05-31 2023-06-30 四川三思德科技有限公司 Anti-flight anti-operation interference processing method, device and medium based on difference

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017137393A1 (en) * 2016-02-10 2017-08-17 Tyco Fire & Security Gmbh A fire detection system using a drone
CN108416963A (en) * 2018-05-04 2018-08-17 湖北民族学院 Forest Fire Alarm method and system based on deep learning
CN108764142A (en) * 2018-05-25 2018-11-06 北京工业大学 Unmanned plane image forest Smoke Detection based on 3DCNN and sorting technique
CN108898069A (en) * 2018-06-05 2018-11-27 辽宁石油化工大学 Video flame detecting method based on multiple Classifiers Combination
CN110147758A (en) * 2019-05-17 2019-08-20 电子科技大学成都学院 A kind of forest fire protection method based on deep learning
CN110969205A (en) * 2019-11-29 2020-04-07 南京恩博科技有限公司 Forest smoke and fire detection method based on target detection, storage medium and equipment
CN111553200A (en) * 2020-04-07 2020-08-18 北京农业信息技术研究中心 Image detection and identification method and device
CN111814638A (en) * 2020-06-30 2020-10-23 成都睿沿科技有限公司 Security scene flame detection method based on deep learning

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017137393A1 (en) * 2016-02-10 2017-08-17 Tyco Fire & Security Gmbh A fire detection system using a drone
CN108416963A (en) * 2018-05-04 2018-08-17 湖北民族学院 Forest Fire Alarm method and system based on deep learning
CN108764142A (en) * 2018-05-25 2018-11-06 北京工业大学 Unmanned plane image forest Smoke Detection based on 3DCNN and sorting technique
CN108898069A (en) * 2018-06-05 2018-11-27 辽宁石油化工大学 Video flame detecting method based on multiple Classifiers Combination
CN110147758A (en) * 2019-05-17 2019-08-20 电子科技大学成都学院 A kind of forest fire protection method based on deep learning
CN110969205A (en) * 2019-11-29 2020-04-07 南京恩博科技有限公司 Forest smoke and fire detection method based on target detection, storage medium and equipment
CN111553200A (en) * 2020-04-07 2020-08-18 北京农业信息技术研究中心 Image detection and identification method and device
CN111814638A (en) * 2020-06-30 2020-10-23 成都睿沿科技有限公司 Security scene flame detection method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
江洋 et al.: "Research on Flame Detection Based on the RetinaNet Deep Learning Model", Natural Science Journal of Hainan University, vol. 37, no. 4, pages 1-7 *
黄杰 et al.: "Color-Guided Flame Detection Based on Faster R-CNN", Journal of Computer Applications, pages 1471-1474 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113553985A (en) * 2021-08-02 2021-10-26 中再云图技术有限公司 High-altitude smoke detection and identification method based on artificial intelligence, storage device and server
CN113723300A (en) * 2021-08-31 2021-11-30 平安国际智慧城市科技股份有限公司 Artificial intelligence-based fire monitoring method and device and storage medium
CN113988222A (en) * 2021-11-29 2022-01-28 东北林业大学 Forest fire detection and identification method based on fast-RCNN
CN114037910A (en) * 2021-11-29 2022-02-11 东北林业大学 Unmanned aerial vehicle forest fire detecting system
CN115512307A (en) * 2022-11-23 2022-12-23 中国民用航空飞行学院 Wide-area space infrared multi-point real-time fire detection method and system and positioning method
CN115546672A (en) * 2022-11-30 2022-12-30 广州天地林业有限公司 Forest picture processing method and system based on image processing
CN115546672B (en) * 2022-11-30 2023-03-24 广州天地林业有限公司 Forest picture processing method and system based on image processing
CN116362944A (en) * 2023-05-31 2023-06-30 四川三思德科技有限公司 Anti-flight anti-operation interference processing method, device and medium based on difference
CN116362944B (en) * 2023-05-31 2023-07-28 四川三思德科技有限公司 Anti-flight anti-operation interference processing method, device and medium based on difference

Similar Documents

Publication Publication Date Title
CN112906481A (en) Method for realizing forest fire detection based on unmanned aerial vehicle
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
Shao et al. Cloud detection in remote sensing images based on multiscale features-convolutional neural network
De Charette et al. Real time visual traffic lights recognition based on spot light detection and adaptive traffic lights templates
WO2020253308A1 (en) Human-machine interaction behavior security monitoring and forewarning method for underground belt transportation-related personnel
US8340420B2 (en) Method for recognizing objects in images
US20090309966A1 (en) Method of detecting moving objects
CN113139521B (en) Pedestrian boundary crossing monitoring method for electric power monitoring
KR101697161B1 (en) Device and method for tracking pedestrian in thermal image using an online random fern learning
CA3094424A1 (en) Safety monitoring and early-warning method for man-machine interaction behavior of underground conveyor belt operator
CN112825192B (en) Object identification system and method based on machine learning
CN112149512A (en) Helmet wearing identification method based on two-stage deep learning
CN111126293A (en) Flame and smoke abnormal condition detection method and system
CN113065578A (en) Image visual semantic segmentation method based on double-path region attention coding and decoding
CN112464797A (en) Smoking behavior detection method and device, storage medium and electronic equipment
CN115620178A (en) Real-time detection method for abnormal and dangerous behaviors of power grid of unmanned aerial vehicle
CN101930540A (en) Video-based multi-feature fusion flame detecting device and method
CN108563997B (en) Method and device for establishing face detection model and face recognition
CN113657250A (en) Flame detection method and system based on monitoring video
CN113052140A (en) Video-based substation personnel and vehicle violation detection method and system
CN117475353A (en) Video-based abnormal smoke identification method and system
CN101930541A (en) Video-based flame detecting device and method
KR102085070B1 (en) Apparatus and method for image registration based on deep learning
Klammsteiner et al. Vision Based Stationary Railway Track Monitoring System
CN113223081A (en) High-altitude parabolic detection method and system based on background modeling and deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination