CN116503715B - Forest fire detection method based on cascade network - Google Patents


Info

Publication number: CN116503715B
Application number: CN202310685352.4A
Authority: CN (China)
Prior art keywords: layer, network, forest, images, local
Legal status: Active (granted; status as listed by Google Patents, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN116503715A (application publication)
Inventors: 夏景明 (Xia Jingming), 麻学岚 (Ma Xuelan), 谈玲 (Tan Ling)
Current and original assignee: Nanjing University of Information Science and Technology
Application filed by Nanjing University of Information Science and Technology
Priority to CN202310685352.4A
Publication of application CN116503715A, followed by grant and publication of CN116503715B

Classifications

    • G06V10/82 — Image or video recognition using pattern recognition or machine learning, using neural networks
    • G06N3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N3/08 — Learning methods
    • G06V10/42 — Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis
    • G06V10/764 — Recognition using classification, e.g. of video objects
    • G06V10/806 — Fusion of extracted features at the feature extraction level
    • G06V20/188 — Terrestrial scenes: vegetation
    • Y02A40/28 — Adaptation technologies in agriculture specially adapted for farming

Abstract

The invention relates to a forest fire detection method based on a cascade network. A smoke-flame detection network of MC-YOLOv5s structure, comprising a global feature extraction network and a local feature extraction network, detects forest-fire local images and suspicious-fog local images in forest shooting images, realizing primary forest fire detection. When no forest-fire local image is found but a suspicious-fog local image exists, a smoke classification network of SCN structure further classifies the suspicious-fog local image, combined with the preset meteorological element values of the forest area at the time of shooting, to distinguish smoke from cloud, realizing secondary forest fire detection. The method thereby effectively avoids forest fire false alarms and improves forest fire detection precision.

Description

Forest fire detection method based on cascade network
Technical Field
The invention relates to a forest fire detection method based on a cascade network, and belongs to the technical field of computer vision image recognition.
Background
Forest fires seriously endanger natural resources and human safety. Early-warning systems help people grasp fire danger levels and take effective precautions early, reducing the losses caused by fires. However, for areas with wide forest coverage and frequent cloud and fog, satellite early-warning systems cannot observe the ground directly.
Disclosure of Invention
The technical problem the invention aims to solve is to provide a forest fire detection method based on a cascade network, which uses the designed MC-YOLOv5s network to detect forest fire pictures and exploits the strong correlation between meteorological parameters and the generation of cloud and smoke to classify cloud versus smoke, thereby effectively avoiding forest fire false alarms and improving forest fire detection precision.
The invention adopts the following technical scheme to solve the above technical problem: a forest fire detection method based on a cascade network, which executes the following steps A to B to realize forest fire detection on forest shooting images;
step A, based on the fact that the pre-training is well performed, forest fire local images and suspicious fog local images in the forest shooting images are used as input, a smoke flame detection network with an MC-YOLOv5s structure and comprising a global feature extraction network and a local feature extraction network is used as output, processing analysis is performed on the forest shooting images, whether the forest fire local images exist in the forest shooting images or not is judged, if yes, the occurrence of fire conditions in a forest area corresponding to the forest shooting images is judged, and an alarm is triggered; otherwise, if the suspicious mist local image exists, the step B is entered; if the suspicious fog-like local image does not exist, judging that no fire occurs in the forest area corresponding to the forest shooting image;
step B, based on pre-training, taking a suspicious mist local image and preset meteorological element values of a forest area when the suspicious mist local image corresponds to shooting as input, taking the suspicious mist to be classified into an output smoke classification network according to the smoke or the cloud, processing and analyzing the suspicious mist local image according to the preset meteorological element values of the forest area when the suspicious mist local image corresponds to shooting, and if the suspicious mist is judged to be smoke, judging that a fire appears in the forest area corresponding to the forest shooting image, and triggering an alarm; if the suspicious mist is determined to be cloud mist, determining that no fire occurs in the forest area corresponding to the forest shooting image.
As a preferred technical scheme of the invention: the smoke flame detection network further comprises a feature fusion network, a neck network and a detection head;
the input end of the global feature extraction network is connected with the input end of the local feature extraction network to form the input end of the smoke flame detection network, and the global feature extraction network and the local feature extraction network respectively receive forest shooting images and perform feature extraction processing; the output of the preset three types of size global feature images in the global feature extraction network serial structure form three output ends of a global feature extraction network, the output of the preset three types of size local feature images in the local feature extraction network serial structure form three output ends of a local feature extraction network, and the sizes of the global feature images corresponding to the three output ends of the global feature extraction network are the same as the sizes of the local feature images corresponding to the three output ends of the local feature extraction network in a one-to-one correspondence manner;
the three output ends of the global feature extraction network and the three output ends of the local extraction network are respectively connected with the input ends of the feature fusion network, and feature fusion processing is carried out on the global feature images and the local feature images with the same size from the global feature extraction network and the local feature images by the feature fusion network respectively to obtain three types of feature fusion images with the same size; the output end of the feature fusion network is connected with the input end of the detection head through the neck network, detection frames which are contained in the detection head and respectively correspond to the sizes of the three feature fusion graphs output by the feature fusion network one by one are respectively used for receiving and processing the corresponding feature fusion graphs, and forest fire partial images and suspicious mist partial images in the forest shooting images are output.
As a preferred technical scheme of the invention: the global feature extraction network comprises a first Conv layer, a first Conv+C3 layer, a second Conv+C3 layer, a third Conv+C3 layer, a fourth Conv+C3 layer and a first SPPF layer which are sequentially connected in series according to the image receiving and transmitting direction, wherein the input end of the first Conv layer forms the input end of the global feature extraction network, and the output end of the second Conv+C3 layer, the output end of the third Conv+C3 layer and the output end of the first SPPF layer respectively correspond to the output ends of the preset three types of global feature graphs in the global feature extraction network series structure to form three output ends of the global feature extraction network; the structure of each Conv+C3 layer is communicated with each other, each Conv+C3 layer comprises Conv layers and C3 layers which are connected in series according to the image receiving and transmitting direction, the input end of the Conv layer in the Conv+C3 layers forms the input end of the Conv+C3 layer, and the output end of the C3 layer in the Conv+C3 layers forms the output end of the Conv+C3 layer.
As a preferred technical scheme of the invention: the local feature extraction network comprises a second Conv layer, a first GhostConv+C3 layer, a second GhostConv+C3 layer, a third GhostConv+C3 layer, a fourth GhostConv+C3 layer and a second SPPF layer which are sequentially connected in series according to the image receiving and transmitting direction, wherein the input end of the second Conv layer forms the input end of the local feature extraction network, and the output end of the second GhostConv+C3 layer, the output end of the third GhostConv+C3 layer and the output end of the second SPPF layer respectively correspond to the local feature images with three preset sizes in the local feature extraction network series structure form three output ends of the local feature extraction network; the structures of the GhostConv+C3 layers are mutually communicated, each GhostConv+C3 layer comprises a GhostConv layer and a C3 layer which are connected in series according to the image receiving and transmitting direction, the input end of the GhostConv layer in the GhostConv+C3 layer forms the input end of the GhostConv+C3 layer, and the output end of the C3 layer in the GhostConv+C3 layer forms the output end of the GhostConv+C3 layer.
As a preferred technical scheme of the invention: in the structure of the feature fusion network, firstly, two input ends of the feature fusion network respectively receive local feature graphs F with the same size from a global feature extraction network and a local feature extraction network 1 And global feature map F 2
Next, local feature map F 1 The local feature map F is obtained by sequentially passing through a first average pooling AP layer and a third Conv layer for aggregating the spatial information of the local feature map and then passing through a first Softmax layer 1 Weights of (2)Further to the local feature map F 1 Weight of +.>Combining global feature map F 2 Conveying the obtained product to a first multiplication fusion layer to carry out multiplication processing to obtain a global fusion feature map F' 2
At the same time, global feature map F 2 The global feature map F is obtained by sequentially passing through a first average pooling AP layer and a fourth Conv layer for aggregating the spatial information of the global feature map and then passing through a second Softmax layer 2 Weights of (2)Further to global feature map F 2 Weight of +.>Combining local feature maps F 1 Conveying the partial fusion feature map F 'to a second multiplication fusion layer for multiplication processing to obtain a partial fusion feature map F' 1
Then, the feature map F 'is locally fused' 1 Fusing feature map F 'with Global' 2 Conveying the image to a first Concat layer for splicing, and carrying out convolution dimension reduction processing through a fifth Conv layer to obtain a spliced fusion feature map F 3
Finally, splice and fuse the feature map F 3 Is respectively conveyed to a second average pooling AP layer and a first maximum pooling MP layer for processing, the output end of the second average pooling AP layer is butted with the output end of the first maximum pooling MP layer to the input end of a second Concat layer, the output end of the second Concat layer is sequentially connected with a sixth Conv layer, a third Softmax layer and the output end of the third Softmax layer in seriesThe input end of the third multiplication fusion layer is abutted, the input end of the third multiplication fusion layer simultaneously receives and splices the fusion feature images, and multiplication processing is carried out on the third splicing fusion feature images to obtain a feature fusion image F 4
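A hedged numpy sketch of this fusion block follows, with plain channel-mixing matrices standing in for the Conv layers (the matrix shapes, and treating the fifth and sixth Conv layers as 2C→C reductions, are our assumptions about the real 3×3 convolutions):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

ap = lambda t: t.mean(axis=(0, 1))   # spatial average pooling -> (C,)
mp = lambda t: t.max(axis=(0, 1))    # spatial max pooling -> (C,)

def fuse(f1, f2, conv3, conv4, conv5, conv6):
    """f1: local map, f2: global map, both (H, W, C). The convN matrices
    are channel-mixing stand-ins for the Conv layers of the fusion block."""
    w1 = softmax(ap(f1) @ conv3)                 # weight of F1
    w2 = softmax(ap(f2) @ conv4)                 # weight of F2
    f2p = w1 * f2                                # global fused map F'2
    f1p = w2 * f1                                # local fused map F'1
    cat = np.concatenate([f1p, f2p], axis=-1)    # first Concat
    f3 = cat @ conv5                             # fifth Conv: reduce 2C -> C
    attn = softmax(np.concatenate([ap(f3), mp(f3)]) @ conv6)
    return f3 * attn                             # feature fusion map F4

rng = np.random.default_rng(1)
C = 8
f1, f2 = rng.standard_normal((2, 4, 4, C))
f4 = fuse(f1, f2,
          rng.standard_normal((C, C)), rng.standard_normal((C, C)),
          rng.standard_normal((2 * C, C)), rng.standard_normal((2 * C, C)))
print(f4.shape)  # (4, 4, 8)
```

The point of the structure is that each branch's pooled statistics gate the *other* branch's map, after which a joint AP/MP attention reweights the concatenated result.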
As a preferred technical scheme of the invention: based on the identifier, forest fire sample images with a preset number of fire conditions and forest fog sample images with a preset number of smoke or cloud, forest fire local areas, suspicious fog local images and fog characteristic images which correspond to the known forest fog sample images respectively in the forest fire sample images are known;
according to each forest fire sample image, training an MC-YOLOv5s network by taking the forest fire sample image as input and taking a forest fire local image and a suspicious fog local image in the forest fire sample image as output;
meanwhile, according to each forest fog sample image, based on an antagonism network formed by a local feature extraction network and a discriminator in the MC-YOL0v5s network, training is carried out on the local feature extraction network in the MC-YOL0v5s network according to a loss function of the antagonism network by taking the forest fog sample image as an input and taking a fog feature image corresponding to the forest fog sample image as an output, and then the trained MC-YOL0v5s network is obtained, namely a pre-trained smoke flame detection network is formed.
As a preferred technical scheme of the invention: based on the training of the MC-YOL0v5s network and the training of the local feature extraction network in the MC-YOL 5s network, the loss function L corresponding to the antagonism network formed by the local feature extraction network and the discriminator adv The following are provided:
L adv =E[log(1-D(G(I g )))]+E[log D(G(I t ))]
with log D (G (I) t ) Minimization, i.e. L adv Maximizing the goal, performing training of a local feature extraction network in the MC-YOLOv5s network; wherein E represents the expected value of the distribution function, I t Representing a forest fire sample image, I g Representing a forest fog sample image, G representing local feature extraction in a MC-YOLOv5s networkOperation of the network, D denotes operation of the discriminator, G (I t ) Represents a fog feature image generated by the forest fire sample image after the operation of the local feature extraction network operation, G (I) g ) Fog feature images generated by the operation of local feature extraction network operation of the forest fog sample images are represented, and G (I) -based g ) The corresponding label is 0 in the discriminator, and G (I t ) In the discriminator, the corresponding label is 1, d (G (I) t ) Representing the discrimination of the fog feature image in the forest fire sample image by the discriminator, D (G (I) g ) Represents the determination of the fog feature image in the forest fog sample image by the discriminator, log d (G (I) t ) Represents the probability that the discriminator determines the fog feature image in the forest fire sample image as 1, log (1-D (G (I) g ) A) represents the probability that the discriminator determines the fog feature image in the forest fog sample image as 0.
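The loss above can be evaluated directly from discriminator scores. A small pure-Python sketch, in which sample means stand in for the expectations and the score lists are illustrative values, not data from the patent:

```python
import math

def l_adv(d_fire, d_fog):
    """L_adv = E[log(1 - D(G(I_g)))] + E[log D(G(I_t))].

    d_fire: discriminator scores D(G(I_t)) on fog feature images from
    forest *fire* samples (label 1); d_fog: scores D(G(I_g)) on fog
    feature images from forest *fog* samples (label 0)."""
    e_fire = sum(math.log(p) for p in d_fire) / len(d_fire)
    e_fog = sum(math.log(1.0 - p) for p in d_fog) / len(d_fog)
    return e_fog + e_fire

# The local feature extraction network is trained to push D(G(I_t))
# toward 1, which raises the second term and hence raises L_adv.
print(l_adv([0.9, 0.8], [0.2, 0.1]) > l_adv([0.5, 0.5], [0.2, 0.1]))  # True
```

The generator term only involves D(G(I_t)), so the local feature extraction network is rewarded when fog features extracted from genuine fire-smoke images fool the discriminator into the "fire sample" label.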
As a preferred technical scheme of the invention: the discriminator comprises a seventh Conv layer, an eighth Conv layer, a ninth Conv layer and a first fully-connected FC layer which are connected in series in sequence from the input end to the output end of the discriminator.
As a preferred technical scheme of the invention: the smoke classification network comprises a third Concat layer, a second full-connection FC layer and two channels;
one of the channels comprises seven Conv layers which are sequentially connected in series from the input end to the output end of the channel, and the input end of the channel forms one input end of the smoke classification network and is used for receiving suspicious mist partial images; the other channel comprises two Conv layers which are sequentially connected in series from the input end to the output end of the channel, and the input end of the channel forms the other input end of the smoke classification network and is used for receiving preset meteorological element values of a forest area when the suspicious fog partial image corresponds to shooting; the output ends of the two channels are butted with the input ends of a third Concat layer, and the third Concat layer aims at the output from the two channels and
and the azimuth angles of cameras corresponding to the forest shooting images to which the suspicious fog partial images belong are spliced, the output end of the third Concat layer is butted with the input end of the second full-connection FC layer, and the output end of the second full-connection FC layer forms the output end of the fog classification network.
As a preferred technical scheme of the invention: the preset meteorological elements comprise temperature, humidity, wind direction, wind speed, atmospheric pressure and precipitation.
Compared with the prior art, the forest fire detection method based on the cascade network has the following technical effects:
(1) In the forest fire detection method based on the cascade network, a smoke-flame detection network of MC-YOLOv5s structure, comprising a global feature extraction network and a local feature extraction network, detects forest-fire local images and suspicious-fog local images in forest shooting images, realizing primary forest fire detection. When no forest-fire local image is found but a suspicious-fog local image exists, a smoke classification network of SCN structure further classifies the suspicious-fog local image, combined with the preset meteorological element values of the forest area at the time of shooting, to distinguish smoke from cloud, realizing secondary forest fire detection; forest fire false alarms are thereby effectively avoided and forest fire detection precision is improved.
Drawings
FIG. 1 is a flow chart of a forest fire detection method based on a cascade network designed by the invention;
FIG. 2 is a diagram of a neural network method architecture for forest fire intelligent detection of the present invention;
FIG. 3 is a schematic diagram of MC-YOLOv5s in the design of the present invention;
FIG. 4 is a schematic diagram of a feature fusion network in accordance with the present invention;
FIG. 5 is a graph of a comparison effect of a dual-channel feature extraction network according to an embodiment of the present invention;
FIG. 6 is a graph of the MC-YOLOv5s ablation experiments of the present invention;
FIG. 7 is a graph of an SCN ablation experiment of the present invention;
fig. 8 is a view of the forest fire detection effect of the present invention.
Detailed Description
The following describes the embodiments of the present invention in further detail with reference to the drawings.
In practical application, as shown in fig. 1 and 2, the method for detecting forest fires based on the cascade network specifically executes the following steps A to B to realize forest fire detection of forest shooting images.
Step A, a pre-trained smoke-flame detection network of MC-YOLOv5s structure, comprising a global feature extraction network and a local feature extraction network, takes the forest shooting image as input and outputs the forest-fire local images and suspicious-fog local images in it. The forest shooting image is processed and analyzed to judge whether a forest-fire local image exists in it; if so, a fire is judged to have occurred in the forest area corresponding to the forest shooting image and an alarm is triggered; otherwise, if a suspicious-fog local image exists, step B is entered; if no suspicious-fog local image exists either, it is judged that no fire has occurred in the forest area corresponding to the forest shooting image.
In practical application, as shown in fig. 3, the smoke-flame detection network includes a global feature extraction network and a local feature extraction network, and in the specific design further includes a feature fusion network, a neck network, and a detection head. The input end of the global feature extraction network and the input end of the local feature extraction network together form the input end of the smoke-flame detection network; the global and local feature extraction networks each receive the forest shooting image, such as a 640×640×3 RGB three-channel image, and perform feature extraction. The outputs of global feature maps of three preset sizes in the series structure of the global feature extraction network form its three output ends, the outputs of local feature maps of three preset sizes in the series structure of the local feature extraction network form its three output ends, and the sizes of the global feature maps at the three output ends match, one to one, the sizes of the local feature maps at the three output ends of the local feature extraction network.
As shown in fig. 3, the three output ends of the global feature extraction network and the three output ends of the local feature extraction network are connected to the input ends of the feature fusion network, which fuses global and local feature maps of the same size to obtain three feature fusion maps of different sizes. The output end of the feature fusion network is connected through the neck network to the input end of the detection head; the detection head contains detection branches corresponding one to one to the sizes of the three feature fusion maps, each receiving and processing its feature fusion map, and outputs the forest-fire local images and suspicious-fog local images in the forest shooting image.
As shown in fig. 3, in practical application the global feature extraction network, local feature extraction network, feature fusion network, neck network, and detection head of the smoke-flame detection network are further specified as follows. The global feature extraction network comprises, in the direction of image flow, a first Conv layer, a first Conv+C3 layer, a second Conv+C3 layer, a third Conv+C3 layer, a fourth Conv+C3 layer, and a first SPPF layer connected in series; the input end of the first Conv layer forms the input end of the global feature extraction network, and the output end of the second Conv+C3 layer, the output end of the third Conv+C3 layer, and the output end of the first SPPF layer, corresponding respectively to the global feature maps of the three preset sizes, form the three output ends of the global feature extraction network. The Conv+C3 layers share the same structure: each comprises a Conv layer and a C3 layer connected in series in the direction of image flow, with the input end of the Conv layer forming the input end of the Conv+C3 layer and the output end of the C3 layer forming its output end.
As shown in fig. 3, the local feature extraction network comprises, in the direction of image flow, a second Conv layer of size 6×6, a first GhostConv+C3 layer, a second GhostConv+C3 layer, a third GhostConv+C3 layer, a fourth GhostConv+C3 layer, and a second SPPF layer connected in series; the input end of the second Conv layer forms the input end of the local feature extraction network, and the output end of the second GhostConv+C3 layer, the output end of the third GhostConv+C3 layer, and the output end of the second SPPF layer, corresponding respectively to the local feature maps of the three preset sizes, form the three output ends of the local feature extraction network. The GhostConv+C3 layers share the same structure: each comprises a GhostConv layer and a C3 layer connected in series in the direction of image flow, with the input end of the GhostConv layer forming the input end of the GhostConv+C3 layer and the output end of the C3 layer forming its output end.
In practical application, as shown in fig. 4, the structure of the feature fusion network is specifically designed as follows:
Firstly, the two input ends of the feature fusion network respectively receive a local feature map F1 from the local feature extraction network and a global feature map F2 of the same size from the global feature extraction network.
Next, F1 passes in turn through a first average-pooling AP layer and a third Conv layer of size 3×3, which aggregate the spatial information of the local feature map, and then through a first Softmax layer to obtain the weight w1 of F1; w1 and the global feature map F2 are fed to a first multiplicative fusion layer, whose element-wise multiplication yields the global fused feature map F′2. At the same time, F2 passes in turn through the first average-pooling AP layer and a fourth Conv layer of size 3×3, which aggregate the spatial information of the global feature map, and then through a second Softmax layer to obtain the weight w2 of F2; w2 and the local feature map F1 are fed to a second multiplicative fusion layer, whose element-wise multiplication yields the local fused feature map F′1.
Then the local fused feature map F′1 and the global fused feature map F′2 are concatenated by a first Concat layer and reduced in dimension by convolution in a fifth Conv layer of size 3×3, giving the concatenated fused feature map F3.
Finally, F3 is fed separately to a second average-pooling AP layer and a first max-pooling MP layer; their outputs are joined at the input end of a second Concat layer, whose output passes in series through a sixth Conv layer and a third Softmax layer; the output end of the third Softmax layer is connected to one input end of a third multiplicative fusion layer, whose other input end receives F3 itself, and the multiplication of the two gives the feature fusion map F4.
The processing of the feature fusion network is summarized as follows:

W_F1 = Softmax(f_3×3(AP(F1)))
F2' = W_F1 ⊗ F2
W_F2 = Softmax(f_3×3(AP(F2)))
F1' = W_F2 ⊗ F1
F3 = f_3×3([F1'; F2'])
F4 = F3 ⊗ Softmax(f_1×1([AP(F3); MP(F3)]))

In the above, F1 denotes the local feature map, F2 the global feature map, W_F1 and W_F2 the weights of F1 and F2, f_3×3 a convolution operation with a convolution kernel size of 3×3, f_1×1 a convolution operation with a convolution kernel size of 1×1, ⊗ the corresponding-element multiplication operation, [ ; ] the splicing (Concat) operation, AP the average pooling operation of the average pooling AP layer, and MP the maximum pooling operation of the maximum pooling MP layer.
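The cross fusion described above can be sketched in PyTorch as follows. The size-preserving pooling window, the spatial axis of the softmax, and the 1×1 size of the sixth Conv layer are assumptions of this sketch, not fixed by the text; the order of layers follows the description.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LGFFNFusion(nn.Module):
    """Sketch of the cross feature fusion: each branch weights the other,
    the weighted maps are spliced and reduced, and a pooled attention map
    rescales the result."""
    def __init__(self, c):
        super().__init__()
        self.ap = nn.AvgPool2d(3, stride=1, padding=1)  # first AP layer (size-preserving, assumed)
        self.conv3 = nn.Conv2d(c, c, 3, padding=1)      # third Conv layer, 3x3
        self.conv4 = nn.Conv2d(c, c, 3, padding=1)      # fourth Conv layer, 3x3
        self.conv5 = nn.Conv2d(2 * c, c, 3, padding=1)  # fifth Conv layer (dimension reduction)
        self.conv6 = nn.Conv2d(2, 1, 1)                 # sixth Conv layer (1x1 assumed)

    @staticmethod
    def spatial_softmax(x):
        b, c, h, w = x.shape
        return F.softmax(x.flatten(2), dim=-1).view(b, c, h, w)

    def forward(self, f1, f2):
        f2p = self.spatial_softmax(self.conv3(self.ap(f1))) * f2   # F2' = W_F1 (.) F2
        f1p = self.spatial_softmax(self.conv4(self.ap(f2))) * f1   # F1' = W_F2 (.) F1
        f3 = self.conv5(torch.cat([f1p, f2p], dim=1))              # F3 after splicing + reduction
        pooled = torch.cat([f3.mean(1, keepdim=True),
                            f3.amax(1, keepdim=True)], dim=1)      # [AP(F3); MP(F3)] over channels
        return f3 * self.spatial_softmax(self.conv6(pooled))       # F4
```

The output keeps the spatial size and channel count of its inputs, so one such module can sit at each of the three pyramid levels.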
In a practical implementation, taking as input a 640×640×3 RGB three-channel forest shooting image, the local feature extraction network processes as follows: the second Conv layer, of size 6×6 with a stride of 2, first yields a 320×320×64 feature map; the first to fourth GhostConv+C3 layers then yield feature maps of sizes 160×160×128, 80×80×256, 40×40×512 and 20×20×1024 in turn, the 20×20×1024 map finally passing through the second SPPF layer, where the GhostConv layer in each GhostConv+C3 layer has a size of 3×3 and a stride of 2. Similarly, the output end of the second Conv+C3 layer, the output end of the third Conv+C3 layer and the first SPPF layer in the global feature extraction network output feature maps of sizes 80×80×256, 40×40×512 and 20×20×1024 in turn. According to the designs above, the feature fusion network then fuses the equally sized feature maps output by the global feature extraction network and the local feature extraction network, obtaining feature fusion maps of sizes 80×80×256, 40×40×512 and 20×20×1024.
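The spatial sizes listed above can be verified with simple stride arithmetic, assuming each of the five listed layers (the second Conv layer followed by the four GhostConv+C3 layers) halves the resolution:

```python
def trace_sizes(size=640, n_stride2_layers=5):
    """Spatial size after each stride-2 layer of the backbone
    (second Conv layer followed by four GhostConv+C3 layers)."""
    sizes = []
    for _ in range(n_stride2_layers):
        size //= 2
        sizes.append(size)
    return sizes

# trace_sizes() -> [320, 160, 80, 40, 20]; the last three entries are the
# 80/40/20 pyramid levels output by the 2nd and 3rd GhostConv+C3 layers
# and the SPPF layer.
```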
Regarding the smoke flame detection network with the designed MC-YOLOv5s (Multi-Channel YOLOv5s) structure: in practical application, it is trained in the following manner, based on a Discriminator, a preset number of forest fire sample images in which fire occurs, a preset number of forest fog sample images in which smoke or cloud occurs, the known forest fire local areas and suspicious fog local images in each forest fire sample image, and the fog feature image corresponding to each known forest fog sample image, to obtain a trained smoke flame detection network that takes a forest shooting image as input and outputs the forest fire local images and suspicious fog local images in the forest shooting image.
According to each forest fire sample image, the MC-YOLOv5s network is trained with the forest fire sample image as input and the forest fire local image and suspicious fog local image in that sample image as output. Meanwhile, according to each forest fog sample image, based on the adversarial network constituted by the local feature extraction network in the MC-YOLOv5s network and the Discriminator, the local feature extraction network is trained with the forest fog sample image as input and the fog feature image corresponding to that sample image as output, in combination with the loss function of the adversarial network. The trained MC-YOLOv5s network thus obtained constitutes the pre-trained smoke flame detection network.
Regarding the simultaneous training of the MC-YOLOv5s network and of the local feature extraction network within it, the loss function L_adv corresponding to the adversarial network constituted by the local feature extraction network and the Discriminator is as follows:

L_adv = E[log(1 − D(G(I_g)))] + E[log D(G(I_t))]

The Discriminator is trained to maximize L_adv, while the local feature extraction network in the MC-YOLOv5s network is trained to minimize log D(G(I_t)). Here E denotes the expected value over the distribution, I_t a forest fire sample image and I_g a forest fog sample image; G denotes the operation of the local feature extraction network in the MC-YOLOv5s network and D the operation of the Discriminator. G(I_t) is the fog feature image generated from a forest fire sample image by the local feature extraction network, and G(I_g) the fog feature image generated from a forest fog sample image, with G(I_g) carrying the label 0 and G(I_t) the label 1 in the Discriminator. D(G(I_t)) is the probability that the Discriminator judges the fog feature image from a forest fire sample image to be 1, and 1 − D(G(I_g)) the probability that it judges the fog feature image from a forest fog sample image to be 0. As training proceeds, the ability of the local feature extraction network to extract fog feature information from forest fire data grows stronger.
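A minimal numerical sketch of the loss L_adv above, for hypothetical Discriminator outputs; the batch average stands in for the expectation E:

```python
import math

def adversarial_loss(d_on_fire, d_on_fog):
    """L_adv = E[log(1 - D(G(I_g)))] + E[log D(G(I_t))].

    d_on_fire: Discriminator outputs for fog features from fire images (label 1).
    d_on_fog:  Discriminator outputs for fog features from fog images  (label 0).
    Both are lists of probabilities in (0, 1); averages approximate E."""
    e_fog = sum(math.log(1.0 - p) for p in d_on_fog) / len(d_on_fog)
    e_fire = sum(math.log(p) for p in d_on_fire) / len(d_on_fire)
    return e_fog + e_fire
```

A Discriminator that classifies both sources correctly (outputs near 1 on fire-derived features, near 0 on fog-derived features) drives L_adv toward its maximum of 0; an uncertain Discriminator (outputs near 0.5) leaves L_adv near 2·log(0.5).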
In practical applications, the Discriminator structure is specifically designed to include, in order from its input end to its output end, a seventh Conv layer, an eighth Conv layer, a ninth Conv layer, and a first fully-connected FC layer connected in series.
Step B, based on the pre-trained smoke classification network, which takes as input a suspicious fog local image and the preset meteorological element values (temperature, humidity, wind direction, wind speed, atmospheric pressure and precipitation) of the forest area at the moment the suspicious fog local image was captured, and outputs whether the suspicious fog is smoke or cloud: the suspicious fog local image is processed and analysed together with those meteorological element values. If the suspicious fog is judged to be smoke, it is determined that a fire has occurred in the forest area corresponding to the forest shooting image and an alarm is triggered; if the suspicious fog is judged to be cloud, it is determined that no fire has occurred in the forest area corresponding to the forest shooting image.
In practical application, as shown in fig. 2, the smoke classification network is specifically designed to include a third Concat layer, a second fully-connected FC layer and two channels. One of the channels comprises seven Conv layers of size 3×3 connected in series in sequence from its input end to its output end, and the input end of this channel forms one input end of the smoke classification network, receiving the suspicious fog local image; the other channel comprises two Conv layers connected in series in sequence from its input end to its output end, and the input end of this channel forms the other input end of the smoke classification network, receiving the preset meteorological element values of the forest area at the moment the suspicious fog local image was captured. The output ends of the two channels are connected to the input end of the third Concat layer, which splices the outputs of the two channels with the azimuth angle of the camera corresponding to the forest shooting image to which the suspicious fog local image belongs; the output end of the third Concat layer is connected to the input end of the second fully-connected FC layer, whose output end forms the output end of the smoke classification network.
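The two-channel structure just described can be sketched in PyTorch as follows. The channel widths, the crop size of the suspicious fog local image and the pooling/flattening steps are assumptions of this sketch; the seven-plus-two Conv layers, the third Concat layer splicing in the camera azimuth, and the second fully-connected FC layer follow the text.

```python
import torch
import torch.nn as nn

class SmokeClassifier(nn.Module):
    """Sketch of the two-channel smoke classification network (SCN)."""
    def __init__(self):
        super().__init__()
        # Image branch: seven 3x3 Conv layers over the suspicious-fog crop
        # (64x64 crop size and channel widths are assumptions).
        chans = [3, 16, 32, 32, 64, 64, 128, 128]
        layers = []
        for cin, cout in zip(chans, chans[1:]):
            layers += [nn.Conv2d(cin, cout, 3, 2, 1), nn.ReLU()]
        self.img = nn.Sequential(*layers, nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Weather branch: two 3x3 Conv layers over the 6x21x21 WRF tensor.
        self.met = nn.Sequential(
            nn.Conv2d(6, 16, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fc = nn.Linear(128 + 32 + 1, 2)  # +1 for the camera azimuth

    def forward(self, img, met, azimuth):
        # Third Concat layer: image features, weather features, azimuth.
        z = torch.cat([self.img(img), self.met(met), azimuth], dim=1)
        return self.fc(z)  # smoke-vs-cloud logits
```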
Regarding the preset meteorological element values (temperature, humidity, wind direction, wind speed, atmospheric pressure and precipitation) of the forest area at the moment the suspicious fog local image was captured: in practical application, for the forest shooting image to which the suspicious fog local image belongs, grid data with a spatial resolution of 3 km × 3 km is adopted; each meteorological element value at each grid point is extracted within a 64 km rectangular area centred on the camera, and the values representing the forest shooting image, i.e. the meteorological element values of the forest area, are obtained by averaging. With the six meteorological elements of temperature, humidity, wind direction, wind speed, atmospheric pressure and precipitation involved, the resulting WRF meteorological element tensor has a total dimension of 6×21×21, on which convolution is performed by the two serial Conv layers of size 3×3.
Applying the forest fire detection method based on the cascade network in practice, three indexes are used to evaluate the accuracy of the MC-YOLOv5s network: the mean average precision (mAP), the precision and the recall, calculated as follows:

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
mAP = (1/N) Σ AP_i, i = 1, …, N

where TP denotes true positives, TN true negatives, FP false positives and FN false negatives, AP_i the average precision of class i, and N the number of classes corresponding to the MC-YOLOv5s network output. The experiments of the invention use an Intel i9-10900X processor, 32 GB of memory and an NVIDIA GeForce GTX 2080Ti GPU, running on the Ubuntu 18.04 operating system; the programming language is Python 3.7 and the deep learning framework is PyTorch. In the experiments, 100 training iterations are performed, SGD is used as the optimizer, the batch size is set to 16, the input picture size is 640×640, and the learning rate is set to 0.01.
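The three evaluation indexes above reduce to the following elementary computations (a sketch; the per-class AP values themselves come from the detector's precision-recall curves):

```python
def precision(tp, fp):
    # Precision = TP / (TP + FP): fraction of detections that are correct.
    return tp / (tp + fp)

def recall(tp, fn):
    # Recall = TP / (TP + FN): fraction of ground-truth objects detected.
    return tp / (tp + fn)

def mean_average_precision(ap_per_class):
    # mAP: mean of the per-class average precision values AP_i.
    return sum(ap_per_class) / len(ap_per_class)
```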
The key to forest fire detection is extracting the key features of flame and smoke from forest fire data. Because the early manifestation of a forest fire is mainly rising smoke, accurate extraction of smoke features is an important means of early discovery and prevention of forest fires. For this purpose, the MC-YOLOv5s module adopts a dual-channel structure. In contrast to the network based on YOLOv5s alone, this architecture accounts for both global and local features. Through the five-layer convolution operations, the MC-YOLOv5s network can extract a wide variety of features: most are fine-grained, some focus on edges, and some on the whole. Fig. 5 displays the feature visualization results for a forest fire image after the convolution operations.
In MC-YOLOv5s, the invention designs, on top of the classical YOLOv5s network, a local feature extraction network based on a generative adversarial structure and a cross feature fusion module LGFFN. To verify the positive roles of the GhostModel (GM) module and the Discriminator (DIS) module in the local feature extraction network, and of the feature fusion network LGFFN, we add in turn the GM module (GM scheme), the Discriminator module (GM-DIS scheme) and the LGFFN module (GM-DIS-LGFFN scheme) to the original YOLOv5s network model and perform ablation experiments. The results of the ablation experiments are shown in Table 1, and the evaluation indexes and the network training convergence process of the ablation experiments are shown in fig. 6.
TABLE 1

Method                GhostModel (GM)   Discriminator (DIS)   LGFFN   Accuracy   Recall
Classical YOLOv5s     -                 -                     -       91.26%     83.84%
GM scheme             yes               -                     -       91.46%     83.65%
GM-DIS scheme         yes               yes                   -       92.64%     83.26%
GM-DIS-LGFFN scheme   yes               yes                   yes     94.06%     93.14%
Experimental results show that the design of the local feature extraction network and the feature fusion network LGFFN in MC-YOLOv5s has a remarkable improvement effect on the accuracy of forest fire detection tasks. By cross fusion of global and local characteristic information, the model can better utilize the characteristic information of flame and smoke targets, obtain more outstanding characteristic information and improve detection precision.
In order to verify the effectiveness of the MC-YOLOv5s forest fire detection algorithm, the invention carries out comparison experiments against other target detection algorithms, including the single-shot multibox detector SSD, the YOLO series of target detection networks and Faster R-CNN. The experimental results are shown in Table 2: compared with the other algorithms, the forest fire detection accuracy of MC-YOLOv5s is higher by 15.8%, 5.54%, 5.6%, 5.43%, 5.97%, 2.8% and 2.68% respectively. This is because the MC-YOLOv5s algorithm employs the local feature extraction network LFEN and the feature cross fusion network LGFFN to extract features, in which the GhostModel module and the Discriminator module interact to extract more complete smoke target feature information. The LGFFN exploits the global and local feature information in the forest fire data to obtain more prominent forest fire target feature information, thereby guaranteeing detection accuracy. In addition, the frame rate of the MC-YOLOv5s algorithm reaches 101 frames per second, enabling real-time, smooth detection. The detection effect of the MC-YOLOv5s algorithm is shown in fig. 8, where a smoke box marks a detected smoke target and a fire box marks a detected flame target.
TABLE 2

Model                 Accuracy   Recall   Frame rate (FPS)
SSD                   78.26%     71.1%    22
YOLOv3                88.52%     78.02%   36
YOLOv4                88.46%     78.56%   35
YOLOv4-tiny           88.63%     79.62%   42
Faster R-CNN          88.09%     85.91%   28
YOLOv5s               91.26%     83.84%   80
YOLOv5s-Transformer   91.05%     86.36%   71
MC-YOLOv5s            94.06%     93.14%   101
As shown in fig. 7, which presents the experimental results of the SCN network in the embodiment of the invention, smoke and cloud classification was performed after the WRF meteorological elements were added to the smoke and cloud classification network model SCN. It can be seen that after the WRF meteorological elements are added, the classification accuracy and loss of the model improve markedly, demonstrating the importance of meteorological elements for smoke and cloud classification.
The embodiments of the present invention have been described in detail with reference to the drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the present invention.

Claims (8)

1. A forest fire detection method based on a cascade network is characterized by comprising the following steps of: the method comprises the following steps of A to B, and forest fire detection of forest shooting images is achieved;
step A, based on a pre-trained smoke flame detection network of MC-YOLOv5s structure comprising a global feature extraction network and a local feature extraction network, which takes the forest shooting image as input and the forest fire local images and suspicious fog-like local images in the forest shooting image as output, processing and analysing the forest shooting image, and judging whether a forest fire local image exists in the forest shooting image; if yes, judging that a fire has occurred in the forest area corresponding to the forest shooting image and triggering an alarm; otherwise, if a suspicious fog-like local image exists, entering step B; if no suspicious fog-like local image exists, judging that no fire has occurred in the forest area corresponding to the forest shooting image;
the smoke flame detection network also comprises a feature fusion network, a neck network and a detection head; the input end of the global feature extraction network is connected with the input end of the local feature extraction network to form the input end of the smoke flame detection network, and the global feature extraction network and the local feature extraction network respectively receive forest shooting images and perform feature extraction processing; the outputs of the preset three types of size global feature images in the global feature extraction network serial structure form three output ends of the global feature extraction network, the outputs of the preset three types of size local feature images in the local feature extraction network serial structure form three output ends of the local feature extraction network, and the sizes of the global feature images corresponding to the three output ends of the global feature extraction network are the same as the sizes of the local feature images corresponding to the three output ends of the local feature extraction network in a one-to-one correspondence manner; the three output ends of the global feature extraction network and the three output ends of the local feature extraction network are respectively connected with the input ends of the feature fusion network, and the feature fusion network performs feature fusion processing on the global feature images and local feature images of the same size from the two networks, obtaining three types of feature fusion images of the same sizes; the output end of the feature fusion network is connected with the input end of the detection head through the neck network, and the detection frames contained in the detection head, corresponding one-to-one to the sizes of the three feature fusion graphs output by the feature fusion network, respectively receive and process the corresponding feature fusion graphs and output the forest fire partial images and suspicious mist partial images in the forest shooting images;
step B, based on a pre-trained smoke classification network, which takes as input a suspicious fog-like local image and the preset meteorological element values of the forest area at the moment the suspicious fog-like local image was captured, and outputs whether the suspicious fog is smoke or cloud, processing and analysing the suspicious fog-like local image together with the preset meteorological element values of the forest area at the moment the suspicious fog-like local image was captured; if the suspicious fog is judged to be smoke, judging that a fire has occurred in the forest area corresponding to the forest shooting image and triggering an alarm; if the suspicious fog is judged to be cloud, judging that no fire has occurred in the forest area corresponding to the forest shooting image.
2. The forest fire detection method based on the cascade network according to claim 1, wherein the method comprises the following steps: the global feature extraction network comprises a first Conv layer, a first Conv+C3 layer, a second Conv+C3 layer, a third Conv+C3 layer, a fourth Conv+C3 layer and a first SPPF layer which are sequentially connected in series according to the image receiving and transmitting direction, wherein the input end of the first Conv layer forms the input end of the global feature extraction network, and the output end of the second Conv+C3 layer, the output end of the third Conv+C3 layer and the output end of the first SPPF layer respectively correspond to the output ends of the preset three types of global feature graphs in the global feature extraction network series structure to form three output ends of the global feature extraction network; the structure of each Conv+C3 layer is communicated with each other, each Conv+C3 layer comprises Conv layers and C3 layers which are connected in series according to the image receiving and transmitting direction, the input end of the Conv layer in the Conv+C3 layers forms the input end of the Conv+C3 layer, and the output end of the C3 layer in the Conv+C3 layers forms the output end of the Conv+C3 layer.
3. The forest fire detection method based on the cascade network according to claim 1, wherein the method comprises the following steps: the local feature extraction network comprises a second Conv layer, a first GhostConv+C3 layer, a second GhostConv+C3 layer, a third GhostConv+C3 layer, a fourth GhostConv+C3 layer and a second SPPF layer which are sequentially connected in series according to the image receiving and transmitting direction, wherein the input end of the second Conv layer forms the input end of the local feature extraction network, and the output end of the second GhostConv+C3 layer, the output end of the third GhostConv+C3 layer and the output end of the second SPPF layer respectively correspond to the local feature images with three preset sizes in the local feature extraction network series structure form three output ends of the local feature extraction network; the structures of the GhostConv+C3 layers are mutually communicated, each GhostConv+C3 layer comprises a GhostConv layer and a C3 layer which are connected in series according to the image receiving and transmitting direction, the input end of the GhostConv layer in the GhostConv+C3 layer forms the input end of the GhostConv+C3 layer, and the output end of the C3 layer in the GhostConv+C3 layer forms the output end of the GhostConv+C3 layer.
4. The forest fire detection method based on the cascade network according to claim 1, wherein: in the structure of the feature fusion network, firstly, the two input ends of the feature fusion network respectively receive, from the local feature extraction network and the global feature extraction network, a local feature map F1 and a global feature map F2 of the same size;
next, the local feature map F1 passes in sequence through a first average pooling AP layer and a third Conv layer, which aggregate the spatial information of the local feature map, and then through a first Softmax layer to obtain the weight W_F1 of the local feature map F1; the weight W_F1 and the global feature map F2 are delivered to a first multiplication fusion layer for multiplication processing, obtaining a global fusion feature map F2';
at the same time, the global feature map F2 passes in sequence through the first average pooling AP layer and a fourth Conv layer, which aggregate the spatial information of the global feature map, and then through a second Softmax layer to obtain the weight W_F2 of the global feature map F2; the weight W_F2 and the local feature map F1 are delivered to a second multiplication fusion layer for multiplication processing, obtaining a local fusion feature map F1';
then, the local fusion feature map F1' and the global fusion feature map F2' are delivered to a first Concat layer for splicing, and convolutional dimension reduction is performed by a fifth Conv layer to obtain a spliced fusion feature map F3;
finally, the spliced fusion feature map F3 is delivered separately to a second average pooling AP layer and a first maximum pooling MP layer for processing; the output ends of the second average pooling AP layer and the first maximum pooling MP layer are connected to the input end of a second Concat layer; the output end of the second Concat layer is connected in series with a sixth Conv layer and a third Softmax layer in sequence; the output end of the third Softmax layer is connected to one input end of a third multiplication fusion layer, whose other input end simultaneously receives the spliced fusion feature map F3, and multiplying the two obtains a feature fusion map F4.
5. A cascade network-based forest fire detection method as recited in any one of claims 1 to 4, wherein: based on a Discriminator, a preset number of forest fire sample images in which fire occurs and a preset number of forest fog sample images in which smoke or cloud occurs, the known forest fire local areas and suspicious fog-like local images in each forest fire sample image, and the fog feature images respectively corresponding to the known forest fog sample images, training is performed in the following manner:
according to each forest fire sample image, training an MC-YOLOv5s network by taking the forest fire sample image as input and taking a forest fire local image and a suspicious fog local image in the forest fire sample image as output;
meanwhile, according to each forest fog sample image, based on the adversarial network constituted by the local feature extraction network in the MC-YOLOv5s network and the Discriminator, training is carried out on the local feature extraction network in the MC-YOLOv5s network with the forest fog sample image as input and the fog feature image corresponding to the forest fog sample image as output, according to the loss function of the adversarial network; the trained MC-YOLOv5s network thus obtained constitutes the pre-trained smoke flame detection network.
6. The cascade network-based forest fire detection method as claimed in claim 5, wherein: the discriminator comprises a seventh Conv layer, an eighth Conv layer, a ninth Conv layer and a first fully-connected FC layer which are connected in series in sequence from the input end to the output end of the discriminator.
7. The forest fire detection method based on the cascade network according to claim 1, wherein the method comprises the following steps: the smoke classification network comprises a third Concat layer, a second full-connection FC layer and two channels;
one of the channels comprises seven Conv layers which are sequentially connected in series from the input end to the output end of the channel, and the input end of the channel forms one input end of the smoke classification network and is used for receiving suspicious mist partial images; the other channel comprises two Conv layers which are sequentially connected in series from the input end to the output end of the channel, and the input end of the channel forms the other input end of the smoke classification network and is used for receiving preset meteorological element values of a forest area when the suspicious fog partial image corresponds to shooting; the output ends of the two channels are butted with the input end of a third Concat layer, the third Concat layer performs splicing processing on the output from the two channels and the azimuth angle of a camera corresponding to the forest shooting image to which the suspicious fog partial image belongs, the output end of the third Concat layer is butted with the input end of a second full-connection FC layer, and the output end of the second full-connection FC layer forms the output end of the smoke classification network.
8. The forest fire detection method based on the cascade network according to claim 1, wherein the method comprises the following steps: the preset meteorological elements comprise temperature, humidity, wind direction, wind speed, atmospheric pressure and precipitation.
CN202310685352.4A 2023-06-12 2023-06-12 Forest fire detection method based on cascade network Active CN116503715B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310685352.4A CN116503715B (en) 2023-06-12 2023-06-12 Forest fire detection method based on cascade network


Publications (2)

Publication Number Publication Date
CN116503715A CN116503715A (en) 2023-07-28
CN116503715B true CN116503715B (en) 2024-01-23

Family

ID=87318551


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117609942A (en) * 2023-11-22 2024-02-27 中山大学 Estimation method and system for tropical cyclone movement path

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101237089B1 (en) * 2011-10-12 2013-02-26 계명대학교 산학협력단 Forest smoke detection method using random forest classifier method
CN110503021A (en) * 2019-08-19 2019-11-26 温州大学 Fire hazard smoke detecting method based on time compression track characteristic identification
CN111091072A (en) * 2019-11-29 2020-05-01 河海大学 YOLOv 3-based flame and dense smoke detection method
CN114330503A (en) * 2021-12-06 2022-04-12 北京无线电计量测试研究所 Smoke flame identification method and device
CN114998737A (en) * 2022-06-08 2022-09-02 徐州才聚智能科技有限公司 Remote smoke detection method, system, electronic equipment and medium
CN115170894A (en) * 2022-09-05 2022-10-11 深圳比特微电子科技有限公司 Smoke and fire detection method and device
CN115661611A (en) * 2022-11-14 2023-01-31 西安镭映光电科技有限公司 Infrared small target detection method based on improved Yolov5 network
CN115690564A (en) * 2022-11-18 2023-02-03 南京林业大学 Outdoor fire smoke image detection method based on Recursive BIFPN network
CN115761332A (en) * 2022-11-14 2023-03-07 深圳小湃科技有限公司 Smoke and flame detection method, device, equipment and storage medium
CN116152658A (en) * 2023-01-06 2023-05-23 北京林业大学 Forest fire smoke detection method based on domain countermeasure feature fusion network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113776165B (en) * 2021-09-10 2023-03-21 西安建筑科技大学 YOLOv5l algorithm-based multi-region artificial fog pipe network intelligent control method and system

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101237089B1 (en) * 2011-10-12 2013-02-26 계명대학교 산학협력단 Forest smoke detection method using random forest classifier method
CN110503021A (en) * 2019-08-19 2019-11-26 温州大学 Fire hazard smoke detecting method based on time compression track characteristic identification
CN111091072A (en) * 2019-11-29 2020-05-01 河海大学 YOLOv3-based flame and dense smoke detection method
CN114330503A (en) * 2021-12-06 2022-04-12 北京无线电计量测试研究所 Smoke flame identification method and device
CN114998737A (en) * 2022-06-08 2022-09-02 徐州才聚智能科技有限公司 Remote smoke detection method, system, electronic equipment and medium
CN115170894A (en) * 2022-09-05 2022-10-11 深圳比特微电子科技有限公司 Smoke and fire detection method and device
CN115661611A (en) * 2022-11-14 2023-01-31 西安镭映光电科技有限公司 Infrared small target detection method based on improved Yolov5 network
CN115761332A (en) * 2022-11-14 2023-03-07 深圳小湃科技有限公司 Smoke and flame detection method, device, equipment and storage medium
CN115690564A (en) * 2022-11-18 2023-02-03 南京林业大学 Outdoor fire smoke image detection method based on Recursive BIFPN network
CN116152658A (en) * 2023-01-06 2023-05-23 北京林业大学 Forest fire smoke detection method based on domain countermeasure feature fusion network

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A lightweight ship target detection model based on improved YOLOv5s algorithm; Yuanzhou Zheng et al.; PLOS ONE; Vol. 18, No. 4; 1-23 *
DBGA-Net: Dual-Branch Global–Local Attention Network for Remote Sensing Scene Classification; Jingming Xia et al.; IEEE Geoscience and Remote Sensing Letters; Vol. 20; 1-5 *
Real-Time Video Fire Detection via Modified YOLOv5 Network Model; Zongsheng Wu et al.; Fire Technology; 2377-2403 *
YOLOv5s-Fog: An Improved Model Based on YOLOv5s for Object Detection in Foggy Weather Scenarios; Xianglin Meng et al.; Sensors; 1-16 *
Research on aircraft target detection technology for remote sensing images based on YOLO models; Xu Baiqi; China Master's Theses Full-text Database, Engineering Science and Technology II; No. 2023(02); C028-463 *
Research on a fire smoke detection algorithm based on improved YOLOv5s; Cai Jing et al.; Intelligent Computer and Applications; Vol. 13, No. 5; 75-81 *

Also Published As

Publication number Publication date
CN116503715A (en) 2023-07-28

Similar Documents

Publication Publication Date Title
US20220197281A1 (en) Intelligent decision-making method and system for unmanned surface vehicle
Yuan et al. Fire detection using infrared images for UAV-based forest fire surveillance
Yuan et al. Vision-based forest fire detection in aerial images for firefighting using UAVs
CN116503715B (en) Forest fire detection method based on cascade network
Lestari et al. Fire hotspots detection system on CCTV videos using you only look once (YOLO) method and tiny YOLO model for high buildings evacuation
CN107577241B (en) Fire-fighting unmanned aerial vehicle track planning method based on obstacle avoidance system
CN112068111A (en) Unmanned aerial vehicle target detection method based on multi-sensor information fusion
CN105825207B (en) The high-voltage line detection method and device of fragmentation
CN106056624A (en) Unmanned aerial vehicle high-definition image small target detecting and tracking system and detecting and tracking method thereof
US11145089B2 (en) Method for measuring antenna downtilt based on multi-scale detection algorithm
CN111639825A (en) Method and system for indicating escape path of forest fire based on A-Star algorithm
Fan et al. Lightweight forest fire detection based on deep learning
CN116416576A (en) Smoke/flame double-light visual detection method based on V3-YOLOX
CN114998737A (en) Remote smoke detection method, system, electronic equipment and medium
Sadi et al. Forest fire detection and localization using thermal and visual cameras
Kinaneva et al. Application of artificial intelligence in UAV platforms for early forest fire detection
Meena et al. RCNN Architecture for Forest Fire Detection
CN107323677A (en) Unmanned plane auxiliary landing method, device, equipment and storage medium
Chandana et al. Autonomous drones based forest surveillance using Faster R-CNN
Zhang et al. Pyramid attention based early forest fire detection using UAV imagery
CN112836608A (en) Forest fire source estimation model training method, estimation method and system
CN114578846A (en) AGIMM tracking method based on maneuver detection sorting
Qiao et al. FireFormer: an efficient Transformer to identify forest fire from surveillance cameras
Kabir et al. Deep learning inspired vision based frameworks for drone detection
Ummah et al. A simple fight decision support system for BVR air combat using fuzzy logic algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant