CN114399734A - Forest fire early warning method based on visual information

Forest fire early warning method based on visual information

Info

Publication number
CN114399734A
CN114399734A (application CN202210049405.9A)
Authority
CN
China
Prior art keywords
smoke
forest
forest fire
detection model
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210049405.9A
Other languages
Chinese (zh)
Inventor
刘军清
熊小豪
李菁
康维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Three Gorges University CTGU
Original Assignee
China Three Gorges University CTGU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Three Gorges University CTGU filed Critical China Three Gorges University CTGU
Priority to CN202210049405.9A priority Critical patent/CN114399734A/en
Publication of CN114399734A publication Critical patent/CN114399734A/en
Pending legal-status Critical Current

Classifications

    • G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/24: Pattern recognition; classification techniques
    • G06F18/253: Pattern recognition; fusion techniques of extracted features
    • G06N3/045: Neural networks; combinations of networks
    • G06N3/08: Neural networks; learning methods
    • G08B17/005: Fire alarms for forest fires, e.g. detecting fires spread over a large or outdoor area
    • G08B17/10: Fire alarms actuated by presence of smoke or gases, e.g. automatic alarm devices analysing flowing fluid materials by optical means
    • G08B17/125: Fire alarms actuated by presence of radiation or particles, using a video camera to detect fire or smoke

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Emergency Management (AREA)
  • Business, Economics & Management (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Multimedia (AREA)
  • Fire-Detection Mechanisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a forest fire early warning method based on visual information, which comprises the following steps: inputting images from a dataset into a forest fire detection model for training; deploying monitoring cameras in the forest and numbering every camera; and feeding the video data acquired by the cameras into the trained model for classification, where detection of smoke triggers a smoke alarm that reports the number of the camera supplying the video, completing the forest fire early warning. The invention uses an attention mechanism to make the network more sensitive to small-scale smoke and to better extract the features of early forest fire smoke. The invention fuses a feature-level and decision-level classification detection module, improving the ability to distinguish smoke from similar objects, in particular from fog. Negative samples of fog are added to the dataset so that the network is suited to detecting smoke in foggy environments.

Description

Forest fire early warning method based on visual information
Technical Field
The invention belongs to the field of forest fire prediction, and particularly relates to a forest fire early warning method based on visual information.
Background
Forest fires break out some 220,000 times a year worldwide, with the disaster area reaching 1,000 hectares, causing immeasurable property losses and the deaths of rare animals; preventing forest fires is therefore an urgent matter. In the early stage of a fire, smoke almost always appears before open flame, so detecting smoke in time can effectively keep a fire from worsening.
For forest fires, the easiest method to realize is smoke detection through monitoring cameras. Traditional image-based forest smoke detection recognizes smoke by its shape, color, texture and motion characteristics: a foreground-extraction or motion-region-extraction method generates candidate smoke regions, hand-crafted features are then designed and smoke feature vectors extracted from those regions, and finally the extracted features are fed to a classifier to obtain the detection result. However, forest fire monitoring cameras cover a wide field of view and generally capture fire smoke from a long distance, so the smoke region is often small. Traditional image-based detection algorithms therefore perform poorly, miss many fires, and cannot be applied to multi-scene tasks.
In recent years, deep learning has developed rapidly, and extensive experiments have shown that deep-learning-based smoke detection is more effective and practical: training a neural network with a loss function on a large smoke dataset yields an efficient smoke detection model. Early on, researchers used AlexNet for feature extraction and obtained better feature maps than traditional algorithms, but the accuracy and miss rate were still unsatisfactory. As convolutional neural networks have matured, image feature extraction has become increasingly accurate; networks such as the VGG, R-CNN and YOLO series already meet many application requirements. Used directly as base networks for target detection, however, they perform poorly on smoke. The most important reason is that smoke in the early stage of a fire occupies a small fraction of the whole field of view and carries little pixel information, which makes it hard for a neural network to recognize. Second, real environments are complex, and recognizing fire smoke in foggy weather is a key problem. Finally, datasets specific to fire smoke detection are scarce, most public datasets are unlabeled, and individual samples are unrepresentative, so a network trained on such data cannot effectively detect early-stage smoke, which greatly inconveniences researchers.
Disclosure of Invention
The invention aims to provide a forest fire early warning method based on visual information, and the method is used for solving the problems in the prior art.
To achieve this purpose, the invention provides a forest fire early warning method based on visual information, which comprises the following steps:
establishing a data set to be tested and a forest fire detection model, inputting images in the data set to be tested into the forest fire detection model for training, and obtaining the trained forest fire detection model;
arranging monitoring cameras in a forest, and numbering all the monitoring cameras;
and inputting the video data acquired by the monitoring cameras into the trained forest fire detection model for classification; if the classification result is that smoke is detected, sending a smoke alarm and providing the serial number of the monitoring camera that supplied the video, thereby completing the forest fire early warning.
Optionally, the process of constructing the data set to be tested includes:
collecting forest smoke pictures and forest pictures with only fog, which are crawled from a network;
artificially manufacturing smoke with a forest as a background, remotely shooting by a camera, and storing shot videos into pictures with a uniform format frame by frame as forest smoke pictures;
randomly adding fog interference to part of the forest smoke picture to obtain a forest smoke picture with fog interference;
and synthesizing the forest smoke picture with fog interference and the forest picture with fog only into the data set to be detected.
Optionally, the process of constructing the data set to be tested further includes:
and marking the forest smoke picture with the fog interference to obtain the classification information of the smoke and the coordinate size information of the real frame.
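The marking step above yields, for each image, the smoke class and the coordinate and size information of the real frame. The patent does not name a label file format, so as an illustrative assumption the sketch below converts a pixel-space real frame to the normalized center/size text line used by YOLO-style tools:

```python
def to_yolo_label(box, img_w, img_h, class_id=0):
    """Convert a pixel-space real frame (x_min, y_min, x_max, y_max) to a
    normalized 'class cx cy w h' label line (YOLO text format, assumed here)."""
    x_min, y_min, x_max, y_max = box
    cx = (x_min + x_max) / 2.0 / img_w   # normalized box center
    cy = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / img_w          # normalized box size
    h = (y_max - y_min) / img_h
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"
```

For example, a 200 × 200 smoke frame at (100, 200) in a 640 × 640 image becomes a line of five normalized values.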
Optionally, the process of constructing the forest fire detection model includes:
and constructing the forest fire detection model by adopting a YOLOv5 network model, and performing feature extraction on the forest fire detection model through four convolutional layers.
Optionally, the process of inputting the images in the data set to be tested into the forest fire detection model for training includes:
preprocessing the image in the data set to be detected;
extracting the characteristics of the image after finishing the preprocessing by an attention mechanism, and extracting a characteristic graph;
and carrying out feature fusion and decision classification on the feature graph to obtain a final feature graph, and carrying out detection based on the final feature graph.
Optionally, the process of preprocessing the image in the data set to be detected includes:
performing mosaic data enhancement, adaptive anchor frame calculation and adaptive image scaling on the image in the data set to be detected;
and splicing the images in the data set to be detected by the mosaic data enhancement in a random scaling, random cutting and random arrangement mode.
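The mosaic splicing described above can be sketched on toy "images" represented as nested lists. Real mosaic augmentation also randomly scales and crops each tile; in this sketch only the arrangement is randomized, to keep the idea visible:

```python
import random

def mosaic_stitch(imgs, out_size=4, seed=0):
    """Toy mosaic data enhancement: place four equally sized images into a
    2 x 2 grid in a random arrangement. Random scaling and cropping of the
    tiles, part of the full technique, are omitted here for brevity."""
    rng = random.Random(seed)
    order = list(range(4))
    rng.shuffle(order)                     # random arrangement of the tiles
    half = out_size // 2
    out = [[0] * out_size for _ in range(out_size)]
    anchors = [(0, 0), (0, half), (half, 0), (half, half)]
    for (r0, c0), idx in zip(anchors, order):
        tile = imgs[idx]
        for r in range(half):
            for c in range(half):
                out[r0 + r][c0 + c] = tile[r][c]
    return out
```

Each quadrant of the output holds one of the four source images, so a single training sample exposes the network to four contexts at once, which is what helps small-scale detection.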
Optionally, the feature extraction is performed on the image after the preprocessing is finished through an attention mechanism, and the process of extracting the feature map includes:
finding out a smoke area in the image after finishing the preprocessing by spatial attention, and reducing the weight of other background areas;
and extracting the feature map by emphasizing the feature channel representing the smoke through channel attention and reducing the weight of other channels.
Optionally, the process of performing feature fusion and decision classification on the feature map includes:
acquiring data of a second convolution layer, a third convolution layer and a fourth convolution layer of the forest fire detection model, fusing the data to acquire fusion characteristics;
and classifying the fusion characteristics to obtain a final characteristic diagram.
The invention has the technical effects that:
(1) according to the invention, a CBAM attention mechanism is embedded in the Neck part of the YOLOv5 network, so that the network is more sensitive to small-scale smoke and the characteristics of early forest fire smoke are better extracted.
(2) The invention integrates the classification detection modules of characteristic level and decision level into the Backbone part of YOLOv5, thereby improving the discrimination capability of smoke and similar objects, in particular to the discrimination of the characteristics of smoke and fog.
(3) According to the invention, a negative sample of fog is added into the data set, so that the network is suitable for detecting the smoke in the foggy environment.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
fig. 1 is a network structure diagram of YOLOv5s in the embodiment of the present invention;
FIG. 2 is a structural diagram of YOLOv5 after CBAM is fused in the embodiment of the present invention;
FIG. 3 is a structural diagram of a CBAM in an embodiment of the present invention;
FIG. 4 is a block diagram of a fusion classification detection module according to an embodiment of the present invention;
fig. 5 is a structural diagram of a YOLOv5+ fusion classification detection module in the embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
Example one
The embodiment of the invention provides a forest fire early warning method based on visual information, which comprises the following steps:
(1) collecting forest smoke pictures crawled from a network, and converting the pictures into a fixed format.
(2) The smoke with the forest as the background is artificially manufactured, the smoke is shot by a camera at a long distance, and the shot video is stored into pictures with a uniform format frame by frame.
(3) And adding fog interference to a part of forest smoke pictures through a program.
(4) And combining the processed pictures and the fog-only pictures into a data set, and manually labeling all the images containing the smoke through labeling software to obtain the classification information of the smoke and the coordinate size information of the real frame.
(5) And building an improved YOLOv5 network model at the edge server. The improved YOLOv5 network was trained using the created dataset.
(6) Arrange monitoring cameras in each area of the forest and number each camera. For a mountain forest, one mountain can be monitored by 3 cameras placed 3 kilometers away at mountainside height, each with a 120-degree horizontal view angle, giving all-round coverage of the mountain; for a plain forest, the cameras are mounted on a forest lookout tower, again 3 cameras per tower, forming 360-degree surrounding surveillance.
(7) And receiving video data transmitted by the monitoring camera at the edge server, and sending the video data into the trained network model for classification.
(8) If the classification result is that smoke is detected, a smoke alarm is sent out, the serial number of the monitoring camera of the video source is provided, and then the step 6 is repeated; if no smoke is detected as a result of the classification, step 6 is repeated.
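Steps (6) to (8) amount to a polling loop on the edge server. A minimal sketch, with `classify` standing in for the trained detection model (an assumption of this sketch, not part of the patent):

```python
def monitor(cameras, classify):
    """Run the trained classifier on the latest frame from each numbered
    camera and emit an alarm carrying the camera number when smoke is found.
    `cameras` maps camera number -> latest frame; `classify` is a stand-in
    for the trained forest fire detection model."""
    alarms = []
    for cam_id, frame in cameras.items():
        if classify(frame) == "smoke":
            alarms.append(f"SMOKE ALARM: camera {cam_id}")
    return alarms
```

In deployment this loop would repeat continuously (the "repeat step 6" of the text); the sketch shows one pass.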
The model in the embodiment of the invention adopts the YOLOv5 network. YOLOv5 is a single-stage, end-to-end target detection framework with four versions, s, m, l and x, in order of increasing size; detection accuracy improves progressively across them, but detection speed drops. The YOLOv5 model is continuously updated; concretely, the invention uses version s of YOLOv5-5.0, because forest fire smoke detection requires a rapid response so that fire spread can be stopped in a short time. The structure of YOLOv5s is shown in fig. 1 and consists of four parts: an Input part, a Backbone part, a Neck part and a Detect part.
The Input part mainly comprises mosaic data enhancement, adaptive anchor frame calculation and adaptive image scaling. Mosaic data enhancement stitches images together by random scaling, random cropping and random arrangement, which serves small-scale detection well. Properly set anchor frames yield a higher intersection-over-union ratio, which helps model accuracy, and the prior frames learn shapes suited to different objects during training; the adaptive anchor frame design starts, for each dataset, from an anchor frame with initialized length and width. During training the network outputs prediction frames from the initial anchor frames, compares them with the real frames, computes the difference, and then updates the network parameters by backward iteration. Adaptive image scaling addresses the fact that, in practice, pictures differ in aspect ratio: after scaling and padding, the black borders at the two ends differ in size, and excessive padding adds redundant information that slows inference. The original image is therefore padded adaptively with the fewest possible black borders, reducing the borders at both ends of the image height, which cuts computation during inference and speeds up target detection.
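The adaptive image scaling just described can be sketched as a helper that computes the scaled size plus the minimal stride-aligned padding. The 640-pixel target and stride of 32 are the usual YOLOv5 defaults, assumed here since the text does not state them:

```python
def letterbox_dims(w, h, target=640, stride=32):
    """Adaptive image scaling: scale the long side to `target`, then pad
    the short side only up to the nearest multiple of `stride` instead of
    padding all the way to a square, minimizing black-border redundancy."""
    r = min(target / w, target / h)       # uniform scale preserving aspect
    new_w, new_h = round(w * r), round(h * r)
    pad_w = (-new_w) % stride             # minimal padding to stride multiple
    pad_h = (-new_h) % stride
    return new_w + pad_w, new_h + pad_h
```

A 1280 × 720 frame scales to 640 × 360 and pads the height only to 384 rather than to 640, saving computation exactly as the paragraph above argues.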
The Backbone part uses the CSP1 network structure to extract features, mainly to reduce the duplicated gradient information that arises in larger convolutional layers; in addition, the YOLOv5 series adds a Focus structure that YOLOv4 does not have.
The Neck part performs feature fusion, combining a Feature Pyramid Network (FPN) structure with a Path Aggregation Network (PAN) structure to realize up-sampling and down-sampling. The FPN is top-down and transmits and fuses information by up-sampling to obtain the predicted feature maps; the PAN adds a bottom-up feature pyramid. Concretely, the FPN up-samples the higher feature level by a factor of 2, from 19 x 19 to 38 x 38, using interpolation, while feature levels are connected laterally by 1 x 1 convolutions that change the channel count of the lower level. Unlike YOLOv4, whose Neck uses plain convolution operations, the Neck of YOLOv5 adopts the CSP2 structure, designed with reference to CSPNet, to strengthen the network's feature fusion capability.
The Detect part mainly comprises the bounding box loss function and NMS non-maximum suppression. The bounding box loss function is GIOU_loss, given by:
GIOU_loss = 1 - GIoU = 1 - ( IoU - |C \ (A ∪ B)| / |C| )

where A is the real frame, B is the predicted frame, and C is the smallest box enclosing both A and B.
Because the area of C left uncovered by the union of the real and predicted frames enters the formula, GIoU reflects the distance between the two frames even when they do not intersect; this effectively solves the problem that, for non-overlapping frames, plain IoU equals 0 and carries no distance information. NMS non-maximum suppression then keeps the best-scoring prediction boxes while discarding overlapping candidates with high IoU, making the model more accurate.
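The GIOU_loss above can be computed directly from box coordinates. A sketch following the formula, with boxes as (x1, y1, x2, y2) tuples:

```python
def giou_loss(box_p, box_t):
    """GIoU loss for axis-aligned boxes: 1 - IoU + |C \\ (A u B)| / |C|,
    where C is the smallest box enclosing predicted box A and real box B."""
    ax1, ay1, ax2, ay2 = box_p
    bx1, by1, bx2, by2 = box_t
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # smallest enclosing box C
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return 1.0 - iou + (c_area - union) / c_area
```

Identical boxes give a loss of 0, and the penalty term keeps the loss informative even for disjoint boxes, which plain IoU loss cannot do.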
In the early stage of a forest fire there is little smoke and it occupies a small fraction of the monitored field of view, so the features finally extracted by a multi-layer convolutional neural network are far weaker than those of the background; feature expression capability suffers, small-target detection degrades, and detecting small targets in both normal and foggy environments is a challenge. To solve this problem, the invention integrates spatial attention and channel attention into the YOLOv5 network, helping the network learn smoke features and reducing background interference. The network structure after fusing CBAM is shown in fig. 2: a CBAM module is placed before the convolutional layer of each of the three detection heads of different scales, so that it acts on targets of all scales.
The CBAM module structure is shown in fig. 3; it chains spatial attention and channel attention. Spatial attention assigns the highest weight to the region of the picture most likely to contain the object, i.e. the smoke region, while lowering the weights of the other background regions. During convolutional feature extraction the original picture becomes a multi-channel feature map; channel attention emphasizes the feature channels that represent smoke, giving them higher weight and reducing the weights of the other channels. The combined attention mechanism attends more closely to the smoke position, reduces background interference, and improves the accuracy of small-scale smoke detection.
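The channel-attention half of the mechanism above can be illustrated on toy feature maps (nested lists). This is a deliberately simplified sketch: the real CBAM passes the pooled values through a shared MLP before the sigmoid, which is omitted here:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps):
    """Toy channel attention in the spirit of CBAM: each channel is
    re-weighted by sigmoid(avg-pool + max-pool), so strongly responding
    channels (e.g. those representing smoke) are emphasized and weak
    ones are suppressed. CBAM's shared MLP is omitted for brevity."""
    out = []
    for ch in feature_maps:
        vals = [v for row in ch for v in row]
        avg_pool = sum(vals) / len(vals)
        max_pool = max(vals)
        weight = sigmoid(avg_pool + max_pool)   # per-channel attention weight
        out.append([[v * weight for v in row] for row in ch])
    return out
```

A flat (all-zero) channel keeps weight 0.5 and stays flat, while a strongly activated channel gets weight near 1 and passes through almost unchanged, which is the re-weighting behavior the paragraph describes.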
Fog greatly complicates smoke detection: because fog looks similar to smoke, the misjudgment rate is high, and fog degrades the quality of the pictures captured by the monitoring terminal, making feature extraction harder. To obtain more discriminative features and realize real-time classification, a fused feature-level and decision-level classification detection module is integrated into YOLOv5, giving the network the ability to distinguish smoke from fog and reducing the misjudgment rate. As shown in fig. 4, the feature fusion part applies feature-level fusion in a structure similar to a Feature Pyramid Network (FPN) to obtain three groups of highly discriminative features containing both detailed edge texture information and high-level semantic information of the smoke, improving the discrimination between smoke and similar objects.
Decision-level fusion is then performed on top of three classification layers: a convolutional layer, a dropout layer and a global pooling layer. The module has three inputs and one output; the second, third and fourth Conv blocks of YOLOv5 serve as the inputs, and the output feeds the SPP block. The first half is feature fusion: convolutional-layer information at different scales is taken out separately, which strengthens detection of targets at different scales, and also of targets, like smoke, with uneven distribution and blurred boundary contours. The three extracted multi-scale features are then fed into the following decision fusion module, where a classifier consisting of a convolutional layer, a dropout layer and a global pooling layer performs weighted fusion; the features at the three scales contain different target information and carry different importance in classification. The YOLOv5 network with the fused classification detection module added is shown in fig. 5.
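The weighted decision-level fusion can be sketched as combining per-scale classifier scores with per-scale weights. The weights below are placeholders; in the module described above they come from the trained classifier, so treat them as assumptions:

```python
def decision_fusion(scores_per_scale, weights):
    """Decision-level fusion: combine class-score vectors produced at
    several scales (e.g. from the second, third and fourth Conv stages)
    into one fused score vector using per-scale weights. The weights are
    placeholders standing in for values learned by the classifier."""
    n_classes = len(scores_per_scale[0])
    fused = [0.0] * n_classes
    for scores, w in zip(scores_per_scale, weights):
        for i, s in enumerate(scores):
            fused[i] += w * s
    return fused
```

Because each scale carries different target information, giving the scales different weights lets the fused decision lean on whichever scale is most informative for the smoke/fog distinction.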
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A forest fire early warning method based on visual information is characterized by comprising the following steps:
establishing a data set to be tested and a forest fire detection model, inputting images in the data set to be tested into the forest fire detection model for training, and obtaining the trained forest fire detection model;
arranging monitoring cameras in a forest, and numbering all the monitoring cameras;
and inputting the video data acquired by the monitoring cameras into the trained forest fire detection model for classification; if the classification result is that smoke is detected, sending a smoke alarm and providing the serial number of the monitoring camera that supplied the video, thereby completing the forest fire early warning.
2. The method of claim 1, wherein constructing the dataset under test comprises:
collecting forest smoke pictures and forest pictures with only fog, which are crawled from a network;
artificially manufacturing smoke with a forest as a background, remotely shooting by a camera, and storing shot videos into pictures with a uniform format frame by frame as forest smoke pictures;
randomly adding fog interference to part of the forest smoke picture to obtain a forest smoke picture with fog interference;
and synthesizing the forest smoke picture with fog interference and the forest picture with fog only into the data set to be detected.
3. The method of claim 2, wherein constructing the dataset under test further comprises:
and marking the forest smoke picture with the fog interference to obtain the classification information of the smoke and the coordinate size information of the real frame.
4. The method of claim 1, wherein constructing the forest fire detection model comprises:
and constructing the forest fire detection model by adopting a YOLOv5 network model.
5. The method of claim 1, wherein inputting the images in the dataset under test into the forest fire detection model for training comprises:
preprocessing the image in the data set to be detected;
extracting the characteristics of the image after finishing the preprocessing by an attention mechanism, and extracting a characteristic graph;
and carrying out feature fusion and decision classification on the feature graph to obtain a final feature graph, and carrying out detection based on the final feature graph.
6. The method of claim 5, wherein preprocessing the image in the dataset under test comprises:
performing mosaic data enhancement, adaptive anchor frame calculation and adaptive image scaling on the image in the data set to be detected;
and splicing the images in the data set to be detected by the mosaic data enhancement in a random scaling, random cutting and random arrangement mode.
7. The method according to claim 5, wherein the extracting the feature map by performing feature extraction on the image after finishing the preprocessing through an attention mechanism comprises:
finding out a smoke area in the image after finishing the preprocessing by spatial attention, and reducing the weight of other background areas;
and extracting the feature map by emphasizing the feature channel representing the smoke through channel attention and reducing the weight of other channels.
8. The method according to claim 4 or 5, wherein the process of performing feature fusion and decision classification on the feature map comprises:
acquiring data of a second convolution layer, a third convolution layer and a fourth convolution layer of the forest fire detection model, fusing the data to acquire fusion characteristics;
and classifying the fusion characteristics to obtain a final characteristic diagram.
CN202210049405.9A 2022-01-17 2022-01-17 Forest fire early warning method based on visual information Pending CN114399734A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210049405.9A CN114399734A (en) 2022-01-17 2022-01-17 Forest fire early warning method based on visual information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210049405.9A CN114399734A (en) 2022-01-17 2022-01-17 Forest fire early warning method based on visual information

Publications (1)

Publication Number Publication Date
CN114399734A true CN114399734A (en) 2022-04-26

Family

ID=81230243

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210049405.9A Pending CN114399734A (en) 2022-01-17 2022-01-17 Forest fire early warning method based on visual information

Country Status (1)

Country Link
CN (1) CN114399734A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115273017A (en) * 2022-04-29 2022-11-01 桂林电子科技大学 Traffic sign detection recognition model training method and system based on Yolov5
CN117253031A (en) * 2023-11-16 2023-12-19 应急管理部天津消防研究所 Forest fire monitoring method based on multi-element composite deep learning
CN117253031B (en) * 2023-11-16 2024-01-30 应急管理部天津消防研究所 Forest fire monitoring method based on multi-element composite deep learning
CN117876874A (en) * 2024-01-15 2024-04-12 西南交通大学 Forest fire detection and positioning method and system based on high-point monitoring video

Similar Documents

Publication Publication Date Title
CN112216049B (en) Construction warning area monitoring and early warning system and method based on image recognition
CN114399734A (en) Forest fire early warning method based on visual information
CN112084869B (en) Compact quadrilateral representation-based building target detection method
CN111951212A (en) Method for identifying defects of contact network image of railway
CN111414807B (en) Tidal water identification and crisis early warning method based on YOLO technology
CN111222478A (en) Construction site safety protection detection method and system
CN112287983B (en) Remote sensing image target extraction system and method based on deep learning
CN111582074A (en) Monitoring video leaf occlusion detection method based on scene depth information perception
Qiang et al. Forest fire smoke detection under complex backgrounds using TRPCA and TSVB
CN113033315A (en) Rare earth mining high-resolution image identification and positioning method
CN114708566A (en) Improved YOLOv 4-based automatic driving target detection method
Sun et al. IRDCLNet: Instance segmentation of ship images based on interference reduction and dynamic contour learning in foggy scenes
Zhao et al. FSDF: A high-performance fire detection framework
CN114463624A (en) Method and device for detecting illegal buildings applied to city management supervision
CN115880765A (en) Method and device for detecting abnormal behavior of regional intrusion and computer equipment
CN113052139A (en) Deep learning double-flow network-based climbing behavior detection method and system
CN113158963A (en) High-altitude parabolic detection method and device
CN112926426A (en) Ship identification method, system, equipment and storage medium based on monitoring video
CN116798118A (en) Abnormal behavior detection method based on TPH-yolov5
CN113902744B (en) Image detection method, system, equipment and storage medium based on lightweight network
CN114998686A (en) Smoke detection model construction method, device, equipment, medium and detection method
CN114359705A (en) Geological disaster monitoring method and device
CN111160255B (en) Fishing behavior identification method and system based on three-dimensional convolution network
CN114445726A (en) Sample library establishing method and device based on deep learning
CN112215122A (en) Fire detection method, system, terminal and storage medium based on video image target detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination