CN113688748B - Fire detection model and method - Google Patents

Fire detection model and method Download PDF

Info

Publication number
CN113688748B
CN113688748B (application CN202110998341.2A)
Authority
CN
China
Prior art keywords
picture
flame
fire
color information
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110998341.2A
Other languages
Chinese (zh)
Other versions
CN113688748A (en)
Inventor
严国建
李志强
王彬
杨阳
梁瑞凡
许璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WUHAN DAQIAN INFORMATION TECHNOLOGY CO LTD
Original Assignee
WUHAN DAQIAN INFORMATION TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WUHAN DAQIAN INFORMATION TECHNOLOGY CO LTD filed Critical WUHAN DAQIAN INFORMATION TECHNOLOGY CO LTD
Priority to CN202110998341.2A priority Critical patent/CN113688748B/en
Publication of CN113688748A publication Critical patent/CN113688748A/en
Application granted granted Critical
Publication of CN113688748B publication Critical patent/CN113688748B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Fire-Detection Mechanisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a fire detection model and method. The model improves on a general deep-learning target detection framework in two ways: a flame color information map input branch is added alongside the framework's original image input, and the loss function is replaced with Focal Loss. After learning and training, the model can process and analyze a picture to be detected and rapidly and accurately judge whether it contains flame, i.e. whether it is a real fire picture, thereby providing a basis for judging whether a fire has occurred.

Description

Fire detection model and method
Technical Field
The invention relates to the field of computer vision in artificial intelligence, in particular to a model and a method for realizing fire detection by identifying flames in an image through artificial intelligence.
Background
Fire has threatened life, property and the natural environment since ancient times. Detecting a fire effectively at its early stage is of positive significance for rescuing the people involved in time, eliminating the threat and reducing losses.
Fire sensors are widely used for fire detection. A fire sensor judges that a fire has occurred through built-in heat and smoke sensors and raises an alarm. However, fire detectors have limitations in practice. Because a fire detector works by contact detection, it can only identify a fire after the fire has burned for some time and produced a large amount of smoke, and it cannot perceive the size of the fire, its exact position or how it develops, so it is of no help in subsequent investigation of how the fire started. With the growth of video surveillance, streets, corridors, houses, warehouses and the like are now covered by cameras, so the occurrence and development of a fire can be detected from video images, firefighters can be warned and assisted in the fastest and best way to deal with the crisis, and false alarms and missed alarms can be reduced to the greatest extent.
In recent years, many scholars have studied digital image processing methods to detect fire in surveillance video; they detect fire through the red color of flame and its irregular moving shape, but such schemes are easily affected by wind and ambient light and adapt poorly to the environment. Other scholars perform fire detection by improving the general target detection network YOLOv3: they exploit the excellent target detection performance of deep learning and improve the algorithm's environmental adaptability through data training, but a general target detection network handles irregular targets such as flame only moderately well. Moreover, that improvement modifies only the network model without combining flame characteristics, and while the YOLO framework is known for its speed, its accuracy is difficult to bring up to practical application requirements.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a fire detection model and method. After learning and training, the model can process and analyze a picture to be detected and rapidly and accurately judge whether it contains flame, thereby providing a basis for judging whether a fire has occurred.
The technical scheme adopted to realize this aim is as follows: a fire detection model that improves on a deep-learning-based general target detection framework, the improvement comprising: adding a flame color information map input branch alongside the original image input of the general target detection framework, and replacing the loss function with Focal Loss.
In this technical solution, the flame color information map is generated according to the following formula set:

R(x,y) > R_mean
R(x,y) > G(x,y) > B(x,y)
0.25 ≤ G(x,y)/(R(x,y)+1) ≤ 0.65
0.05 ≤ B(x,y)/(R(x,y)+1) ≤ 0.45
0.20 ≤ B(x,y)/(G(x,y)+1) ≤ 0.60

wherein R(x,y) denotes the red component pixel value at coordinates (x,y) in the picture; R_mean denotes the average of the red component over the whole picture, i.e. R_mean = (1/K) Σ R(x,y), where K denotes the number of pixels in the picture; G(x,y) denotes the green component pixel value at (x,y); and B(x,y) denotes the blue component pixel value at (x,y).
The formula set was obtained by collecting the pixel values of many pixels in flame and non-flame areas of sample pictures, sorting the collected data, and finding the critical pixel values separating the flame areas from the non-flame areas. A pixel that simultaneously satisfies all the formulas keeps its RGB-to-gray converted value; otherwise its value is set to 0. The resulting grayscale image is the flame color information map.
In the above technical solution, the original image input branch and the flame color information map input branch each comprise a Backbone module whose main layer consists of 5 sub-modules built from conventional convolution layers and depth-separable convolution structures ("bottlenecks"). In the original image input branch, the 1st sub-module consists of two conventional convolution layers with 32 channels, the 2nd of two bottlenecks with 64 channels, the 3rd of three bottlenecks with 128 channels, and the 4th and 5th each of two bottlenecks with 256 channels plus one conventional convolution layer with 256 channels. The color information map branch has the same structure as the original image input branch, with every channel count correspondingly halved.
In addition, the invention provides a fire detection method based on the above fire detection model, comprising the following steps:
S1, training the fire detection model to obtain a trained fire detection model;
S2, taking the picture to be detected as the original image input and its flame color information map as the other branch input; the trained fire detection model judges by inference whether the picture contains flame; if flame is present, the flame coordinate position is output, from which the occurrence, position and size of the fire are judged; if no flame is present, no fire has occurred.
Further, the step S1 includes:
S1.1, collecting m real fire pictures as pictures to be trained, and n error-prone negative sample pictures;
S1.2, constructing a Backbone module and pre-training it on the ImageNet dataset;
S1.3, modifying the RefineDet network structure, replacing the original module with the Backbone pre-trained in step S1.2, and replacing the default loss function with Focal Loss;
S1.4, modifying the training strategy so that each training batch contains negative samples, and computing the positive and negative anchors over the whole batch of pictures together; other parameters and flows follow the RefineDet algorithm;
S1.5, obtaining the flame color information maps of the pictures to be trained and the negative sample pictures according to the formula set, using the pictures obtained in step S1.1;
S1.6, inputting the original pictures of the pictures to be trained and the negative samples into the original image input branch, and their flame color information maps into the other branch, for training, thereby obtaining a trained model.
The invention has the following advantages:
1. A color information map branch is added to the general target detection framework, so that this strong feature highlights flame characteristics;
2. The number of main-layer channels of the Backbone (base model) is designed to meet real-time monitoring requirements;
3. The training strategy is modified and Focal Loss (prior art) replaces the original loss function, so that pictures without flame targets can be added to the training process.
Drawings
FIG. 1 is a frame diagram of a fire detection model of the present invention.
FIG. 2 is a schematic diagram of the channel number structure of a depth separable convolution structure.
Fig. 3 is a schematic diagram of a structure of a Backbone main layer in an original input branch of the present invention.
Fig. 4 is a schematic diagram of a structure of a main layer of a backup in an input branch of a flame color information map according to the present invention.
Detailed Description
The invention will now be described in further detail with reference to the drawings and to specific examples.
As shown in fig. 1, the fire detection model of the present invention is based on a deep-learning general target detection framework of the prior art, comprising an original image input, a deep convolutional neural network and a loss function; its specific structure and implementation are not repeated here.
Compared with the general target detection framework used in the prior art, the fire detection model is improved in that it further includes a flame color information map input branch, and the original loss function is replaced with Focal Loss.
Because fires occur in complex environments, a pure digital image processing method can hardly achieve a good detection effect; and because the flame shape changes erratically, a general target detection network can hardly detect all the irregular flames. The invention combines the advantages of the two and adds a color information map to the general target detection network. The color information map of the image to be detected is generated according to formula set 1 below. In this embodiment, the pixel values of many (1000) pixels in flame and non-flame areas were collected and sorted to find the critical values of the formula set. A pixel that simultaneously satisfies all the formulas below keeps its RGB-to-gray converted value; otherwise its value is set to 0.
R(x,y) > R_mean
R(x,y) > G(x,y) > B(x,y)
0.25 ≤ G(x,y)/(R(x,y)+1) ≤ 0.65
0.05 ≤ B(x,y)/(R(x,y)+1) ≤ 0.45
0.20 ≤ B(x,y)/(G(x,y)+1) ≤ 0.60
(Formula set 1)
The generated grayscale image and the original image are input into the model together for training: the original image as one branch (the original image input branch) and the grayscale image (the flame color information map) as the other branch (the flame color information map input branch).
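The generation of the flame color information map can be sketched in a few lines of NumPy. This is an illustrative implementation of formula set 1, not the patented code; in particular, the patent does not specify which RGB-to-gray conversion is used, so the standard BT.601 luminance weights are assumed here.

```python
import numpy as np

def flame_color_map(img):
    """Sketch of formula set 1. img is an (H, W, 3) uint8 RGB array.
    Pixels satisfying all five conditions keep their grayscale value;
    all other pixels are set to 0."""
    rgb = img.astype(np.float64)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    r_mean = R.mean()  # average red component over the whole picture

    mask = (
        (R > r_mean)
        & (R > G) & (G > B)
        & (0.25 <= G / (R + 1)) & (G / (R + 1) <= 0.65)
        & (0.05 <= B / (R + 1)) & (B / (R + 1) <= 0.45)
        & (0.20 <= B / (G + 1)) & (B / (G + 1) <= 0.60)
    )
    # The patent only says "gray value converted from RGB"; the standard
    # ITU-R BT.601 luminance weights are assumed here.
    gray = 0.299 * R + 0.587 * G + 0.114 * B
    return np.where(mask, gray, 0.0).astype(np.uint8)
```

Flame-colored pixels (reddish, with red dominating green dominating blue within the ratio bands) survive as grayscale; everything else is zeroed, which is what makes the map a strong feature for the second input branch.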
Fig. 2 shows the depth-separable convolution structure. The Backbone of the conventional general target detection algorithm RefineDet was originally a VGG16 model, whose excellent performance makes it a common Backbone choice; but with the VGG16 Backbone, the RefineDet model can hardly meet the requirement of multi-channel real-time detection and performs poorly in practical deployment. To solve this problem, in this embodiment the original image input branch and the flame color information map input branch each include a Backbone module whose main layer comprises 5 sub-modules, each built from conventional convolution layers and depth-separable convolution structures (each depth-separable convolution structure is called a bottleneck; it is prior art and is not detailed here). As shown in fig. 3, in the original image input branch the 1st sub-module consists of two conventional convolution layers with 32 channels (Conv3-32), the 2nd of two bottlenecks with 64 channels (bottleneck-64), the 3rd of three bottlenecks with 128 channels (bottleneck-128), and the 4th and 5th each of two bottlenecks with 256 channels (bottleneck-256) plus one conventional convolution layer with 256 channels (Conv3-256). As shown in fig. 4, the color information map branch has the same structure as the original image input branch, with every channel count correspondingly halved.
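The motivation for building the sub-modules from depth-separable convolutions can be seen from a simple parameter count. The helper functions below are illustrative only: biases and any batch normalization are ignored, and the internal expansion of the bottleneck is not modeled, since the patent does not detail it.

```python
def standard_conv_params(c_in: int, c_out: int, k: int = 3) -> int:
    """Weight count of a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(c_in: int, c_out: int, k: int = 3) -> int:
    """Weight count of a depth-separable convolution: a k x k depthwise
    convolution (one filter per input channel) followed by a 1 x 1
    pointwise convolution that mixes the channels."""
    return k * k * c_in + c_in * c_out

# One 128-channel stage, as in the 3rd sub-module of the original branch:
std = standard_conv_params(128, 128)   # 147456 weights
sep = separable_conv_params(128, 128)  # 17536 weights, roughly 8.4x fewer
print(std, sep)
```

The roughly 8x reduction per layer, compounded with the halved channel counts of the color information map branch, is what lets the model approach multi-channel real-time speeds.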
The invention adopts a Backbone with a small number of channels: because flame detection has a single class (only flame is detected, whereas VGG16 is usually used to detect 20 or even hundreds of target classes) and the color information map is added as a strong-feature input, a small model with few channels can still meet the accuracy requirement, which improves detection efficiency.
In practical application, scenes such as bright decorative lights and sunsets were found to greatly affect fire detection precision and very easily cause false detections. Although the model was trained with considerable data, it is difficult to mitigate false positives in such scenes using only flame pictures for training. To solve this false detection problem, some negative sample pictures (lights, sunsets and the like, containing no flame) must be added to training; but negative sample pictures (with no flame target) cannot be added under the conventional training strategy of a detection network. For example, in the conventional strategy, although each training step uses 8 pictures together as 1 batch, the positive and negative anchors are computed for each picture separately, so a picture with no positive anchor (no flame target) cannot participate in training.
To this end, the present invention provides another preferred embodiment. Unlike the conventional training strategy, 25% negative sample pictures are added to the annotated flame dataset; these are all pictures of lights, sunsets and the like that are prone to false detection and occur frequently in practical application scenes. 25% (2) negative sample pictures are added to each training batch, and when computing the loss the positive and negative anchors are computed over the whole batch together, so that the flameless pictures also participate in training. Adding flameless pictures to training, however, unbalances the ratio of positive to negative anchors. Therefore the invention replaces the original loss function with Focal Loss, which solves the positive/negative anchor imbalance caused by the added flameless pictures. In a target detection algorithm, "candidate boxes" of various sizes and aspect ratios are drawn over the input picture according to pre-designed parameters; each box is judged for whether it contains the target to be detected, and a regression algorithm refines the target's exact position. Such a candidate box is called an anchor (or anchor box).
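Focal Loss itself is prior art (Lin et al., RetinaNet). A minimal per-anchor sketch of its binary form is shown below; the alpha = 0.25 and gamma = 2 values are the commonly used defaults, not values stated in the patent.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Per-anchor binary Focal Loss.

    p: predicted foreground (flame) probability per anchor,
    y: 1 for positive anchors, 0 for negative anchors.
    The (1 - p_t)**gamma factor down-weights easy, well-classified
    anchors, so the flood of easy negatives from flameless pictures
    does not dominate the loss."""
    p = np.asarray(p, dtype=np.float64)
    y = np.asarray(y)
    p_t = np.where(y == 1, p, 1.0 - p)            # prob of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# An easy negative (p = 0.01) contributes almost nothing; a hard
# negative (p = 0.9) still produces a large loss.
print(focal_loss([0.01, 0.9], [0, 0]))
```

This down-weighting is why the whole-batch anchor pooling above remains stable even when two of the eight pictures contribute only negative anchors.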
The fire detection method based on the above fire detection model comprises the following steps:
S1, training the fire detection model, where the training process is as follows:
S1.1, collect m real fire pictures as pictures to be trained, and n error-prone negative sample pictures. For example, collect open-source fire datasets and acquire fire images from the web by crawling, then annotate them with flame boxes, giving 17865 annotated pictures; then pick 5955 error-prone negative sample pictures from actual application scenes. An error-prone negative sample picture is a picture containing no flame that is nevertheless easily misdetected.
S1.2, construct the Backbone according to the channel counts of the Backbone module's main layer, and pre-train the Backbone model on the ImageNet dataset with reference to the VGG16 model.
S1.3, modify the RefineDet network structure, replace the original module with the Backbone pre-trained in step S1.2, and replace the default loss function with Focal Loss.
S1.4, modify the training strategy so that 25% of each training batch are negative samples, and compute the positive and negative anchors over the whole batch of pictures together; other parameters and flows follow the RefineDet algorithm.
S1.5, obtain the flame color information maps of the pictures to be trained and the negative sample pictures according to formula set 1, using the pictures obtained in step S1.1.
S1.6, input the original pictures of the pictures to be trained and the negative samples into the original image input branch, and their flame color information maps into the other branch (the flame color information map input branch), for training, thereby obtaining a trained model.
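The batch composition of step S1.4 can be sketched as follows. The function and argument names are illustrative; only the 8-picture batch with 25% (2) error-prone flameless negatives is taken from the description.

```python
import random

def make_batch(flame_pictures, negative_pictures,
               batch_size=8, neg_fraction=0.25):
    """Assemble one training batch as in step S1.4: 25% (2 of 8)
    error-prone flameless negative pictures, the rest annotated flame
    pictures. Downstream, the positive/negative anchors are pooled over
    the whole batch when computing the loss, so the flameless pictures
    also contribute."""
    n_neg = int(batch_size * neg_fraction)
    batch = (random.sample(negative_pictures, n_neg)
             + random.sample(flame_pictures, batch_size - n_neg))
    random.shuffle(batch)  # mix positives and negatives within the batch
    return batch
```

With the 17865 annotated and 5955 negative pictures of step S1.1, each epoch would repeatedly draw such batches; the exact sampling scheme (with or without replacement across the epoch) is not specified in the patent.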
S2, detect the picture to be detected and judge whether it is a real fire picture.
S2.1, obtain the flame color information map of the picture to be detected according to formula set 1.
S2.2, input the original picture of the picture to be detected into the original image input branch, and its flame color information map into the other branch (the flame color information map input branch), of the trained model for inference.
S2.3, the trained model outputs whether the picture contains flame; if so, it outputs the flame coordinate position, from which the occurrence of the fire, its position and the flame size are judged; if not, no fire has occurred.
According to fire characteristics, color information is used as an additional strong feature, and a model customized for fire detection is designed by combining it with a deep-learning target detection network. To meet the real-time requirement, a Backbone with fewer channels is customized to suit the small number of fire detection classes, raising inference speed from 52 ms to 31 ms at the small cost of 0.1 mAP (67.9 to 67.8), a remarkable trade-off. Meanwhile, to solve false detection of lights and the like, the training strategy is modified and the default loss function is replaced with Focal Loss so that error-prone negative samples can participate in training, raising the model mAP by 3.1 (67.8 to 70.9) and meeting the requirements of practical scenes.

Claims (2)

1. A fire detection method, comprising:
S1, training a fire detection model to obtain a trained fire detection model; the fire detection model improves on a deep-learning general target detection framework: a flame color information map input branch is added alongside the original image input of the general target detection framework, and the loss function is replaced with Focal Loss;
S2, taking the picture to be detected as the original image input and its flame color information map as the other branch input; the trained fire detection model judges by inference whether the picture contains flame; if flame is present, outputting the flame coordinate position, from which the occurrence, position and size of the fire are judged; if no flame is present, no fire has occurred;
the flame color information map is generated according to the following formula set:
R(x,y) > R_mean
R(x,y) > G(x,y) > B(x,y)
0.25 ≤ G(x,y)/(R(x,y)+1) ≤ 0.65
0.05 ≤ B(x,y)/(R(x,y)+1) ≤ 0.45
0.20 ≤ B(x,y)/(G(x,y)+1) ≤ 0.60
wherein R(x,y) denotes the red component pixel value at coordinates (x,y) in the picture; R_mean denotes the average of the red component over the whole picture, i.e. R_mean = (1/K) Σ R(x,y), where K denotes the number of pixels in the picture; G(x,y) denotes the green component pixel value at (x,y); and B(x,y) denotes the blue component pixel value at (x,y);
the formula set was obtained by collecting the pixel values of many pixels in flame and non-flame areas of pictures, sorting the collected data, and finding the critical pixel values separating the flame areas from the non-flame areas; a pixel that simultaneously satisfies the formula set keeps its RGB-to-gray converted value, otherwise its value is 0; the resulting grayscale image is the flame color information map;
the original image input branch and the flame color information map input branch each comprise a Backbone module, the main layer of which comprises 5 sub-modules built from conventional convolution layers and depth-separable convolution structures; in the original image input branch, the 1st sub-module consists of two conventional convolution layers with 32 channels, the 2nd of two bottlenecks with 64 channels, the 3rd of three bottlenecks with 128 channels, and the 4th and 5th each of two bottlenecks with 256 channels and one conventional convolution layer with 256 channels; the color information map branch has the same structure as the original image input branch, with every channel count correspondingly halved.
2. The fire detection method according to claim 1, wherein the step S1 includes:
S1.1, collecting m real fire pictures as pictures to be trained, and n error-prone negative sample pictures;
S1.2, constructing a Backbone module and pre-training it on the ImageNet dataset;
S1.3, modifying the RefineDet network structure, replacing the original module with the Backbone pre-trained in step S1.2, and replacing the default loss function with Focal Loss;
S1.4, modifying the training strategy so that each training batch contains negative samples, and computing the positive and negative anchors over the whole batch of pictures together; other parameters and flows follow the RefineDet algorithm;
S1.5, obtaining the flame color information maps of the pictures to be trained and the negative sample pictures according to the formula set, using the pictures obtained in step S1.1;
S1.6, inputting the original pictures of the pictures to be trained and the negative samples into the original image input branch, and their flame color information maps into the other input branch, into the fire detection model together for training, thereby obtaining a trained model.
CN202110998341.2A 2021-08-27 2021-08-27 Fire detection model and method Active CN113688748B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110998341.2A CN113688748B (en) 2021-08-27 2021-08-27 Fire detection model and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110998341.2A CN113688748B (en) 2021-08-27 2021-08-27 Fire detection model and method

Publications (2)

Publication Number Publication Date
CN113688748A CN113688748A (en) 2021-11-23
CN113688748B true CN113688748B (en) 2023-08-18

Family

ID=78583673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110998341.2A Active CN113688748B (en) 2021-08-27 2021-08-27 Fire detection model and method

Country Status (1)

Country Link
CN (1) CN113688748B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101679148B1 (en) * 2015-06-15 2016-12-06 동의대학교 산학협력단 Detection System of Smoke and Flame using Depth Camera
CN108009479A (en) * 2017-11-14 2018-05-08 中电数通科技有限公司 Distributed machines learning system and its method
CN108447219A (en) * 2018-05-21 2018-08-24 中国计量大学 System and method for detecting fire hazard based on video image
CN108830305A (en) * 2018-05-30 2018-11-16 西南交通大学 A kind of real-time fire monitoring method of combination DCLRN network and optical flow method
CN109376747A (en) * 2018-12-11 2019-02-22 北京工业大学 A kind of video flame detecting method based on double-current convolutional neural networks
CN109829920A (en) * 2019-02-25 2019-05-31 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN110135269A (en) * 2019-04-18 2019-08-16 杭州电子科技大学 A kind of fire image detection method based on blend color model and neural network
KR102067994B1 (en) * 2019-05-20 2020-01-20 한밭대학교 산학협력단 System for detecting flame of embedded environment using deep learning
WO2020086217A1 (en) * 2018-10-26 2020-04-30 Siemens Aktiengesellschaft Learning keypoints and matching rgb images to cad models
CN111311475A (en) * 2020-02-21 2020-06-19 广州腾讯科技有限公司 Detection model training method and device, storage medium and computer equipment
CN111444801A (en) * 2020-03-18 2020-07-24 成都理工大学 Real-time detection method for infrared target of unmanned aerial vehicle
CN111814638A (en) * 2020-06-30 2020-10-23 成都睿沿科技有限公司 Security scene flame detection method based on deep learning
CN112132090A (en) * 2020-09-28 2020-12-25 天地伟业技术有限公司 Smoke and fire automatic detection and early warning method based on YOLOV3
CN112396024A (en) * 2020-12-01 2021-02-23 杭州叙简科技股份有限公司 Forest fire alarm method based on convolutional neural network
CN112686276A (en) * 2021-01-26 2021-04-20 重庆大学 Flame detection method based on improved RetinaNet network
CN112836608A (en) * 2021-01-25 2021-05-25 南京恩博科技有限公司 Forest fire source estimation model training method, estimation method and system
CN112861635A (en) * 2021-01-11 2021-05-28 西北工业大学 Fire and smoke real-time detection method based on deep learning
CN112884090A (en) * 2021-04-14 2021-06-01 安徽理工大学 Fire detection and identification method based on improved YOLOv3
CN112906463A (en) * 2021-01-15 2021-06-04 上海东普信息科技有限公司 Image-based fire detection method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11132780B2 (en) * 2020-02-14 2021-09-28 Huawei Technologies Co., Ltd. Target detection method, training method, electronic device, and computer-readable medium

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101679148B1 (en) * 2015-06-15 2016-12-06 동의대학교 산학협력단 Detection System of Smoke and Flame using Depth Camera
CN108009479A (en) * 2017-11-14 2018-05-08 中电数通科技有限公司 Distributed machines learning system and its method
CN108447219A (en) * 2018-05-21 2018-08-24 中国计量大学 System and method for detecting fire hazard based on video image
CN108830305A (en) * 2018-05-30 2018-11-16 西南交通大学 A kind of real-time fire monitoring method of combination DCLRN network and optical flow method
WO2020086217A1 (en) * 2018-10-26 2020-04-30 Siemens Aktiengesellschaft Learning keypoints and matching rgb images to cad models
CN109376747A (en) * 2018-12-11 2019-02-22 北京工业大学 A kind of video flame detecting method based on double-current convolutional neural networks
CN109829920A (en) * 2019-02-25 2019-05-31 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN110135269A (en) * 2019-04-18 2019-08-16 杭州电子科技大学 Fire image detection method based on a hybrid color model and a neural network
KR102067994B1 (en) * 2019-05-20 2020-01-20 한밭대학교 산학협력단 System for detecting flame of embedded environment using deep learning
CN111311475A (en) * 2020-02-21 2020-06-19 广州腾讯科技有限公司 Detection model training method and device, storage medium and computer equipment
CN111444801A (en) * 2020-03-18 2020-07-24 成都理工大学 Real-time detection method for infrared target of unmanned aerial vehicle
CN111814638A (en) * 2020-06-30 2020-10-23 成都睿沿科技有限公司 Security scene flame detection method based on deep learning
CN112132090A (en) * 2020-09-28 2020-12-25 天地伟业技术有限公司 Smoke and fire automatic detection and early warning method based on YOLOV3
CN112396024A (en) * 2020-12-01 2021-02-23 杭州叙简科技股份有限公司 Forest fire alarm method based on convolutional neural network
CN112861635A (en) * 2021-01-11 2021-05-28 西北工业大学 Fire and smoke real-time detection method based on deep learning
CN112906463A (en) * 2021-01-15 2021-06-04 上海东普信息科技有限公司 Image-based fire detection method, device, equipment and storage medium
CN112836608A (en) * 2021-01-25 2021-05-25 南京恩博科技有限公司 Forest fire source estimation model training method, estimation method and system
CN112686276A (en) * 2021-01-26 2021-04-20 重庆大学 Flame detection method based on improved RetinaNet network
CN112884090A (en) * 2021-04-14 2021-06-01 安徽理工大学 Fire detection and identification method based on improved YOLOv3

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Image colorization method based on an improved region-based fully convolutional neural network and joint bilateral filtering; He Shan; Fang Li; Zhang Zheng; Laser &amp; Optoelectronics Progress (Issue 12); full text *

Also Published As

Publication number Publication date
CN113688748A (en) 2021-11-23

Similar Documents

Publication Publication Date Title
CN109559310B (en) Power transmission and transformation inspection image quality evaluation method and system based on significance detection
CN108537215B (en) Flame detection method based on image target detection
CN111368690B (en) Deep learning-based video image ship detection method and system under influence of sea waves
CN110378232B (en) Rapid examinee-position detection method for examination rooms based on an improved dual-network SSD
CN111259892A (en) Method, device, equipment and medium for inspecting state of indicator light
CN110688925A (en) Cascade target identification method and system based on deep learning
CN111126293A (en) Flame and smoke abnormal condition detection method and system
CN109300110A (en) Forest fire image detection method based on an improved color model
CN107067412A (en) Video flame and smoke detection method based on multi-information fusion
CN112734739B (en) Visual building crack identification method based on attention mechanism and ResNet fusion
CN113538375A (en) PCB defect detection method based on YOLOv5
CN112668426B (en) Fire disaster image color cast quantization method based on three color modes
CN112529901B (en) Crack identification method in complex environment
Zhang et al. Application research of YOLO v2 combined with color identification
CN107862333A (en) Method for determining an object's combustion zone in a complex environment
CN116152658A (en) Forest fire smoke detection method based on domain countermeasure feature fusion network
CN116486231A (en) Concrete crack detection method based on improved YOLOv5
CN116029979A (en) Cloth flaw visual detection method based on improved Yolov4
CN114332739A (en) Smoke detection method based on moving target detection and deep learning technology
CN113688748B (en) Fire detection model and method
CN114596244A (en) Infrared image identification method and system based on visual processing and multi-feature fusion
CN110334703B (en) Ship detection and identification method in day and night image
CN115661094A (en) Industrial flaw detection method based on improved YOLOX model
CN110516694A (en) Automatic drainage pipeline defect detection method based on cost-sensitive learning
CN110084777A (en) Micro-part positioning and tracking method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant