CN111091072A - YOLOv3-based flame and dense smoke detection method - Google Patents

YOLOv3-based flame and dense smoke detection method

Info

Publication number
CN111091072A
CN111091072A (application CN201911197998.8A)
Authority
CN
China
Prior art keywords
flame
smoke
dense smoke
data set
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911197998.8A
Other languages
Chinese (zh)
Inventor
钱惠敏
施非
周军
黄浩乾
卢新彪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU
Priority to CN201911197998.8A
Publication of CN111091072A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/951 Indexing; Web crawling techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B17/00 Fire alarms; Alarms responsive to explosion
    • G08B17/12 Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B17/125 Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions by using a video camera to detect fire or smoke

Abstract

The invention discloses a flame and dense smoke detection method based on YOLOv3, which comprises the following steps: establishing a flame data set and a dense smoke data set; performing data enhancement by random horizontal flipping, cropping and rotation; training a flame model and a dense smoke model separately with the YOLOv3 algorithm and fusing them into a final model; adding a flame and smoke detection module to the existing video monitoring system; collecting video of the monitored scene in real time with the system's cameras and extracting image frames from the video with the ffmpeg framework; detecting each image frame with the fused detection model to determine whether flame or dense smoke is present and to mark its position; and, when a fire is detected, raising an alarm automatically, triggering the linked automatic fire-fighting equipment, and providing real-time monitoring through the camera. The method enables effective monitoring and early warning of fires in important places, does not depend on hand-crafted features, and offers low detection cost, high detection speed and high accuracy.

Description

YOLOv3-based flame and dense smoke detection method
Technical Field
The invention belongs to the interdisciplinary field of computer vision and machine learning, and in particular relates to a flame and dense smoke detection method based on YOLOv3.
Background
Fire endangers lives and property, and in important places such as transformer substations, hospitals, libraries and forests it can cause irreparable losses. In such places, timely identification of flames and early warning are of great significance to the personal safety of on-site personnel and to the protection of public property.
A fire is usually accompanied by smoke, high temperature and high brightness, so smoke concentration, flame brightness and temperature are important parameters for fire detection, and smoke sensors, temperature sensors and the like are commonly used for this purpose. However, such sensors are susceptible to environmental factors, require specific application environments, and cannot be used for fire detection in places such as forests.
With the development of computer vision, image-based flame and smoke detection has become a research hotspot. Compared with sensor-based detection, image-based detection is less affected by the environment, responds to a fire quickly, and clearly presents the real-time situation at the fire scene, which is convenient for rescue personnel.
Early image-based flame detection techniques typically relied on hand-designed characteristics of flames such as color, brightness, texture and shape. Methods based on such fixed flame characteristics have poor robustness to interference and poor generalization: the scene of occurrence, the form of combustion and the form of the accompanying smoke are diverse and easily affected by the environment, so the false alarm rate of these algorithms is high across different scenes.
With the continuing development of deep learning, automatically mining and analysing features at deeper levels has become a new approach to video-based fire detection. Applying artificial intelligence and deep learning to fire monitoring avoids the complex and time-consuming manual feature extraction process: rich features are learned automatically from flame and smoke data, the accuracy of fire detection is improved, and the fire can be localized.
Object detection algorithms fall mainly into two-stage algorithms based on region proposals and one-stage algorithms based on position regression. Two-stage algorithms achieve higher detection accuracy, while one-stage algorithms are faster. YOLOv3 is a one-stage detector whose accuracy is improved through refinements of the network structure and other techniques. Its improvements over earlier YOLO versions include: a deeper network built by borrowing the residual network structure; multi-scale detection, which improves the average detection accuracy and the detection of small objects; and prediction with a Logistic function instead of Softmax, which supports multi-label classification of a single target.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problems of sensor-based fire detection in the prior art, namely susceptibility to environmental factors, poor sensitivity and insufficient reliability, the invention provides a flame and dense smoke detection method based on YOLOv3.
The technical scheme is as follows: in order to realize the purpose of the invention, the technical scheme adopted by the invention is as follows:
a flame and dense smoke detection method based on YOLOv3 comprises the following steps:
step 1, collecting images containing flame and dense smoke, and establishing a flame initial data set and a dense smoke initial data set;
step 2, respectively carrying out data enhancement operations on the images in the flame initial data set and the dense smoke initial data set to expand the data sets, labeling the expanded data sets, selecting p% of the expanded flame initial data set and of the expanded dense smoke initial data set at random as the flame training data set and the dense smoke training data set respectively, and using the remainder of each data set as the flame test data set and the dense smoke test data set respectively; preferably, the data enhancement operations include random horizontal flipping, cropping, rotation, and uniform scaling to a fixed size; preferably, p% is set to 70%;
step 3, respectively training a YOLOv3 convolutional neural network by using a flame training data set and a dense smoke training data set to obtain a flame detection model and a dense smoke detection model, and obtaining a flame and smoke fusion detection model through model fusion;
step 4, adding a flame and smoke detection module in the existing video monitoring system;
step 5, collecting video of the monitored scene in real time with a camera of the video monitoring system, and extracting image frames from the video with the ffmpeg framework;
step 6, detecting each frame of image by adopting a fusion detection model, determining whether flame and dense smoke exist in the image and marking the positions of the flame and the dense smoke;
step 7, when a fire (flame or dense smoke) is detected, transmitting the detection result image back to the monitoring terminal and giving an alarm, triggering the linked automatic fire-fighting equipment, and monitoring the fire in real time with the camera.
Further, the step 1 specifically includes:
step 1-1, obtaining images and videos containing flames and dense smoke through self-shooting and online crawlers;
step 1-2, extracting flame/dense smoke image frames from the flame videos with the ffmpeg framework, labeling the flame and dense smoke regions in all the images, and generating a flame data set and a dense smoke data set in VOC format respectively.
Further, the step 3 specifically includes:
step 3-1, training a YOLOv3 model with the flame training set and with the dense smoke training set respectively under the TensorFlow platform; the YOLOv3 convolutional neural network takes a two-dimensional image from the data set as input, and outputs the position and the class prediction confidence of the corresponding target in the input two-dimensional image;
step 3-2, according to the selected loss function, iteratively updating the parameters of the deep convolutional neural network in the YOLOv3 model by gradient-descent back-propagation, taking the network parameters obtained after the set maximum number of iterations as the optimal network parameters, completing training, and obtaining a preliminary flame detection model and a preliminary smoke detection model;
step 3-3, testing the preliminary flame detection model and the preliminary smoke detection model with the respective test sets, adjusting the network structure according to the test results, adding images that were missed or wrongly detected (i.e. hard examples) to the training set, and retraining until the test results meet expectations, so as to obtain the final flame detection model and the final smoke detection model;
step 3-4, taking the union of the results of the flame detection model and the dense smoke detection model, thereby reducing the missed-detection rate and obtaining the fused flame and smoke detection model.
Further, the step 5 specifically includes:
step 5-1, the camera is connected to a computer wirelessly or by a hardwired connection, and the video captured in real time is input to the computer;
step 5-2, extracting one image every n frames with the ffmpeg framework, and preprocessing the extracted images when the luminous flux, brightness or lighting conditions of the site do not meet expectations; the preprocessing operations comprise denoising, contrast enhancement, and brightness and saturation adjustment; preferably, n is in the range [25, 30].
Further, the YOLOv3 convolutional neural network uses a Darknet-53 base convolutional network.
Further, the YOLOv3 convolutional neural network detects objects on feature maps at 3 different scales. The low-level feature map is the output of the 26th convolutional layer; it has higher resolution and rich geometric detail and more easily detects small flames and dense smoke (flame/dense smoke regions whose length and width are less than 0.1 of the original image size). The high-level feature map is the output of the 52nd convolutional layer; it has clear semantics and a larger receptive field and more easily detects large areas of flame and dense smoke (regions whose length and width exceed 0.5 of the original image size). The middle-level feature map is the output of the 43rd convolutional layer; it has a medium-scale receptive field and is suited to detecting medium-sized flame and dense smoke (regions whose length and width are between 0.1 and 0.5 of the original image size).
Further, the YOLOv3 convolutional neural network uses prior boxes (anchors) of 9 scales: (10 × 13), (16 × 30), (33 × 23), (30 × 61), (62 × 45), (59 × 119), (116 × 90), (156 × 198), (373 × 326).
Furthermore, when predicting object classes the YOLOv3 convolutional neural network does not use the Softmax function but instead uses Logistic (sigmoid) outputs, so that multiple labels can be predicted for one object; multi-label objects are supported, and both classes can be detected simultaneously when flame and dense smoke are mixed together.
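As an illustration of this multi-label behaviour (a minimal sketch; the logit values are invented for the example), independent Logistic (sigmoid) outputs allow the flame class and the dense smoke class to exceed the detection threshold at the same time, which a Softmax over the two classes would not allow:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical raw class logits for one predicted box: [flame, dense smoke]
logits = np.array([2.3, 1.7])
threshold = 0.5

print(sigmoid(logits) > threshold)   # [ True  True ]  both classes can pass the threshold at once
print(softmax(logits) > threshold)   # [ True False ]  Softmax forces the two classes to compete
```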
Further, the loss function selected in the training process includes position loss, category prediction loss and confidence loss, and the expression is as follows:
$$L = \sum_{i=1}^{M} \lambda_{obj}\left(L_{pos} + L_{cl} + L_{conf}\right)$$
where M denotes the total number of samples in a picture; λ_obj marks whether the region contains a target (λ_obj is 1 when a target is in the image and 0 otherwise); L_pos denotes the position loss, L_cl the class loss, and L_conf the confidence loss;
the position loss L_pos is calculated as follows:
$$L_{pos} = (x_T - x_P)^2 + (y_T - y_P)^2 + (w_T - w_P)^2 + (h_T - h_P)^2$$
wherein x and y are respectively the horizontal and vertical coordinates of the center of the target area, w and h are respectively the width and height of the target area, T represents a true value, and P represents a predicted value;
the class loss L_cl is calculated as follows:
$$L_{cl} = \sum_{r=1}^{k}\left(I_r - P_{cl}(r)\right)^2$$
where cl denotes the class, P_cl(r) the predicted confidence for class r, k the number of classes, and I_r indicates whether r is the true target class: I_r is 1 when r is the true target class and 0 otherwise;
the confidence loss L_conf is calculated as follows:
$$L_{conf} = (T_{conf} - P_{conf})^2$$
where conf represents the confidence.
Beneficial effects: compared with the prior art, the technical scheme of the invention has the following beneficial technical effects:
the invention innovatively provides a flame and smoke detection method based on YOLOv 3. By using the trained deep convolutional neural network model, whether fire occurs in an important place or not is automatically detected, so that the characteristics can be automatically extracted, the complex work of manually extracting the characteristics is avoided, and the method has the advantages of low detection cost, high detection speed, high accuracy and the like. The method effectively applies the computer technology and the image processing technology to fire detection, can be widely applied to important places such as power places, hospitals, libraries, forests and the like, and provides an effective way for fire detection early warning and fire rescue assistance.
Drawings
FIG. 1 is a schematic flow chart of the flame and smoke detection method based on YOLOv3 of the invention;
FIG. 2 is a YOLOv3 model training process;
fig. 3 is a sample flame and smoke detection.
Detailed Description
The invention is described in detail below with reference to the drawings and specific examples.
The invention provides a YOLOv3-based flame and smoke detection method that monitors fires in important places through an on-site video monitoring system. When flame or dense smoke is detected, an alarm is raised automatically and the position of the fire is displayed, providing real-time data to fire-fighting personnel. As shown in fig. 1, the method comprises the following steps:
step 1, collecting images containing flame and dense smoke, and establishing a flame initial data set and a dense smoke initial data set; the method specifically comprises the following steps:
step 1-1, obtaining a certain amount of images and videos containing flame and dense smoke through self-shooting and web crawlers;
step 1-2, extracting flame/dense smoke image frames from the flame videos with the ffmpeg framework, labeling the flame and dense smoke regions in all the images, and generating a flame data set and a dense smoke data set in VOC format respectively.
Step 2, respectively carrying out data enhancement operations on the images in the flame initial data set and the dense smoke initial data set to expand the data sets, labeling the expanded data sets, selecting 70% of the expanded flame initial data set and of the expanded dense smoke initial data set at random as the flame training data set and the dense smoke training data set respectively, and using the remainder of each data set as the flame test data set and the dense smoke test data set respectively; the data enhancement operations comprise random horizontal flipping, cropping, rotation and uniform scaling to a fixed size;
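A minimal sketch of the data enhancement and the 70%/30% split described in this step is given below; it uses OpenCV for illustration, and the directory layout, function names and the 416 × 416 target size are assumptions rather than requirements of the invention (in practice the annotated bounding boxes must be transformed together with the images):

```python
import glob
import random
import cv2

def augment(img, target_size=416):
    """Random horizontal flip, crop and rotation, then uniform scaling to a fixed size."""
    if random.random() < 0.5:                                  # random horizontal flip
        img = cv2.flip(img, 1)
    h, w = img.shape[:2]
    top, left = random.randint(0, h // 10), random.randint(0, w // 10)
    bottom, right = h - random.randint(0, h // 10), w - random.randint(0, w // 10)
    img = img[top:bottom, left:right]                          # random crop of up to ~10% per side
    h, w = img.shape[:2]
    angle = random.uniform(-15, 15)                            # small random rotation
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    img = cv2.warpAffine(img, M, (w, h))
    return cv2.resize(img, (target_size, target_size))         # uniform scaling to a fixed size

def split_dataset(image_paths, train_ratio=0.7):
    """Randomly pick 70% of the images for training; the rest form the test set."""
    paths = list(image_paths)
    random.shuffle(paths)
    cut = int(len(paths) * train_ratio)
    return paths[:cut], paths[cut:]

flame_paths = glob.glob("flame_dataset/*.jpg")                 # assumed directory layout
train_set, test_set = split_dataset(flame_paths)
```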
step 3, respectively training a YOLOv3 convolutional neural network by using a flame training data set and a dense smoke training data set to obtain a flame detection model and a dense smoke detection model, and obtaining a flame and smoke fusion detection model through model fusion; in this embodiment, the YOLOv3 convolutional neural network adopts a Darknet-53 basic convolutional network, and a YOLOv3 model training process is shown in fig. 2, and specifically includes:
step 3-1, training a YOLOv3 model with the flame training set and with the dense smoke training set respectively under the TensorFlow platform; the YOLOv3 convolutional neural network takes a two-dimensional image from the data set as input, and outputs the position and the class prediction confidence of the corresponding target in the input two-dimensional image;
step 3-2, according to the selected loss function, iteratively updating the parameters of the deep convolutional neural network in the YOLOv3 model by gradient-descent back-propagation, taking the network parameters obtained after the set maximum number of iterations as the optimal network parameters, completing training, and obtaining a preliminary flame detection model and a preliminary smoke detection model;
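A minimal sketch of one such training iteration under the TensorFlow platform is given below; build_yolov3 and yolo_loss are hypothetical placeholders standing in for the Darknet-53-based network and for the loss function described next in this embodiment, not names defined by the invention:

```python
import tensorflow as tf

model = build_yolov3(num_classes=1)       # hypothetical constructor: one single-class model (flame or smoke)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)

@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images, training=True)    # target positions + class prediction confidences
        loss = yolo_loss(labels, predictions)          # hypothetical implementation of the loss below
    grads = tape.gradient(loss, model.trainable_variables)             # back-propagation
    optimizer.apply_gradients(zip(grads, model.trainable_variables))   # gradient-descent update
    return loss
```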
the loss function selected in the training process considers the position loss, the category prediction loss and the confidence coefficient loss, and the expression is as follows:
$$L = \sum_{i=1}^{M} \lambda_{obj}\left(L_{pos} + L_{cl} + L_{conf}\right)$$
where M denotes the total number of samples in a picture; λ_obj marks whether the region contains a target (λ_obj is 1 when a target is in the image and 0 otherwise); L_pos denotes the position loss, L_cl the class loss, and L_conf the confidence loss;
the position loss L_pos is calculated as follows:
$$L_{pos} = (x_T - x_P)^2 + (y_T - y_P)^2 + (w_T - w_P)^2 + (h_T - h_P)^2$$
wherein x and y are respectively the horizontal and vertical coordinates of the center of the target area, w and h are respectively the width and height of the target area, T represents a true value, and P represents a predicted value;
the class loss L_cl is calculated as follows:
$$L_{cl} = \sum_{r=1}^{k}\left(I_r - P_{cl}(r)\right)^2$$
where cl denotes the class, P_cl(r) the predicted confidence for class r, k the number of classes, and I_r indicates whether r is the true target class: I_r is 1 when r is the true target class and 0 otherwise;
the confidence loss L_conf is calculated as follows:
$$L_{conf} = (T_{conf} - P_{conf})^2$$
wherein conf represents a confidence level;
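For illustration only, the loss terms above can be assembled per picture roughly as follows; this sketch assumes the squared-error forms written above, and the dictionary layout and function names are invented for the example:

```python
import numpy as np

def sample_loss(truth, pred, lambda_obj):
    """Loss of one sample (candidate box); truth/pred hold 'x', 'y', 'w', 'h',
    'cls' (length-k class vector) and 'conf'; lambda_obj is 1 if the region
    contains a target and 0 otherwise."""
    l_pos = ((truth['x'] - pred['x']) ** 2 + (truth['y'] - pred['y']) ** 2 +
             (truth['w'] - pred['w']) ** 2 + (truth['h'] - pred['h']) ** 2)
    l_cl = np.sum((truth['cls'] - pred['cls']) ** 2)    # summed over the k classes
    l_conf = (truth['conf'] - pred['conf']) ** 2
    return lambda_obj * (l_pos + l_cl + l_conf)

def picture_loss(truths, preds, lambda_objs):
    """Total loss: sum over the M samples of one picture."""
    return sum(sample_loss(t, p, lam) for t, p, lam in zip(truths, preds, lambda_objs))
```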
In this embodiment, the network parameters are set as follows: the learning rate during training is 0.001, decayed by a factor of 10 at 20,000 iterations and by a further factor of 10 at 40,000 iterations; the momentum parameter is 0.9; the weight-decay regularization term is 0.0005; the batch size is 64 with a sub-batch size of 32; the threshold during training is 0.5; and the number of iterations is 50,000;
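The step decay of the learning rate described above can be written as a small helper (a sketch for reference only; the function name is not part of the invention):

```python
def learning_rate(iteration, base_lr=0.001):
    """0.001 for the first 20000 iterations, then divided by 10, and by 10 again at 40000."""
    if iteration >= 40000:
        return base_lr / 100.0
    if iteration >= 20000:
        return base_lr / 10.0
    return base_lr
```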
step 3-3, testing the preliminary flame detection model and the preliminary smoke detection model with the respective test sets, adjusting the network structure according to the test results, adding images that were missed or wrongly detected (i.e. hard examples) to the training set, and retraining until the test results meet expectations, so as to obtain the final flame detection model and the final smoke detection model;
step 3-4, taking the union of the results of the flame detection model and the dense smoke detection model, thereby reducing the missed-detection rate and obtaining the fused flame and smoke detection model.
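A minimal sketch of the fusion by union in step 3-4 follows; the detect method returning (x, y, w, h, confidence) tuples is an assumption made for the example rather than an interface defined by the invention:

```python
def fused_detect(image, flame_model, smoke_model, threshold=0.5):
    """Run both single-class detectors and take the union of their detections,
    so that a fire missed by one model can still be reported by the other."""
    detections = []
    for label, model in (("flame", flame_model), ("smoke", smoke_model)):
        for (x, y, w, h, conf) in model.detect(image):
            if conf >= threshold:
                detections.append({"class": label, "box": (x, y, w, h), "conf": conf})
    return detections
```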
Step 4, adding a flame and smoke detection module in the existing video monitoring system;
step 5, collecting video of the monitored scene in real time with a camera of the video monitoring system, and extracting image frames from the video with the ffmpeg framework; this step specifically comprises:
step 5-1, the camera is connected to a computer wirelessly or by a hardwired connection, and the video captured in real time is input to the computer;
step 5-2, extracting one image every 25-30 frames with the ffmpeg framework, and preprocessing the extracted images when the luminous flux, brightness or lighting conditions of the site do not meet expectations; the preprocessing operations comprise denoising, contrast enhancement, and brightness and saturation adjustment.
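A sketch of step 5-2 using the ffmpeg command-line tool and OpenCV is given below; the filter expression, file-name pattern and preprocessing parameters are illustrative assumptions rather than values fixed by the invention:

```python
import subprocess
import cv2

def extract_frames(video_path, out_pattern="frame_%05d.jpg", every_n=25):
    """Keep one frame out of every `every_n` frames of the input video."""
    subprocess.run([
        "ffmpeg", "-i", video_path,
        "-vf", f"select='not(mod(n,{every_n}))'",
        "-vsync", "vfr", out_pattern,
    ], check=True)

def preprocess(img):
    """Denoising, contrast/brightness enhancement and saturation adjustment."""
    img = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)   # denoise
    img = cv2.convertScaleAbs(img, alpha=1.2, beta=10)                # contrast and brightness
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hsv[:, :, 1] = cv2.add(hsv[:, :, 1], 20)                          # saturation
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```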
Step 6, each image frame is detected with the fused detection model to determine whether flame or dense smoke is present and to mark its position, as shown in fig. 3.
Step 7, when a fire (flame or dense smoke) is detected, the detection result image is transmitted back to the monitoring terminal and an alarm is given, the linked automatic fire-fighting equipment is triggered, and the camera monitors the fire in real time.
The invention provides a novel flame and smoke detection method based on YOLOv3. Using the trained deep convolutional neural network model, it automatically detects whether a fire has occurred; features are extracted automatically, the laborious work of manual feature engineering is avoided, and detection is low-cost, fast and accurate. The method effectively applies computer and image processing technology to fire detection, can be widely applied in important places such as electric power facilities, hospitals and libraries, and provides an effective means of fire early warning and rescue assistance.
It will be readily apparent to those skilled in the art that various modifications of these embodiments and the generic principles described herein may be applied to other embodiments, such as road fire monitoring or house fire monitoring. The present invention is therefore not limited to the embodiments described herein; improvements and modifications made by those skilled in the art on the basis of this disclosure fall within the scope of the present invention.

Claims (9)

1. A flame and dense smoke detection method based on YOLOv3 is characterized in that: the method comprises the following steps:
step 1, collecting images containing flame and dense smoke, and respectively establishing a flame initial data set and a dense smoke initial data set;
step 2, respectively carrying out data enhancement operation on images in the flame initial data set and the dense smoke initial data set to expand the data sets, labeling the expanded data sets, randomly selecting p% of the expanded flame initial data set and of the expanded dense smoke initial data set as a flame training data set and a dense smoke training data set respectively, and using the remaining parts of the corresponding data sets as a flame test data set and a dense smoke test data set respectively;
step 3, respectively training a YOLOv3 convolutional neural network by using a flame training data set and a dense smoke training data set to obtain a flame detection model and a dense smoke detection model, and obtaining a flame and smoke fusion detection model through model fusion;
step 4, adding a flame and smoke detection module in the existing video monitoring system;
step 5, collecting video of the monitored scene in real time with a camera of the video monitoring system, and extracting image frames from the video with the ffmpeg framework;
step 6, detecting each frame of image by adopting a fusion detection model, determining whether flame and dense smoke exist in the image and marking the positions of the flame and the dense smoke;
step 7, when flame or dense smoke is detected, transmitting the detection result image back to the monitoring terminal and giving an alarm, triggering the linked automatic fire-fighting equipment, and monitoring the fire in real time with the camera.
2. The YOLOv3-based flame and smoke detection method according to claim 1, wherein: the step 1 specifically comprises:
step 1-1, obtaining images and videos containing flames and dense smoke through self-shooting and online crawlers;
step 1-2, extracting flame and dense smoke image frames from a flame video with the ffmpeg framework, marking flame and dense smoke regions on all the images, and generating a flame data set and a dense smoke data set in VOC format respectively.
3. The YOLOv3-based flame and smoke detection method according to claim 1, wherein: the step 3 specifically includes:
step 3-1, training a YOLOv3 model with the flame training set and with the dense smoke training set respectively under the TensorFlow platform; the YOLOv3 convolutional neural network takes a two-dimensional image from the data set as input, and outputs the position and the class prediction confidence of the corresponding target in the input two-dimensional image;
step 3-2, according to the selected loss function, iteratively updating the parameters of the deep convolutional neural network in the YOLOv3 model by gradient-descent back-propagation, taking the network parameters obtained after the set maximum number of iterations as the optimal network parameters, completing training, and obtaining a preliminary flame detection model and a preliminary smoke detection model;
step 3-3, testing the preliminary flame detection model and the preliminary smoke detection model with the respective test sets, adjusting the network structure according to the test results, adding images that were missed or wrongly detected to the training set, and retraining until the test results meet expectations, so as to obtain the final flame detection model and the final smoke detection model;
step 3-4, taking the union of the results of the flame detection model and the dense smoke detection model to obtain the fused flame and smoke detection model.
4. The YOLOv3-based flame and smoke detection method according to claim 1, wherein: the step 5 specifically includes:
step 5-1, the camera is connected to a computer wirelessly or by a hardwired connection, and the video captured in real time is input to the computer;
step 5-2, extracting one image every n frames with the ffmpeg framework, and preprocessing the extracted images when the luminous flux, brightness or lighting conditions of the site do not meet expectations; the preprocessing operations comprise denoising, contrast enhancement, and brightness and saturation adjustment.
5. The YOLOv3-based flame and smoke detection method according to any one of claims 1-4, wherein: the YOLOv3 convolutional neural network uses a Darknet-53 base convolutional network.
6. The YOLOv3-based flame and smoke detection method according to claim 5, wherein: the YOLOv3 convolutional neural network detects objects on feature maps at 3 different scales; the low-level feature map is the output of the 26th convolutional layer, and the detected flame/dense smoke regions have length and width less than 0.1 of the original image size; the high-level feature map is the output of the 52nd convolutional layer, and the detected flame/dense smoke regions have length and width exceeding 0.5 of the original image size; the middle-level feature map is the output of the 43rd convolutional layer, and the detected flame/dense smoke regions have length and width not less than 0.1 and not more than 0.5 of the original image size.
7. The YOLOv3-based flame and smoke detection method according to claim 6, wherein: the YOLOv3 convolutional neural network uses prior boxes of 9 scales, which are respectively: (10 × 13), (16 × 30), (33 × 23), (30 × 61), (62 × 45), (59 × 119), (116 × 90), (156 × 198), (373 × 326).
8. The YOLOv3-based flame and smoke detection method according to claim 7, wherein: the YOLOv3 convolutional neural network predicts the object class using Logistic outputs, can predict multiple labels for one object, and can detect both classes simultaneously when flame and dense smoke are mixed together.
9. The YOLOv3-based flame and smoke detection method according to claim 3, wherein: the loss function selected in the training process comprises position loss, class prediction loss and confidence loss, and the expression is as follows:
$$L = \sum_{i=1}^{M} \lambda_{obj}\left(L_{pos} + L_{cl} + L_{conf}\right)$$
where M denotes the total number of samples in a picture; λ_obj marks whether the region contains a target (λ_obj is 1 when a target is in the image and 0 otherwise); L_pos denotes the position loss, L_cl the class loss, and L_conf the confidence loss;
the position loss L_pos is calculated as follows:
$$L_{pos} = (x_T - x_P)^2 + (y_T - y_P)^2 + (w_T - w_P)^2 + (h_T - h_P)^2$$
wherein x and y are respectively the horizontal and vertical coordinates of the center of the target area, w and h are respectively the width and height of the target area, T represents a true value, and P represents a predicted value;
the class loss L_cl is calculated as follows:
$$L_{cl} = \sum_{r=1}^{k}\left(I_r - P_{cl}(r)\right)^2$$
where cl denotes the class, P_cl(r) the predicted confidence for class r, k the number of classes, and I_r indicates whether r is the true target class: I_r is 1 when r is the true target class and 0 otherwise;
the confidence loss L_conf is calculated as follows:
$$L_{conf} = (T_{conf} - P_{conf})^2$$
where conf represents the confidence.
CN201911197998.8A 2019-11-29 2019-11-29 YOLOv3-based flame and dense smoke detection method Pending CN111091072A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911197998.8A CN111091072A (en) 2019-11-29 2019-11-29 YOLOv3-based flame and dense smoke detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911197998.8A CN111091072A (en) 2019-11-29 2019-11-29 YOLOv3-based flame and dense smoke detection method

Publications (1)

Publication Number Publication Date
CN111091072A true CN111091072A (en) 2020-05-01

Family

ID=70393189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911197998.8A Pending CN111091072A (en) 2019-11-29 2019-11-29 YOLOv3-based flame and dense smoke detection method

Country Status (1)

Country Link
CN (1) CN111091072A (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111523528A (en) * 2020-07-03 2020-08-11 平安国际智慧城市科技股份有限公司 Strategy sending method and device based on scale recognition model and computer equipment
CN111599444A (en) * 2020-05-18 2020-08-28 深圳市悦动天下科技有限公司 Intelligent tongue diagnosis detection method and device, intelligent terminal and storage medium
CN111680632A (en) * 2020-06-10 2020-09-18 深延科技(北京)有限公司 Smoke and fire detection method and system based on deep learning convolutional neural network
CN111723656A (en) * 2020-05-12 2020-09-29 中国电子系统技术有限公司 Smoke detection method and device based on YOLO v3 and self-optimization
CN111860323A (en) * 2020-07-20 2020-10-30 北京华正明天信息技术股份有限公司 Method for identifying initial fire in monitoring picture based on yolov3 algorithm
CN111964723A (en) * 2020-08-18 2020-11-20 合肥金果缘视觉科技有限公司 Peanut short bud detecting system based on artificial intelligence
CN111985365A (en) * 2020-08-06 2020-11-24 合肥学院 Straw burning monitoring method and system based on target detection technology
CN111986436A (en) * 2020-09-02 2020-11-24 成都指码科技有限公司 Comprehensive flame detection method based on ultraviolet and deep neural networks
CN112036286A (en) * 2020-08-25 2020-12-04 北京华正明天信息技术股份有限公司 Method for achieving temperature sensing and intelligently analyzing and identifying flame based on yoloV3 algorithm
CN112107812A (en) * 2020-05-21 2020-12-22 西南科技大学 Forest fire fighting method and system based on deep convolutional neural network
CN112132090A (en) * 2020-09-28 2020-12-25 天地伟业技术有限公司 Smoke and fire automatic detection and early warning method based on YOLOV3
CN112149583A (en) * 2020-09-27 2020-12-29 山东产研鲲云人工智能研究院有限公司 Smoke detection method, terminal device and storage medium
CN112241693A (en) * 2020-09-25 2021-01-19 上海荷福人工智能科技(集团)有限公司 Illegal welding fire image identification method based on YOLOv3
CN112488213A (en) * 2020-12-03 2021-03-12 杭州电子科技大学 Fire picture classification method based on multi-scale feature learning network
CN112735083A (en) * 2021-01-19 2021-04-30 齐鲁工业大学 Embedded gateway for flame detection by using YOLOv5 and OpenVINO and deployment method thereof
CN112906463A (en) * 2021-01-15 2021-06-04 上海东普信息科技有限公司 Image-based fire detection method, device, equipment and storage medium
CN113033553A (en) * 2021-03-22 2021-06-25 深圳市安软科技股份有限公司 Fire detection method and device based on multi-mode fusion, related equipment and storage medium
CN113706815A (en) * 2021-08-31 2021-11-26 沈阳二一三电子科技有限公司 Vehicle fire identification method combining YOLOv3 and optical flow method
CN113743190A (en) * 2021-07-13 2021-12-03 淮阴工学院 Flame detection method and system based on BiHR-Net and YOLOv3-head
CN113903009A (en) * 2021-12-10 2022-01-07 华东交通大学 Railway foreign matter detection method and system based on improved YOLOv3 network
CN114022850A (en) * 2022-01-07 2022-02-08 深圳市安软慧视科技有限公司 Transformer substation fire monitoring method and system and related equipment
CN114998783A (en) * 2022-05-19 2022-09-02 安徽合为智能科技有限公司 Front-end equipment for video analysis of smoke, fire and personnel behaviors
CN115223324A (en) * 2022-06-16 2022-10-21 中电云数智科技有限公司 Smog real-time monitoring method and system
CN115331384A (en) * 2022-08-22 2022-11-11 重庆科技学院 Operation platform fire accident early warning system based on edge calculation
CN116503715A (en) * 2023-06-12 2023-07-28 南京信息工程大学 Forest fire detection method based on cascade network
CN116978207A (en) * 2023-09-20 2023-10-31 张家港江苏科技大学产业技术研究院 Multifunctional laboratory safety monitoring and early warning system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378265A (en) * 2019-07-08 2019-10-25 创新奇智(成都)科技有限公司 A kind of incipient fire detection method, computer-readable medium and system

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111723656B (en) * 2020-05-12 2023-08-22 中国电子系统技术有限公司 Smog detection method and device based on YOLO v3 and self-optimization
CN111723656A (en) * 2020-05-12 2020-09-29 中国电子系统技术有限公司 Smoke detection method and device based on YOLO v3 and self-optimization
CN111599444A (en) * 2020-05-18 2020-08-28 深圳市悦动天下科技有限公司 Intelligent tongue diagnosis detection method and device, intelligent terminal and storage medium
CN112107812A (en) * 2020-05-21 2020-12-22 西南科技大学 Forest fire fighting method and system based on deep convolutional neural network
CN111680632A (en) * 2020-06-10 2020-09-18 深延科技(北京)有限公司 Smoke and fire detection method and system based on deep learning convolutional neural network
CN111523528B (en) * 2020-07-03 2020-10-20 平安国际智慧城市科技股份有限公司 Strategy sending method and device based on scale recognition model and computer equipment
CN111523528A (en) * 2020-07-03 2020-08-11 平安国际智慧城市科技股份有限公司 Strategy sending method and device based on scale recognition model and computer equipment
CN111860323A (en) * 2020-07-20 2020-10-30 北京华正明天信息技术股份有限公司 Method for identifying initial fire in monitoring picture based on yolov3 algorithm
CN111985365A (en) * 2020-08-06 2020-11-24 合肥学院 Straw burning monitoring method and system based on target detection technology
CN111964723A (en) * 2020-08-18 2020-11-20 合肥金果缘视觉科技有限公司 Peanut short bud detecting system based on artificial intelligence
CN112036286A (en) * 2020-08-25 2020-12-04 北京华正明天信息技术股份有限公司 Method for achieving temperature sensing and intelligently analyzing and identifying flame based on yoloV3 algorithm
CN111986436A (en) * 2020-09-02 2020-11-24 成都指码科技有限公司 Comprehensive flame detection method based on ultraviolet and deep neural networks
CN112241693A (en) * 2020-09-25 2021-01-19 上海荷福人工智能科技(集团)有限公司 Illegal welding fire image identification method based on YOLOv3
CN112149583A (en) * 2020-09-27 2020-12-29 山东产研鲲云人工智能研究院有限公司 Smoke detection method, terminal device and storage medium
CN112132090A (en) * 2020-09-28 2020-12-25 天地伟业技术有限公司 Smoke and fire automatic detection and early warning method based on YOLOV3
CN112488213A (en) * 2020-12-03 2021-03-12 杭州电子科技大学 Fire picture classification method based on multi-scale feature learning network
CN112906463A (en) * 2021-01-15 2021-06-04 上海东普信息科技有限公司 Image-based fire detection method, device, equipment and storage medium
CN112735083A (en) * 2021-01-19 2021-04-30 齐鲁工业大学 Embedded gateway for flame detection by using YOLOv5 and OpenVINO and deployment method thereof
CN113033553A (en) * 2021-03-22 2021-06-25 深圳市安软科技股份有限公司 Fire detection method and device based on multi-mode fusion, related equipment and storage medium
CN113743190A (en) * 2021-07-13 2021-12-03 淮阴工学院 Flame detection method and system based on BiHR-Net and YOLOv3-head
CN113743190B (en) * 2021-07-13 2023-12-22 淮阴工学院 Flame detection method and system based on BiHR-Net and YOLOv3-head
CN113706815A (en) * 2021-08-31 2021-11-26 沈阳二一三电子科技有限公司 Vehicle fire identification method combining YOLOv3 and optical flow method
CN113903009A (en) * 2021-12-10 2022-01-07 华东交通大学 Railway foreign matter detection method and system based on improved YOLOv3 network
CN113903009B (en) * 2021-12-10 2022-07-05 华东交通大学 Railway foreign matter detection method and system based on improved YOLOv3 network
CN114022850A (en) * 2022-01-07 2022-02-08 深圳市安软慧视科技有限公司 Transformer substation fire monitoring method and system and related equipment
CN114022850B (en) * 2022-01-07 2022-05-03 深圳市安软慧视科技有限公司 Transformer substation fire monitoring method and system and related equipment
CN114998783A (en) * 2022-05-19 2022-09-02 安徽合为智能科技有限公司 Front-end equipment for video analysis of smoke, fire and personnel behaviors
CN115223324A (en) * 2022-06-16 2022-10-21 中电云数智科技有限公司 Smog real-time monitoring method and system
CN115331384A (en) * 2022-08-22 2022-11-11 重庆科技学院 Operation platform fire accident early warning system based on edge calculation
CN115331384B (en) * 2022-08-22 2023-06-30 重庆科技学院 Fire accident early warning system of operation platform based on edge calculation
CN116503715A (en) * 2023-06-12 2023-07-28 南京信息工程大学 Forest fire detection method based on cascade network
CN116503715B (en) * 2023-06-12 2024-01-23 南京信息工程大学 Forest fire detection method based on cascade network
CN116978207A (en) * 2023-09-20 2023-10-31 张家港江苏科技大学产业技术研究院 Multifunctional laboratory safety monitoring and early warning system
CN116978207B (en) * 2023-09-20 2023-12-01 张家港江苏科技大学产业技术研究院 Multifunctional laboratory safety monitoring and early warning system

Similar Documents

Publication Publication Date Title
CN111091072A (en) YOLOv3-based flame and dense smoke detection method
Shen et al. Flame detection using deep learning
WO2020173226A1 (en) Spatial-temporal behavior detection method
CN106682635A (en) Smoke detecting method based on random forest characteristic selection
CN113807276B (en) Smoking behavior identification method based on optimized YOLOv4 model
CN112699801B (en) Fire identification method and system based on video image
CN110827505A (en) Smoke segmentation method based on deep learning
CN111985365A (en) Straw burning monitoring method and system based on target detection technology
CN113850242B (en) Storage abnormal target detection method and system based on deep learning algorithm
CN114202646A (en) Infrared image smoking detection method and system based on deep learning
CN110263654A (en) A kind of flame detecting method, device and embedded device
CN109086803A (en) A kind of haze visibility detection system and method based on deep learning and the personalized factor
CN111815576B (en) Method, device, equipment and storage medium for detecting corrosion condition of metal part
CN116259002A (en) Human body dangerous behavior analysis method based on video
CN115719463A (en) Smoke and fire detection method based on super-resolution reconstruction and adaptive extrusion excitation
CN113657305B (en) Video-based intelligent detection method for black smoke vehicle and ringeman blackness level
Cao et al. YOLO-SF: YOLO for fire segmentation detection
KR102602439B1 (en) Method for detecting rip current using CCTV image based on artificial intelligence and apparatus thereof
CN112613483A (en) Outdoor fire early warning method based on semantic segmentation and recognition
CN110796008A (en) Early fire detection method based on video image
CN113299034B (en) Flame identification early warning method suitable for multiple scenes
CN114463681A (en) Fire detection method based on video monitoring platform
CN115082817A (en) Flame identification and detection method based on improved convolutional neural network
Shen et al. Lfnet: Lightweight fire smoke detection for uncertain surveillance environment
CN116468974B (en) Smoke detection method, device and storage medium based on image generation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20200501)