CN112132090A - Smoke and fire automatic detection and early warning method based on YOLOV3 - Google Patents
- Publication number: CN112132090A (application CN202011054961.2A)
- Authority: CN (China)
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/23 — Clustering techniques
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
Abstract
The invention provides an automatic smoke and fire detection and early warning method based on YOLOV3, which comprises the following steps: S1, constructing a sample set for training a detection model; S2, building a deep learning target detection network architecture based on YOLOV3; S3, configuring training parameters and training the detection model; S4, acquiring image information to be detected: image frames of the scene video are acquired from the monitoring equipment of the scene to be detected and processed frame by frame with an image preprocessing method; S5, detecting smoke and flame targets: the video image frames processed in step S4 are sent to the detection model trained in step S3 for target detection, and the detection result is output; S6, post-processing the detection result; and S7, continuously analyzing the detection results of multiple frames, confirming that the target is valid, and outputting an alarm. The automatic smoke and fire detection and early warning method based on YOLOV3 can realize second-level detection and alarm, greatly shortening fire early warning time, promptly notifying personnel so that rescue can begin quickly, and effectively preventing fire from spreading.
Description
Technical Field
The invention belongs to the technical field of video monitoring, and particularly relates to an automatic smoke and fire detection and early warning method based on YOLOV3.
Background
Smoke and fire detection (detection of smoke and flame) refers to identifying and localizing smoke and fire in surveillance video images, and is of great significance in the field of security monitoring.
Fire is one of the most common and most harmful disasters: it often causes huge losses of resources and property, and possibly casualties. Fire and smoke prevention, control, and early warning for deep forests, unmanned warehouses, public facilities, inflammable and explosive materials, and other important areas are therefore of the utmost importance. Timely early warning can quickly notify on-duty personnel and help firefighters handle a fire crisis in time, preventing the outbreak and spread of fire accidents as early as possible and reducing losses to the greatest extent. Accurate and rapid detection of smoke and flame is therefore essential.
Traditional detection methods mainly rely on sensors that physically sample temperature, transparency, smoke, and the like. However, such sensors are mainly suitable for close-range sensing and are easily limited by the installation site; their usability, detection accuracy, and reliability are difficult to guarantee, and safety personnel cannot view the scene remotely or judge the specific on-site situation in time. Video-based detection can realize remote viewing through the transmission of remote pictures, and has therefore been developed and applied rapidly. Conventional video detection methods generally extract the motion region of a video image and classify it according to the characteristics of the RGB or HSV channel components of a flame region; this has some effect for distinguishing flames but cannot be used for smoke. Another approach predicts the image block by block: sliding windows of different sizes extract regions, which are then sent to various CNNs (convolutional neural networks) for classification to judge whether flame or smoke is present. However, this method is inefficient and insufficiently accurate, and its localization of smoke and flame is imprecise, because the sliding windows are selected in advance with fixed size and shape, while the shapes of smoke and flame are not fixed.
Disclosure of Invention
In view of the above, in order to overcome the above drawbacks, the present invention aims to provide an automatic smoke and fire detection and early warning method based on YOLOV3.
In order to achieve this purpose, the technical scheme of the invention is realized as follows:
An automatic smoke and fire detection and early warning method based on YOLOV3 comprises the following steps:
s1, constructing a sample set for training a detection model; firstly, collecting source material and processing it with image data enhancement techniques to obtain a training sample data set; then, marking the target frame of each detection target in the training sample pictures with a sample labeling tool, setting the category of each target frame, and storing the sample pictures and the generated label files together as the sample set; finally, clustering the target frames labeled in all training samples with a clustering algorithm;
s2, building a deep learning target detection network architecture based on YOLOV3; a pruned Darknet-53 convolutional neural network is adopted as the backbone to extract features from the input image, and the feature maps generated by the network are fed into the YOLOV3 detection model;
s3, configuring training parameters and training a detection model;
s4, acquiring image information to be detected; acquiring image frames of a scene video image from monitoring equipment of a scene to be detected, and processing the frame-by-frame images by using an image preprocessing method;
s5, detecting smoke and flame targets; sending the video image frames processed in the step S4 to the detection model trained in advance in the step S3 for target detection, and outputting a detection result;
s6, post-processing the detection result; the processing comprises: judging whether each target detected in step S5 is a valid target according to its confidence, treating targets whose confidence is below a threshold as false detections and discarding them; judging whether each target is in the designated area according to its coordinates, and excluding it if not; setting an IOU parameter to judge the degree of overlap among overlapping targets in the detection result, removing the lower-confidence boxes among those with large overlap so that only the highest-confidence box among boxes likely detecting the same target is kept; and judging the size of each target from the coordinates of the detection result, removing results that do not conform to the size range of the actual scene;
s7, continuously analyzing the detection results of multiple frames, confirming that the target is valid, and outputting an alarm; the detection results of consecutive multi-frame images are analyzed to judge whether a valid target exists, an alarm signal is output in time, and relevant information is recorded.
Further, in step S1, the source material includes, but is not limited to, smoke and flame videos and pictures;
the detection target is smoke or flame;
the target frame is a circumscribed rectangular frame.
Further, the specific method of step S3 is as follows:
setting the hyper-parameters of the training detection model: the initial learning rate is set to 0.001, each training batch to 64, and the total number of training epochs to 140; model training optimizes the network weight parameters with the SGD algorithm according to the BP principle, iteratively training until the network loss falls to a low value; after training is finished, a model for detecting smoke and flame is obtained.
further, in step S5, the detection result output by the model includes the category to which the target belongs, the coordinates of the circumscribed rectangular frame of the target, and the corresponding confidence level;
further, the loss value is calculated as follows:
loss of the training network is divided into confidence loss Lconf(O, C), class loss Lcla(o, c) and location loss Lloc(L, g), the total loss L (O, O, C, C, L, g) is a weighted sum of the three. Information b (x, y, w, h, C, C) of the prediction box output by the network1,c2) Calculating the loss with the true value g (x, y, w, h) to obtainFinal loss, where (x, y, w, h) represents the horizontal and vertical coordinates of the midpoint of the rectangle circumscribing the target and the width and height of the rectangle, C represents the probability that the predicted frame is located at a position of a target, and C represents the probability that the predicted frame is located at a position of a target1,c2Representing the probability of the class to which the object belongs.
Calculating the total loss value:
L(O,o,C,c,l,g)=λ1Lconf(O,C)+λ2Lcla(o,c)+λ3Lloc(l,g)
wherein λ1、λ2And λ3Weighting weights of confidence coefficient loss, category loss and positioning loss are respectively taken as 0.3,0.2 and 0.5;
and (3) confidence coefficient loss calculation:
wherein O isiIndicating whether the current position has a target or not, wherein the current position has a true value, the current position has a target of 1, and otherwise, the current position has 0 and CiIs the probability that the current position of the model prediction output is an object;
category loss calculation:
cij=Sigmoid(cij)
wherein o isijWhether the jth category exists at the position of the ith prediction frame or not is represented as the true value, the existence is 1, otherwise, the existence is 0, cijThe probability that the jth category exists in the position where the ith prediction frame is located in the prediction result is represented;
and (3) calculating the positioning loss:
wherein (g)x,gy,gw,gh) Is manually marked target box information g (x, y, w, h), subscript i represents the ith box and belongs to the true value, (b)x,by,bw,bh) Coordinate information of the detection box representing the network prediction, and (g)x,gy,gw,gh) Corresponding to (c)x,cy,pw,ph) Information indicating preset anchors, wherein (c)x,cy) Indicating the position of the center point of anchors on the feature map, (p)w,ph) The width and the height of the preset anchors are represented and respectively correspond to the marking information and the prediction information; anchors is the result of counting and clustering the width and height of the target frame marked in the training data by a clustering algorithm during the training of Yolov 3.
Compared with the prior art, the automatic smoke and fire detection and early warning method based on the YOLOV3 has the following advantages:
(1) The automatic smoke and fire detection and early warning method based on YOLOV3 provides an intuitive, easy-to-use video picture; with the camera mounted outside the monitoring area, the spreading trend of a fire can be predicted from a high viewpoint after an alarm occurs, and disaster relief can be commanded remotely. The system realizes non-contact detection, is not limited by space or environmental conditions, and is suitable for various scenes (deep forests, fields, factories, warehouses, supermarkets, houses, and the like). It can realize second-level detection and alarm, greatly shortening fire early warning time, promptly notifying personnel so that rescue can begin quickly, and effectively preventing fire from spreading.
(2) The automatic smoke and fire detection and early warning method based on YOLOV3 detects smoke and fire intelligently, saving cost. It performs real-time intelligent analysis continuously, 24 hours a day, automatically detecting and identifying smoke and flame and raising alarms automatically, without relying on manual inspection; this saves labor cost and improves working efficiency. A single on-site camera, without any other sensor equipment, can cover large and small scenes, far and near distances, and various angles over a wide range, saving equipment cost.
(3) The automatic smoke and fire detection and early warning method based on YOLOV3 uses deep learning and detects accurately. The algorithm combines state-of-the-art deep learning technology: based on the Darknet-53 deep neural network and the YOLOV3 deep learning target detection algorithm, it features a high detection rate, high detection speed, good real-time performance, and stable performance, meeting the requirements of practical application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of an automatic detection and early warning method according to an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention. Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art through specific situations.
The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
The algorithm is suitable for automatic detection and early warning of smoke and flame in various scenes. It aims to work 24 hours a day under unattended conditions, automatically analyze possible abnormal smoke and open fire in the monitoring area, allow remote viewing of the real-time scene, and make it convenient to judge the on-site situation from an intuitive picture and to direct and schedule rescue.
The specific implementation method comprises the following steps:
1. Training the detection model. The collected source material is processed to obtain sample pictures for training; a deep learning neural network is then trained on the basis of the Darknet framework to obtain a model for detecting smoke and flame.
First, a set of training samples is constructed. Samples can be collected by downloading videos and pictures from the web with a crawler tool, or by downloading published smoke and fire data sets. The sample data is labeled manually: the circumscribed rectangular frames of smoke and flame (the detection targets) are marked in the training sample pictures and their categories (Smoke and Fire) are set; the images and labels are then enhanced with multiple image data enhancement techniques; finally, the sample pictures and the generated label files are stored together as the sample set. The target frames labeled in all training samples are clustered with a clustering algorithm to obtain 9 anchors; each anchor is in fact a cluster center of target-frame widths and heights, used for regressing the target size during training;
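The anchor-clustering step above is commonly implemented as k-means over the labeled box widths and heights, with 1 − IoU as the distance measure. A minimal sketch under that assumption (the function names and sample boxes are illustrative, not taken from the patent):

```python
import random

def iou_wh(box, anchor):
    """IoU of two (w, h) pairs, both assumed anchored at the same center."""
    inter = min(box[0], anchor[0]) * min(box[1], anchor[1])
    union = box[0] * box[1] + anchor[0] * anchor[1] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster labeled (w, h) pairs into k anchors using 1 - IoU as distance."""
    random.seed(seed)
    anchors = random.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for box in boxes:
            # Assign each box to the anchor it overlaps most.
            best = max(range(k), key=lambda i: iou_wh(box, anchors[i]))
            clusters[best].append(box)
        # Move each anchor to the mean width/height of its cluster.
        for i, members in enumerate(clusters):
            if members:
                anchors[i] = (sum(w for w, _ in members) / len(members),
                              sum(h for _, h in members) / len(members))
    return sorted(anchors, key=lambda wh: wh[0] * wh[1])
```

The patent uses k = 9 anchors; smaller k is shown in testing only for brevity.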
Next, the deep learning target detection network architecture based on YOLOV3 is built. The pruned Darknet-53 convolutional neural network is used as the backbone to extract features from the input image; the high-dimensional feature maps generated by the network are used by the yolo layers of YOLOV3 for prediction and loss calculation. Pruning the Darknet-53 network halves the number of output channels of every layer, reducing the amount of computation and increasing the processing speed;
Finally, the training parameters are configured and the detection model is trained. The base learning rate of the network is set to 0.001, samples are sent to the network in batches for training with a batch size of 64, and the total number of training epochs is set to 140. At certain epochs (for example, epoch 40) the learning rate is multiplied by 0.1, three times in total, so that the learning rate decreases during training and model training converges and stabilizes quickly. Model training follows the BP (back propagation) principle: the loss computed from the prediction results output by the network and the actual label results is back-propagated to update the network weights with the SGD (stochastic gradient descent) algorithm, and through continuous iterative training the network loss gradually decreases and stabilizes. After training is finished, a model for detecting smoke and flame is obtained;
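The learning-rate schedule described above (base rate 0.001, multiplied by 0.1 three times over 140 epochs) can be sketched as a step schedule. The second and third milestone epochs here are assumptions, since only the first decay point (epoch 40) is given:

```python
def learning_rate(epoch, base_lr=0.001, milestones=(40, 80, 120), gamma=0.1):
    """Step schedule: multiply the base rate by `gamma` at each milestone epoch.

    Milestones after the first are assumptions; the text gives only the first
    decay point (epoch 40) and says the decay is applied three times in total.
    """
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

The same schedule corresponds to a `MultiStepLR`-style configuration in common training frameworks.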
2. and detecting the picture and outputting a detection result. And detecting and analyzing the actual monitoring scene video frame by using the detection model generated by the last training step, and acquiring the confidence coefficient and the position information of the detection target in the picture.
First comes equipment selection and installation. The algorithm is suitable for various scenes; the camera can be mounted at a high point of the scene to be monitored so that it looks obliquely down on the monitoring area. The higher and less obstructed the mounting position, the larger the area that can be monitored. If multi-angle and long-range monitoring is needed, a zoom PTZ dome camera can be used to realize 360-degree rotation and high-magnification zoom; by setting preset positions in advance, all coverable areas can be cruised and monitored automatically;
the detection area is then set. Setting a detection area in a video picture, and drawing a polygonal area as a designated area according to a clockwise or counterclockwise sequence. Only the target in the designated area is detected to be effective;
Finally, the model detects the picture information in real time and outputs the result. Each obtained video frame image is simply preprocessed and then analyzed by the detection model to obtain the detection result of the current frame. If a valid target exists in the picture, the model outputs the coordinates of the target's circumscribed rectangular frame, the category of the target, and the corresponding confidence;
3. Processing and analyzing the detection result. The detection result output by the model is merely the information of all detected targets in the picture; not all of them can be output as valid targets. Reasonable judgments must therefore be made according to the confidence, state, and position of each target, the preset confidence threshold, and the detection area before the final result is output. This mainly comprises two steps.
First, the detection result is post-processed. A confidence threshold is set in advance, and whether each target detected in the previous step is valid is judged from its confidence. The confidence represents how probable it is that the detection result is a valid target: the higher the confidence, the higher the probability, with a maximum of 1; conversely, a result with low confidence may be a false detection and needs no further processing. Whether each target is in the designated area is judged from its coordinates, mainly by its center point: if the center point is inside the designated area the target is judged to be in the area, otherwise it is excluded. An IOU (intersection over union) parameter is set to judge the degree of overlap among overlapping detection boxes in the result: if several boxes overlap heavily, they have likely detected the same target, so the lower-confidence boxes among them are removed and only the highest-confidence box is kept, outputting a single valid result. The size of each target in the picture is judged from the coordinates of its target frame, and results of abnormal size that do not conform to the size range of the actual scene are removed. Finally, the valid targets meeting all conditions are output;
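The confidence-threshold and IOU-overlap filtering described above amounts to score filtering followed by non-maximum suppression. A minimal sketch, with threshold values chosen for illustration rather than taken from the patent:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def postprocess(detections, conf_thresh=0.5, iou_thresh=0.45):
    """Drop low-confidence detections, then keep only the highest-confidence
    box among heavily overlapping ones (non-maximum suppression)."""
    kept = []
    candidates = sorted((d for d in detections if d["conf"] >= conf_thresh),
                        key=lambda d: d["conf"], reverse=True)
    for det in candidates:
        if all(iou(det["box"], k["box"]) < iou_thresh for k in kept):
            kept.append(det)
    return kept
```

The area and size checks from the text would be additional filters applied before or after this step.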
and then comprehensively analyzing the continuous video frame images to determine a final counting result. And analyzing the detection result of the continuous multi-frame image, and judging whether the detection result is an effective target. And comparing the positions of the targets to determine that the continuous multi-frame images have the targets at the same positions to be detected and the target area has the characteristics of increasing trend and the like, confirming the targets as effective results, sending alarm signals to safety personnel in time and recording related information.
The loss value is calculated as shown in the following formulas. The loss of the training network is divided into a confidence loss L_conf(O, C), a class loss L_cla(o, c), and a localization loss L_loc(l, g); the total loss L(O, o, C, c, l, g) is a weighted sum of the three. The prediction box information b(x, y, w, h, C, c1, c2) output by the network is compared with the ground-truth value g(x, y, w, h) to obtain the final loss, where (x, y, w, h) are the horizontal and vertical coordinates of the center of the rectangle circumscribing the target and the width and height of that rectangle, C is the probability that the position of the prediction box contains a target, and c1, c2 are the probabilities of the classes to which the target may belong.
Calculating the total loss value:
L(O, o, C, c, l, g) = λ1·L_conf(O, C) + λ2·L_cla(o, c) + λ3·L_loc(l, g)
where λ1, λ2, and λ3 are the weighting factors of the confidence loss, class loss, and localization loss, taken as 0.3, 0.2, and 0.5 respectively.
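The weighted total loss above can be written directly in code; the weights 0.3, 0.2, and 0.5 are those stated, while the function name is illustrative:

```python
def total_loss(l_conf, l_cla, l_loc, weights=(0.3, 0.2, 0.5)):
    """Weighted sum of the confidence, class, and localization losses."""
    w1, w2, w3 = weights
    return w1 * l_conf + w2 * l_cla + w3 * l_loc
```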
Confidence loss calculation:
where O_i indicates whether a target exists at the current position (a ground-truth value: 1 if a target exists, otherwise 0) and C_i is the probability, predicted by the model, that the current position contains a target;
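A sketch of the confidence loss, assuming the standard YOLOV3 binary cross-entropy formulation over the O_i and C_i defined above (the exact formula is an assumption; only the meanings of the symbols are stated):

```python
import math

def confidence_loss(O, C, eps=1e-9):
    """Binary cross-entropy between ground-truth objectness O_i (0 or 1)
    and the predicted object probability C_i, summed over all positions.

    The BCE form is the standard YOLOv3 choice and is assumed here.
    """
    return -sum(o * math.log(c + eps) + (1 - o) * math.log(1 - c + eps)
                for o, c in zip(O, C))
```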
Class loss calculation:
ĉ_ij = Sigmoid(c_ij)
where o_ij indicates whether the j-th category exists at the position of the i-th prediction box (a ground-truth value: 1 if it exists, otherwise 0), and ĉ_ij is the predicted probability that the j-th category exists at the position of the i-th prediction box;
Localization loss calculation:
where (g_x, g_y, g_w, g_h) is the manually labeled target box information g(x, y, w, h) and the subscript i denotes the i-th box of the ground truth; (b_x, b_y, b_w, b_h) are the coordinates of the detection box predicted by the network, corresponding to (g_x, g_y, g_w, g_h); (c_x, c_y, p_w, p_h) describe the preset anchors, where (c_x, c_y) is the position of the anchor center on the feature map and (p_w, p_h) are the preset anchor width and height, corresponding respectively to the label information and the prediction information. The anchors are the result of clustering the widths and heights of the target boxes labeled in the training data with a clustering algorithm when training YOLOV3.
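The relationship between the predicted box (b_x, b_y, b_w, b_h) and the anchor (c_x, c_y, p_w, p_h) follows the standard YOLOV3 decoding, sketched below under that assumption (t_x, t_y, t_w, t_h denote the raw network offsets, which are not named in the text):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Standard YOLOv3 decoding of raw offsets into a box on the feature map:
        bx = sigmoid(tx) + cx,  by = sigmoid(ty) + cy
        bw = pw * exp(tw),      bh = ph * exp(th)
    where (cx, cy) is the anchor cell position and (pw, ph) the anchor size.
    """
    return (sigmoid(tx) + cx, sigmoid(ty) + cy,
            pw * math.exp(tw), ph * math.exp(th))
```

With zero offsets the decoded box sits at the anchor cell with exactly the anchor's width and height.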
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (5)
1. An automatic smoke and fire detection and early warning method based on YOLOV3, characterized by comprising the following steps:
s1, constructing a sample set for training a detection model; firstly, collecting source material and processing it with image data enhancement techniques to obtain a training sample data set; then, marking the target frame of each detection target in the training sample pictures with a sample labeling tool, setting the category of each target frame, and storing the sample pictures and the generated label files together as the sample set; finally, clustering the target frames labeled in all training samples with a clustering algorithm;
s2, building a deep learning target detection network architecture based on YOLOV3; a pruned Darknet-53 convolutional neural network is adopted as the backbone to extract features from the input image, and the feature maps generated by the network are fed into the YOLOV3 detection model;
s3, configuring training parameters and training a detection model;
s4, acquiring image information to be detected; acquiring image frames of a scene video image from monitoring equipment of a scene to be detected, and processing the frame-by-frame images by using an image preprocessing method;
s5, detecting smoke and flame targets; sending the video image frames processed in the step S4 to the detection model trained in advance in the step S3 for target detection, and outputting a detection result;
s6, post-processing the detection result; the processing method comprises the steps of judging whether the target is an effective target or not according to the confidence degrees of the targets detected in the step S5, and if the confidence degrees are lower than a threshold value, performing false detection, and not performing processing; judging whether the target is in the designated area according to the coordinates of the target, and if not, excluding the target; setting an IOU parameter to judge the overlapping degree of the overlapped targets existing in the detection result, removing the lower confidence value in the frame with larger overlapping degree, and only keeping the highest frame with higher overlapping degree which can detect the same target; judging the size of the target according to the coordinates of the detection result, and removing the detection result which does not conform to the size range of the actual scene;
s7, continuously analyzing the multi-frame image detection result, confirming that the target is effective and outputting an alarm; and analyzing the detection result of the continuous multi-frame image, judging whether the detection result is an effective target, outputting an alarm signal in time, and recording related information.
2. The YOLOV3-based automatic smoke and fire detection and early-warning method according to claim 1, wherein: in step S1, the raw material includes, but is not limited to, smoke and flame videos and pictures;
the detection target is smoke or flame;
the target frame is a circumscribed rectangle frame.
3. The YOLOV3-based automatic smoke and fire detection and early-warning method according to claim 1, wherein the specific method of step S3 is as follows:
setting the hyper-parameters of a training detection model, setting the initial learning rate to be 0.001, setting each training batch to be 64, and setting the total iteration number of training samples to be 140 rounds; model training optimizes network weight parameters by using an SGD algorithm according to a BP principle, carries out iterative training, reduces the loss value of the network to a lower value, and acquires a model for detecting smoke and flame after training.
4. The YOLOV3-based automatic smoke and fire detection and early-warning method according to claim 1, wherein: in step S5, the detection result output by the model comprises the class to which the target belongs, the coordinates of the target's circumscribed rectangular box, and the corresponding confidence.
5. The YOLOV3-based automatic smoke and fire detection and early-warning method according to claim 3, wherein the loss value is calculated as follows:
loss of the training network is divided into confidence loss Lconf(O, C), class loss Lcla(o, c) and location loss Lloc(L, g), the total loss L (O, O, C, C, L, g) is a weighted sum of the three. Information b (x, y, w, h, C, C) of the prediction box output by the network1,c2) Calculating loss with the real value g (x, y, w, h) to obtain the final loss, wherein (x, y, w, h) respectively represents the horizontal coordinate and the vertical coordinate of the midpoint of the rectangle circumscribing the target and the width and the height of the rectangle, C represents the probability that the position of the prediction frame is a target, and C represents the probability that the position of the prediction frame is a target1,c2Representing the probability of the class to which the object belongs.
Calculating the total loss value:
L(O, o, C, c, l, g) = λ1·L_conf(O, C) + λ2·L_cla(o, c) + λ3·L_loc(l, g)
wherein λ1, λ2 and λ3 are the weights of the confidence loss, class loss and localization loss respectively, taken as 0.3, 0.2 and 0.5;
Confidence loss calculation:
wherein O_i indicates whether a target exists at the current position (a ground-truth value: 1 if a target exists, 0 otherwise), and C_i is the probability, predicted by the model, that the current position contains a target;
category loss calculation:
c_ij = Sigmoid(c_ij)
wherein o_ij indicates whether the jth class exists at the position of the ith prediction box (a ground-truth value: 1 if it exists, 0 otherwise), and c_ij is the predicted probability that the jth class exists at the position of the ith prediction box;
Localization loss calculation:
wherein (g_x, g_y, g_w, g_h) is the manually annotated target-box information g(x, y, w, h), the subscript i denotes the ith box and marks a ground-truth value, (b_x, b_y, b_w, b_h) is the coordinate information of the detection box predicted by the network, corresponding to (g_x, g_y, g_w, g_h), and (c_x, c_y, p_w, p_h) is the information of the preset anchors, wherein (c_x, c_y) is the position of the anchor's centre point on the feature map and (p_w, p_h) are the width and height of the preset anchor, corresponding to the annotation information and the prediction information respectively; the anchors are the result of clustering, with a clustering algorithm, the widths and heights of the target boxes annotated in the training data during YOLOv3 training.
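The equation images for the three loss terms are not reproduced in this text. Assuming standard YOLOv3-style binary cross-entropy for the confidence and class terms, the weighted total of claim 5 (λ1 = 0.3, λ2 = 0.2, λ3 = 0.5) can be sketched as:

```python
import numpy as np

def bce(p, t):
    """Binary cross-entropy between predicted probabilities p and targets t."""
    p = np.clip(p, 1e-7, 1 - 1e-7)  # guard against log(0)
    return -(t * np.log(p) + (1 - t) * np.log(1 - p))

def total_loss(obj_t, obj_p, cls_t, cls_p, loc_loss, lam=(0.3, 0.2, 0.5)):
    """L = λ1·L_conf + λ2·L_cla + λ3·L_loc with λ = (0.3, 0.2, 0.5) from claim 5.
    obj_*: target/predicted objectness; cls_*: target/predicted class probabilities;
    loc_loss is assumed computed separately, since its equation is not in the text."""
    l_conf = bce(obj_p, obj_t).sum()
    l_cla = bce(cls_p, cls_t).sum()
    return lam[0] * l_conf + lam[1] * l_cla + lam[2] * loc_loss
```

The BCE terms here stand in for the patent's unreproduced equations; only the weights and the three-term structure are taken from the claim itself.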
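The symbols (b_x, b_y, b_w, b_h), (c_x, c_y) and (p_w, p_h) above match the standard YOLOv3 box decoding, b_x = σ(t_x) + c_x, b_y = σ(t_y) + c_y, b_w = p_w·e^(t_w), b_h = p_h·e^(t_h); the claim's own equation is not reproduced, so this decoding is assumed from the YOLOv3 formulation:

```python
import numpy as np

def decode_box(t, anchor_cxcy, anchor_wh):
    """Decode raw outputs t = (t_x, t_y, t_w, t_h) against an anchor at cell
    offset (c_x, c_y) with preset size (p_w, p_h), in feature-map units."""
    tx, ty, tw, th = t
    cx, cy = anchor_cxcy
    pw, ph = anchor_wh
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    bx = sig(tx) + cx          # centre stays inside the anchor's grid cell
    by = sig(ty) + cy
    bw = pw * np.exp(tw)       # width/height scale the preset anchor
    bh = ph * np.exp(th)
    return bx, by, bw, bh
```

With zero raw outputs the prediction collapses onto the anchor itself (cell centre, preset width and height), which is why well-clustered anchors speed up convergence.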
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011054961.2A CN112132090A (en) | 2020-09-28 | 2020-09-28 | Smoke and fire automatic detection and early warning method based on YOLOV3 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112132090A true CN112132090A (en) | 2020-12-25 |
Family
ID=73843210
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011054961.2A Pending CN112132090A (en) | 2020-09-28 | 2020-09-28 | Smoke and fire automatic detection and early warning method based on YOLOV3 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112132090A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105590401A (en) * | 2015-12-15 | 2016-05-18 | 天维尔信息科技股份有限公司 | Early-warning linkage method and system based on video images |
CN110473375A (en) * | 2019-08-14 | 2019-11-19 | 成都睿云物联科技有限公司 | Monitoring method, device, equipment and the system of forest fire |
CN110969205A (en) * | 2019-11-29 | 2020-04-07 | 南京恩博科技有限公司 | Forest smoke and fire detection method based on target detection, storage medium and equipment |
CN111091072A (en) * | 2019-11-29 | 2020-05-01 | 河海大学 | YOLOv 3-based flame and dense smoke detection method |
CN111709310A (en) * | 2020-05-26 | 2020-09-25 | 重庆大学 | Gesture tracking and recognition method based on deep learning |
CN111680632A (en) * | 2020-06-10 | 2020-09-18 | 深延科技(北京)有限公司 | Smoke and fire detection method and system based on deep learning convolutional neural network |
Non-Patent Citations (1)
Title |
---|
Luo Xiaoquan et al.: "Improved YOLOV3 Fire Detection Method", Computer Engineering and Applications (《计算机工程与应用》), vol. 56, no. 17, pages 187-196 *
Cited By (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112598071A (en) * | 2020-12-28 | 2021-04-02 | 北京市商汤科技开发有限公司 | Open fire identification method, device, equipment and storage medium |
CN112699801A (en) * | 2020-12-30 | 2021-04-23 | 上海船舶电子设备研究所(中国船舶重工集团公司第七二六研究所) | Fire identification method and system based on video image |
CN113627223A (en) * | 2021-01-07 | 2021-11-09 | 广州中国科学院软件应用技术研究所 | Flame detection algorithm based on deep learning target detection and classification technology |
CN112861635A (en) * | 2021-01-11 | 2021-05-28 | 西北工业大学 | Fire and smoke real-time detection method based on deep learning |
CN112861635B (en) * | 2021-01-11 | 2024-05-14 | 西北工业大学 | Fire disaster and smoke real-time detection method based on deep learning |
CN112906463A (en) * | 2021-01-15 | 2021-06-04 | 上海东普信息科技有限公司 | Image-based fire detection method, device, equipment and storage medium |
CN112699858A (en) * | 2021-03-24 | 2021-04-23 | 中国人民解放军国防科技大学 | Unmanned platform smoke fog sensing method and system, computer equipment and storage medium |
CN112699858B (en) * | 2021-03-24 | 2021-05-18 | 中国人民解放军国防科技大学 | Unmanned platform smoke fog sensing method and system, computer equipment and storage medium |
CN112884090A (en) * | 2021-04-14 | 2021-06-01 | 安徽理工大学 | Fire detection and identification method based on improved YOLOv3 |
CN113191274A (en) * | 2021-04-30 | 2021-07-30 | 西安聚全网络科技有限公司 | Oil field video intelligent safety event detection method and system based on neural network |
CN113192038B (en) * | 2021-05-07 | 2022-08-19 | 北京科技大学 | Method for recognizing and monitoring abnormal smoke and fire in existing flame environment based on deep learning |
CN113192038A (en) * | 2021-05-07 | 2021-07-30 | 北京科技大学 | Method for identifying and monitoring abnormal smoke and fire in existing flame environment based on deep learning |
CN113343779A (en) * | 2021-05-14 | 2021-09-03 | 南方电网调峰调频发电有限公司 | Environment anomaly detection method and device, computer equipment and storage medium |
CN113011405B (en) * | 2021-05-25 | 2021-08-13 | 南京柠瑛智能科技有限公司 | Method for solving multi-frame overlapping error of ground object target identification of unmanned aerial vehicle |
CN113408361A (en) * | 2021-05-25 | 2021-09-17 | 中国矿业大学 | Deep learning-based mining conveyor belt bulk material detection method and system |
CN113408361B (en) * | 2021-05-25 | 2023-09-19 | 中国矿业大学 | Mining conveyor belt massive material detection method and system based on deep learning |
CN113011405A (en) * | 2021-05-25 | 2021-06-22 | 南京柠瑛智能科技有限公司 | Method for solving multi-frame overlapping error of ground object target identification of unmanned aerial vehicle |
CN113379999A (en) * | 2021-06-22 | 2021-09-10 | 徐州才聚智能科技有限公司 | Fire detection method and device, electronic equipment and storage medium |
CN113379999B (en) * | 2021-06-22 | 2024-05-24 | 徐州才聚智能科技有限公司 | Fire detection method, device, electronic equipment and storage medium |
TWI807354B (en) * | 2021-06-28 | 2023-07-01 | 南亞塑膠工業股份有限公司 | Fire detection system and fire detection method based on artificial intelligence and image recognition |
CN113469057A (en) * | 2021-07-02 | 2021-10-01 | 中南大学 | Fire hole video self-adaptive detection method, device, equipment and medium |
CN113469057B (en) * | 2021-07-02 | 2023-04-28 | 中南大学 | Fire eye video self-adaptive detection method, device, equipment and medium |
CN113553948A (en) * | 2021-07-23 | 2021-10-26 | 中远海运科技(北京)有限公司 | Automatic recognition and counting method for tobacco insects and computer readable medium |
CN113486857A (en) * | 2021-08-03 | 2021-10-08 | 云南大学 | Ascending safety detection method and system based on YOLOv4 |
CN113486857B (en) * | 2021-08-03 | 2023-05-12 | 云南大学 | YOLOv 4-based ascending safety detection method and system |
CN113688748B (en) * | 2021-08-27 | 2023-08-18 | 武汉大千信息技术有限公司 | Fire detection model and method |
CN113688748A (en) * | 2021-08-27 | 2021-11-23 | 武汉大千信息技术有限公司 | Fire detection model and method |
CN113706815A (en) * | 2021-08-31 | 2021-11-26 | 沈阳二一三电子科技有限公司 | Vehicle fire identification method combining YOLOv3 and optical flow method |
CN113776408A (en) * | 2021-09-13 | 2021-12-10 | 北京邮电大学 | Reading method for gate opening ruler |
CN113743378B (en) * | 2021-11-03 | 2022-02-08 | 航天宏图信息技术股份有限公司 | Fire monitoring method and device based on video |
CN113743378A (en) * | 2021-11-03 | 2021-12-03 | 航天宏图信息技术股份有限公司 | Fire monitoring method and device based on video |
CN114359797A (en) * | 2021-12-29 | 2022-04-15 | 福建天晴数码有限公司 | Construction site night abnormity real-time detection method based on GAN network |
CN114626439A (en) * | 2022-02-21 | 2022-06-14 | 华南理工大学 | Transmission line peripheral smoke and fire detection method based on improved YOLOv4 |
CN114998783A (en) * | 2022-05-19 | 2022-09-02 | 安徽合为智能科技有限公司 | Front-end equipment for video analysis of smoke, fire and personnel behaviors |
CN114943923A (en) * | 2022-06-17 | 2022-08-26 | 中国人民解放军陆军炮兵防空兵学院 | Method and system for recognizing explosion flare smoke of cannonball based on video of deep learning |
CN114943923B (en) * | 2022-06-17 | 2022-12-23 | 中国人民解放军陆军炮兵防空兵学院 | Method and system for recognizing explosion flare smoke of cannonball based on video of deep learning |
WO2024022059A1 (en) * | 2022-07-29 | 2024-02-01 | 京东方科技集团股份有限公司 | Environment detection and alarming method and apparatus, computer device, and storage medium |
CN115861922A (en) * | 2022-11-23 | 2023-03-28 | 南京恩博科技有限公司 | Sparse smoke and fire detection method and device, computer equipment and storage medium |
CN115861922B (en) * | 2022-11-23 | 2023-10-03 | 南京恩博科技有限公司 | Sparse smoke detection method and device, computer equipment and storage medium |
CN117197978A (en) * | 2023-04-21 | 2023-12-08 | 中国消防救援学院 | Forest fire monitoring and early warning system based on deep learning |
CN116912782A (en) * | 2023-09-14 | 2023-10-20 | 四川泓宝润业工程技术有限公司 | Firework detection method based on overlapping annotation training |
CN116912782B (en) * | 2023-09-14 | 2023-11-14 | 四川泓宝润业工程技术有限公司 | Firework detection method based on overlapping annotation training |
CN117953432A (en) * | 2024-03-26 | 2024-04-30 | 湖北信通通信有限公司 | Intelligent smoke and fire identification method and system based on AI algorithm |
CN117953432B (en) * | 2024-03-26 | 2024-06-11 | 湖北信通通信有限公司 | Intelligent smoke and fire identification method and system based on AI algorithm |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112132090A (en) | Smoke and fire automatic detection and early warning method based on YOLOV3 | |
CN101795395B (en) | System and method for monitoring crowd situation | |
CN110826514A (en) | Construction site violation intelligent identification method based on deep learning | |
CN104599427A (en) | Intelligent image type fire alarming system for highway tunnel | |
CN113963301A (en) | Space-time feature fused video fire and smoke detection method and system | |
US20230196895A1 (en) | Method for monitoring state of wearing safety protective equipment and server for providing the method | |
CN114689058B (en) | Fire evacuation path planning method based on deep learning and hybrid genetic algorithm | |
CN111163294A (en) | Building safety channel monitoring system and method for artificial intelligence target recognition | |
CN114677640A (en) | Intelligent construction site safety monitoring system and method based on machine vision | |
CN111652128B (en) | High-altitude power operation safety monitoring method, system and storage device | |
CN111832450B (en) | Knife holding detection method based on image recognition | |
CN115880231A (en) | Power transmission line hidden danger detection method and system based on deep learning | |
CN112643719A (en) | Tunnel security detection method and system based on inspection robot | |
CN115841730A (en) | Video monitoring system and abnormal event detection method | |
CN111885349A (en) | Pipe rack abnormity detection system and method | |
CN117789394B (en) | Early fire smoke detection method based on motion history image | |
KR101542134B1 (en) | The apparatus and method of surveillance a rock fall based on smart video analytic | |
CN115171006B (en) | Detection method for automatically identifying person entering electric power dangerous area based on deep learning | |
CN116052356A (en) | Intelligent building site monitor platform | |
CN114283367B (en) | Artificial intelligent open fire detection method and system for garden fire early warning | |
CN115546684A (en) | Detection method of elevator video light curtain system for pet leashes | |
CN114694073A (en) | Intelligent detection method and device for wearing condition of safety belt, storage medium and equipment | |
CN114092783A (en) | Dangerous goods detection method based on attention mechanism continuous visual angle | |
CN113191182A (en) | Violent abnormal behavior detection method based on deep learning | |
Fujita et al. | Collapsed Building Detection Using Multiple Object Tracking from Aerial Videos and Analysis of Effective Filming Techniques of Drones |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||