CN112132090A - Smoke and fire automatic detection and early warning method based on YOLOV3
- Publication number: CN112132090A
- Application number: CN202011054961.2A
- Authority: CN (China)
- Prior art keywords: target, detection, training, loss, frame
- Prior art date: 2020-09-28
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/23—Clustering techniques
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
Abstract
The invention provides an automatic smoke and fire detection and early warning method based on YOLOV3, comprising: S1, constructing a sample set for training the detection model; S2, building a YOLOV3-based deep learning object detection network architecture; S3, configuring the training parameters and training the detection model; S4, acquiring the images to be detected, i.e. obtaining image frames of the live video from the monitoring equipment at the scene to be monitored and processing them frame by frame with an image preprocessing method; S5, detecting smoke and flame targets, i.e. feeding the video frames processed in step S4 into the detection model pre-trained in step S3 for target detection and outputting the detection results; S6, post-processing the detection results; S7, analyzing the detection results of consecutive frames, confirming that the target is valid and outputting an alarm. The YOLOV3-based automatic smoke and fire detection and early warning method of the invention achieves detection and alarming within seconds, greatly shortens the fire warning time, enables timely notification and rescue, and effectively prevents the spread of fire.
Description
Technical Field

The invention belongs to the technical field of video surveillance, and in particular relates to an automatic smoke and fire detection and early warning method based on YOLOV3.
Background Art

Smoke and fire (smoke and flame) detection refers to identifying and locating smoke and flames in surveillance video images, and it is of great significance in the field of security monitoring.

Fire is one of the most common and most harmful disasters; it often causes enormous losses of resources and property and may cause casualties. Smoke and fire prevention, control and early warning for deep forests, unattended warehouses, public facilities, flammable and explosive goods, certain important areas and the like is therefore a top priority: timely warning can quickly notify on-duty personnel and help firefighters handle a fire crisis in time, so that the outbreak and spread of fire accidents are prevented as early as possible and losses are minimized. Accurate and rapid detection of smoke and flame is central to this.

Traditional detection methods mainly rely on sensors that physically sample temperature, transparency, smoke and the like. However, such sensors are mainly suitable for short-range sensing and are easily limited by the site, and their ease of use, detection accuracy and reliability are hard to guarantee; without going to the scene, security personnel can hardly judge the specific situation in time. Video-based detection methods, by contrast, allow remote viewing through long-distance transmission of the picture and have therefore developed and spread rapidly. A commonly used video detection method extracts motion regions from the video image and discriminates according to the characteristics of the RGB or HSV channel components of the flame region; this has some effect for flames but cannot be used for smoke. Another approach predicts on image blocks, extracting regions with sliding windows of different sizes and feeding them into various CNNs (convolutional neural networks) for classification to decide whether they contain flame or smoke; this method is inefficient and insufficiently accurate, and it localizes smoke and fire poorly, because the sliding windows are preselected with fixed size and shape while the shape of smoke and flame is not fixed.
Summary of the Invention

In view of this, to overcome the above defects, the present invention aims to provide an automatic smoke and fire detection and early warning method based on YOLOV3.

To achieve the above object, the technical solution of the present invention is realized as follows:

An automatic smoke and fire detection and early warning method based on YOLOV3, comprising:
S1. Construct a sample set for training the detection model: first collect raw material and process it with image data augmentation techniques to obtain the training sample data set; then use an annotation tool to mark the bounding box of each detection target in the training images and set its category, and save the sample images and the generated annotation files as the sample set; cluster the bounding boxes annotated in all training samples with a clustering algorithm;

S2. Build a YOLOV3-based deep learning object detection network architecture: use a pruned Darknet-53 convolutional neural network as the backbone to extract features from the input image, and feed the feature maps generated by the network into the YOLOV3 detection model;

S3. Configure the training parameters and train the detection model;

S4. Acquire the images to be detected: obtain image frames of the live video from the monitoring equipment at the scene to be monitored and process them frame by frame with an image preprocessing method;

S5. Detect smoke and flame targets: feed the video frames processed in step S4 into the detection model pre-trained in step S3 to detect targets and output the detection results;

S6. Post-process the detection results. The processing includes: judging from the confidence of each target detected in step S5 whether it is a valid target; if the confidence is below the threshold the detection may be false and is not processed further; judging from the target coordinates whether the target lies inside the designated region and excluding it if not; for overlapping targets in the detection results, setting an IOU parameter to measure the overlap and removing the lower-confidence boxes among heavily overlapping ones, keeping only the single highest one, since heavily overlapping boxes are likely to be detections of the same target; and judging the target size from the coordinates of the detection result and removing results whose size does not match the realistic range of the scene;

S7. Analyze the detection results of consecutive frames, confirm that the target is valid, and output an alarm: analyze the detection results of multiple consecutive frames to judge whether the target actually exists, output an alarm signal in time, and record the relevant information.
Further, in step S1, the raw material includes, but is not limited to, smoke and flame videos and pictures;

the detection target is smoke or flame;

the target box is a circumscribed rectangular box.

Further, the specific method of step S3 is as follows:

Set the hyperparameters for training the detection model: the initial learning rate is set to 0.001, each training batch is set to 64, and the total number of training epochs is set to 140. Model training follows the BP principle and uses the SGD algorithm to optimize the network weight parameters; iterative training reduces the loss value of the network to a low value, and after training is complete the model for detecting smoke and flame is obtained.

Further, in step S5, the detection results output by the model include the category of each target, the coordinates of its circumscribed rectangular box, and the corresponding confidence;
Further, the loss value is calculated as follows:

The loss of the training network is divided into the confidence loss L_conf(O, C), the class loss L_cla(o, c) and the localization loss L_loc(l, g); the total loss L(O, o, C, c, l, g) is the weighted sum of the three. The loss is computed from the predicted box information b(x, y, w, h, C, c_1, c_2) output by the network and the ground truth g(x, y, w, h), where (x, y, w, h) are the horizontal and vertical coordinates of the center of the target's bounding rectangle and the width and height of the rectangle, C is the probability that the position of the predicted box contains a target, and c_1, c_2 are the probabilities that the target belongs to each class.

Total loss:

L(O, o, C, c, l, g) = λ_1·L_conf(O, C) + λ_2·L_cla(o, c) + λ_3·L_loc(l, g)

where λ_1, λ_2 and λ_3 are the weights of the confidence loss, class loss and localization loss, set to 0.3, 0.2 and 0.5 respectively;
Confidence loss calculation:

where O_i indicates whether there is a target at the current position (the ground-truth value, 1 if there is a target and 0 otherwise), and C_i is the model's predicted probability that the current position contains a target;

Class loss calculation:

ĉ_ij = Sigmoid(c_ij)

where o_ij indicates whether the j-th class is present at the position of the i-th predicted box (the ground-truth value, 1 if present and 0 otherwise), and c_ij is the predicted probability that the j-th class is present at the position of the i-th predicted box;

Localization loss calculation:

where (g_x, g_y, g_w, g_h) is the manually annotated ground-truth box information g(x, y, w, h), the subscript i denotes the i-th box (a ground-truth value), (b_x, b_y, b_w, b_h) are the coordinates of the detection box predicted by the network, corresponding to (g_x, g_y, g_w, g_h), and (c_x, c_y, p_w, p_h) describe the preset anchors, where (c_x, c_y) is the position of the anchor center on the feature map and (p_w, p_h) are the width and height of the preset anchor, corresponding to the annotation and prediction information respectively; the anchors are the result of statistically clustering the widths and heights of the boxes annotated in the training data during YOLOV3 training.
Compared with the prior art, the YOLOV3-based automatic smoke and fire detection and early warning method of the present invention has the following advantages:

(1) The method provides an intuitive, easy-to-use video picture. A camera monitors the designated area, so the real-time situation on site can be viewed remotely at any time and from anywhere; because the camera is installed outside the monitored area, the spread of a fire can be anticipated from a high viewpoint after an alarm and relief work can be commanded remotely. Detection is contactless and not restricted by space or environmental conditions, and the method is applicable to many scenarios (deep forests, fields, factories, warehouses, supermarkets, residences, etc.); it achieves detection and alarming within seconds, greatly shortens the fire warning time, enables timely notification and rescue, and effectively prevents the spread of fire.

(2) The method detects intelligently and saves cost. It performs uninterrupted real-time intelligent analysis around the clock, detects and recognizes smoke and flame automatically, and raises alarms automatically, so it no longer relies on manual inspection, which saves labor cost and improves efficiency. It does not depend on any other sensor equipment: a single camera on site covers large or small scenes, near or far distances and various angles, saving equipment cost.

(3) The method uses deep learning and detects accurately. The algorithm combines state-of-the-art deep learning technology: built on the Darknet-53 deep neural network, it adopts a YOLOV3-based deep learning object detection algorithm with a high detection rate, fast detection speed, good real-time performance and stable performance, meeting the needs of practical applications.
Description of the Drawings

The accompanying drawings, which form a part of the present invention, are provided to give a further understanding of the invention; the exemplary embodiments of the invention and their description are used to explain the invention and do not unduly limit it. In the drawings:

FIG. 1 is a flowchart of the automatic detection and early warning method according to an embodiment of the present invention.
Detailed Description of the Embodiments

It should be noted that, provided there is no conflict, the embodiments of the present invention and the features in the embodiments may be combined with one another.

In the description of the present invention, it should be understood that terms indicating an orientation or positional relationship, such as "center", "longitudinal", "transverse", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer", are based on the orientations or positional relationships shown in the drawings and are used only for convenience and simplicity of description; they do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore they cannot be construed as limiting the invention. In addition, the terms "first", "second" and the like are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features concerned; a feature defined as "first" or "second" may thus explicitly or implicitly include one or more of that feature. In the description of the present invention, unless otherwise specified, "plurality" means two or more.

In the description of the present invention, it should also be noted that, unless otherwise expressly specified and limited, the terms "installed", "connected" and "coupled" are to be understood broadly: a connection may be fixed, detachable or integral; it may be mechanical or electrical; it may be direct, indirect through an intermediate medium, or an internal communication between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.

The present invention will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
The algorithm of the invention is suitable for automatic detection and early warning of smoke and flame in various scenarios. The aim is to work around the clock unattended, automatically analyzing possible abnormal smoke and open flames in the monitored area, while also allowing the real-time scene to be viewed remotely, so that the situation on site can be judged from an intuitive picture and rescue can be commanded and dispatched.

The specific implementation is as follows:

1. Train the detection model. The collected raw material is processed to obtain sample images for training, and a deep learning neural network is trained with the Darknet framework to obtain a model for detecting smoke and flame. This consists of three steps.
The first step is to construct the training sample set. Samples can be collected by downloading videos and pictures from the Internet with crawler tools or by downloading public smoke and fire data sets. The sample data is annotated manually: the circumscribed rectangular boxes of smoke and flame (the detection targets) are marked in the training images and assigned their categories (Smoke and Fire), the images and labels are then augmented with several image data augmentation techniques, and finally the sample images and the generated annotation files are saved as the sample set. The boxes annotated in all training samples are clustered with a clustering algorithm to obtain 9 anchors; each anchor is in effect a cluster of box widths and heights and is used to regress the target size during training, as in the sketch below.
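A minimal sketch of this anchor step, assuming the annotated box widths and heights have already been collected into a NumPy array (the function and variable names are illustrative, not taken from the patent):

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=100, seed=0):
    """Cluster annotated (width, height) pairs into k anchors,
    using IoU of corner-aligned boxes as the similarity measure."""
    rng = np.random.default_rng(seed)
    anchors = wh[rng.choice(len(wh), k, replace=False)].astype(float)
    for _ in range(iters):
        # IoU between every annotated box and every anchor
        inter = np.minimum(wh[:, None, 0], anchors[None, :, 0]) * \
                np.minimum(wh[:, None, 1], anchors[None, :, 1])
        union = wh[:, None, 0] * wh[:, None, 1] \
              + anchors[None, :, 0] * anchors[None, :, 1] - inter
        assign = np.argmax(inter / union, axis=1)        # closest anchor per box
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = wh[assign == j].mean(axis=0)
    return anchors[np.argsort(anchors.prod(axis=1))]     # sorted by area

# wh: an (N, 2) array of annotated box widths and heights in pixels
# anchors = kmeans_anchors(wh)
```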
Next, the YOLOV3-based deep learning object detection network architecture is built. A pruned Darknet-53 convolutional neural network is used as the backbone to extract features from the input image, and the high-dimensional feature maps produced by the network are used as the input of the yolo layers of YOLOV3, which make the predictions and compute the loss value. Pruning the Darknet-53 network means halving the number of output channels of every layer, which reduces the amount of computation and speeds up processing; the sketch below illustrates the idea.
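To illustrate the channel-halving idea only, here is a hypothetical PyTorch-style fragment (the patent itself uses the Darknet framework, and the exact layer list is not reproduced here):

```python
import torch.nn as nn

def conv_block(c_in, c_out, k, s=1):
    """Darknet-style convolution -> batch norm -> LeakyReLU block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, s, padding=k // 2, bias=False),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.1, inplace=True),
    )

# Standard Darknet-53 starts with output widths 32, 64, 128, ...;
# a width multiplier of 0.5 halves every output channel count, which
# roughly quarters the multiply-accumulate cost of each convolution.
width = 0.5
stem = nn.Sequential(
    conv_block(3, int(32 * width), 3),
    conv_block(int(32 * width), int(64 * width), 3, s=2),
)
```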
Finally, the training parameters are configured and the detection model is trained. The base learning rate of the network is set to 0.001, the samples are fed to the network in batches of 64, and the total number of training epochs is set to 140; every fixed number of epochs (for example 40) the learning rate is multiplied by 0.1, three times in total, and lowering the learning rate during training lets the model converge quickly and stably. Training follows the BP (back-propagation) principle: the loss computed from the network's predictions and the actual annotations is back-propagated, and the network weights are updated with the SGD (stochastic gradient descent) algorithm, so that through continuous iterative optimization the loss value of the network gradually decreases and stabilizes. After training is complete, the model for detecting smoke and flame is obtained; a sketch of the schedule follows.
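A minimal sketch of the step schedule described above (values taken from the text; the surrounding training loop is only indicated, since the actual Darknet configuration is not reproduced in the patent):

```python
def learning_rate(epoch, base_lr=0.001, step=40, drops=3, factor=0.1):
    """Multiply the base learning rate by 0.1 every `step` epochs, at most `drops` times."""
    return base_lr * factor ** min(epoch // step, drops)

hyperparams = {
    "base_lr": 0.001,    # initial learning rate
    "batch_size": 64,    # samples per training batch
    "epochs": 140,       # total passes over the training set
    "optimizer": "SGD",  # weights updated by stochastic gradient descent (BP)
}

for epoch in range(hyperparams["epochs"]):
    lr = learning_rate(epoch)
    # ... forward pass, loss computation, back-propagation, SGD weight update ...
```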
2. Detect images and output the detection results. The detection model produced in the previous step is used to detect and analyze video frames of the actual monitored scene and to obtain the confidence and position information of the detection targets in the picture. This consists of three steps.

The first step is equipment selection and installation. The algorithm of the invention is suitable for various scenarios; the camera can be installed at a high point of the scene to be monitored, looking obliquely down on the monitored area. Installed in a high, unobstructed position, it can monitor a larger area. If multi-angle, long-distance monitoring is required, a zoom-capable rotating dome camera can be used to provide 360° rotation and high-magnification zoom; once preset positions are configured in advance, it can automatically cruise and monitor the whole coverable area.

The next step is to set the detection region. A detection region is defined in the video picture by drawing a polygonal area in clockwise or counterclockwise order as the designated region; only targets detected inside the designated region count as valid targets.
Finally, the model detects the picture information in real time and outputs the results. The acquired video frame is simply preprocessed and then detected and analyzed with the detection model to obtain the detection result for the current frame. If there is a valid target in the picture, the model outputs the coordinates of the target's circumscribed rectangular box, the category of the target and the corresponding confidence; a sketch of this inference step follows.
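One way to run a Darknet-trained YOLOV3 model on camera frames is OpenCV's DNN module; the patent does not specify the runtime, so the file names, stream URL and input size below are assumptions:

```python
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3-firesmoke.cfg", "yolov3-firesmoke.weights")
out_names = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture("rtsp://camera/stream")   # live feed from the monitoring camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess: scale pixels to [0, 1], resize to the network input, BGR -> RGB
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(out_names)             # raw yolo-layer predictions
    # Each output row holds (x, y, w, h, objectness, class scores ...) and is
    # decoded into boxes, categories and confidences for the post-processing step.
```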
3. Process and analyze the detection results. The detection results output by the model are simply all of the target information detected in the picture and cannot all be output as valid targets, so reasonable judgments must be made from the confidence, state, position and other information of each target, together with the preset confidence threshold and detection region, before the final result is output. This consists of two steps.
First, the detection results are post-processed. A confidence threshold is set in advance, and the confidence of each target detected in the previous step is used to judge whether it is a valid target. The confidence expresses how likely the detection is to be a valid target: the higher the confidence, the higher the probability that it is a valid target, up to a maximum of 1; conversely, a low confidence suggests a false detection, which requires no further processing. The coordinates of each target are used to judge whether the target lies inside the designated region, mainly by checking its center point: if the center point is inside the designated region the target is judged to be in the region, otherwise it is excluded. For overlapping detection boxes in the results, an IOU (intersection over union) parameter is set to measure the overlap; if several boxes overlap heavily, the lower-confidence ones are removed and only the single highest one is kept, because heavily overlapping boxes are likely to be detections of the same target and only one valid result with the highest confidence needs to be output. The coordinates of the detected box are used to judge the size the target occupies in the picture, removing results of abnormal size that do not match the realistic range of the scene. Finally, the valid targets satisfying all conditions are output; a sketch of this filtering appears below.
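A minimal sketch of this filtering chain (confidence threshold, center-in-region test, IOU suppression, size check); the helper names, thresholds and the use of OpenCV's polygon test are illustrative assumptions rather than values given in the patent:

```python
import cv2

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def postprocess(dets, region, conf_thr=0.5, iou_thr=0.45, min_area=100, max_area=500_000):
    """dets: list of (box, cls, conf); region: polygon vertices as an int32 array."""
    kept = []
    for box, cls, conf in dets:
        cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        area = (box[2] - box[0]) * (box[3] - box[1])
        if conf >= conf_thr \
           and cv2.pointPolygonTest(region, (cx, cy), False) >= 0 \
           and min_area <= area <= max_area:
            kept.append((box, cls, conf))
    # Keep only the highest-confidence box among heavily overlapping ones
    kept.sort(key=lambda d: d[2], reverse=True)
    final = []
    for d in kept:
        if all(iou(d[0], f[0]) < iou_thr for f in final):
            final.append(d)
    return final
```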
Then the consecutive video frames are analyzed together to determine the final result. The detection results of multiple consecutive frames are analyzed to judge whether the target actually exists: by comparing target positions, if a target is detected at the same position in several consecutive frames and the target region shows a tendency to grow, the result is confirmed as valid, an alarm signal is sent to the security personnel in time, and the relevant information is recorded. A sketch of this confirmation step follows.
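A simple sketch of such multi-frame confirmation, reusing the iou() helper from the previous sketch; the window length, hit count and growth test are illustrative choices, not values stated in the patent:

```python
from collections import deque

class MultiFrameConfirmer:
    """Confirm a target only if it appears at roughly the same position
    over several consecutive frames and its area does not shrink."""

    def __init__(self, window=5, min_hits=4, iou_thr=0.3):
        self.window, self.min_hits, self.iou_thr = window, min_hits, iou_thr
        self.history = deque(maxlen=window)   # per-frame lists of (box, cls, conf)

    def update(self, detections):
        self.history.append(detections)
        if len(self.history) < self.window:
            return []
        alarms = []
        for box, cls, conf in self.history[-1]:
            hits, areas = 0, []
            for frame_dets in self.history:
                matches = [b for b, c, _ in frame_dets
                           if c == cls and iou(box, b) >= self.iou_thr]
                if matches:
                    hits += 1
                    b = matches[0]
                    areas.append((b[2] - b[0]) * (b[3] - b[1]))
            # Same place in most frames, and the detected region is not shrinking
            if hits >= self.min_hits and len(areas) >= 2 and areas[-1] >= areas[0]:
                alarms.append((box, cls, conf))
        return alarms   # non-empty result -> raise the alarm and log the event
```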
The loss value is calculated as shown in the following formulas. The loss of the training network is divided into the confidence loss L_conf(O, C), the class loss L_cla(o, c) and the localization loss L_loc(l, g); the total loss L(O, o, C, c, l, g) is the weighted sum of the three. The loss is computed from the predicted box information b(x, y, w, h, C, c_1, c_2) output by the network and the ground truth g(x, y, w, h), where (x, y, w, h) are the horizontal and vertical coordinates of the center of the target's bounding rectangle and the width and height of the rectangle, C is the probability that the position of the predicted box contains a target, and c_1, c_2 are the probabilities that the target belongs to each class.

Total loss:

L(O, o, C, c, l, g) = λ_1·L_conf(O, C) + λ_2·L_cla(o, c) + λ_3·L_loc(l, g)

where λ_1, λ_2 and λ_3 are the weights of the confidence loss, class loss and localization loss, set to 0.3, 0.2 and 0.5 respectively.
Confidence loss calculation:
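The formula itself did not survive extraction; a reconstruction consistent with the definitions below (the standard YOLOV3 binary cross-entropy objectness loss) is:

$$L_{conf}(O, C) = -\sum_{i} \Big( O_i \ln \hat{C}_i + (1 - O_i) \ln (1 - \hat{C}_i) \Big), \qquad \hat{C}_i = \mathrm{Sigmoid}(C_i)$$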
where O_i indicates whether there is a target at the current position (the ground-truth value, 1 if there is a target and 0 otherwise), and C_i is the model's predicted probability that the current position contains a target;
Class loss calculation:
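Likewise reconstructed from the definitions rather than recovered verbatim, the per-class binary cross-entropy over positive boxes used by YOLOV3 is:

$$L_{cla}(o, c) = -\sum_{i \in pos} \sum_{j \in cla} \Big( o_{ij} \ln \hat{c}_{ij} + (1 - o_{ij}) \ln (1 - \hat{c}_{ij}) \Big)$$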
with ĉ_ij = Sigmoid(c_ij),

where o_ij indicates whether the j-th class is present at the position of the i-th predicted box (the ground-truth value, 1 if present and 0 otherwise), and c_ij is the predicted probability that the j-th class is present at the position of the i-th predicted box;
Localization loss calculation:
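Reconstructed in the same way, under the assumption that the patent follows the usual YOLOV3 formulation of a squared error on the box offsets relative to the anchors:

$$L_{loc}(l, g) = \sum_{i \in pos} \sum_{m \in \{x, y, w, h\}} \big( \hat{l}_i^{\,m} - \hat{g}_i^{\,m} \big)^2$$

$$\hat{g}_i^{\,x} = g_i^{\,x} - c_i^{\,x}, \quad \hat{g}_i^{\,y} = g_i^{\,y} - c_i^{\,y}, \quad \hat{g}_i^{\,w} = \ln \frac{g_i^{\,w}}{p_i^{\,w}}, \quad \hat{g}_i^{\,h} = \ln \frac{g_i^{\,h}}{p_i^{\,h}}$$

with the predicted offsets defined analogously from (b_x, b_y, b_w, b_h).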
where (g_x, g_y, g_w, g_h) is the manually annotated ground-truth box information g(x, y, w, h), the subscript i denotes the i-th box (a ground-truth value), (b_x, b_y, b_w, b_h) are the coordinates of the detection box predicted by the network, corresponding to (g_x, g_y, g_w, g_h), and (c_x, c_y, p_w, p_h) describe the preset anchors, where (c_x, c_y) is the position of the anchor center on the feature map and (p_w, p_h) are the width and height of the preset anchor, corresponding to the annotation and prediction information respectively. The anchors are the result of statistically clustering the widths and heights of the boxes annotated in the training data during YOLOV3 training.

The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.
Claims (5)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011054961.2A | 2020-09-28 | 2020-09-28 | Smoke and fire automatic detection and early warning method based on YOLOV3 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN112132090A (en) | 2020-12-25 |

Family ID: 73843210
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105590401A * | 2015-12-15 | 2016-05-18 | 天维尔信息科技股份有限公司 | Early-warning linkage method and system based on video images |
| CN110473375A * | 2019-08-14 | 2019-11-19 | 成都睿云物联科技有限公司 | Monitoring method, device, equipment and system of forest fire |
| CN110969205A * | 2019-11-29 | 2020-04-07 | 南京恩博科技有限公司 | Forest smoke and fire detection method based on target detection, storage medium and equipment |
| CN111091072A * | 2019-11-29 | 2020-05-01 | 河海大学 | YOLOv3-based flame and dense smoke detection method |
| CN111709310A * | 2020-05-26 | 2020-09-25 | 重庆大学 | A deep-learning-based gesture tracking and recognition method |
| CN111680632A * | 2020-06-10 | 2020-09-18 | 深延科技(北京)有限公司 | Smoke and fire detection method and system based on deep learning convolutional neural network |
Non-Patent Citations (1)
| Title |
|---|
| 罗小权 等 (Luo Xiaoquan et al.), "改进YOLOV3的火灾检测方法" (Improved fire detection method based on YOLOV3), 《计算机工程与应用》 (Computer Engineering and Applications), vol. 56, no. 17, pp. 187-196 * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20201225 |