CN113657250A - Flame detection method and system based on monitoring video - Google Patents


Publication number: CN113657250A
Application number: CN202110934946.5A
Authority: CN (China)
Prior art keywords: flame, candidate, frame, frames, detection
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 朱浩, 曹颂, 钟星
Current Assignee: Nanjing Tuling Video Technology Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Nanjing Tuling Video Technology Co ltd
Application filed by: Nanjing Tuling Video Technology Co ltd
Priority application: CN202110934946.5A
Publication: CN113657250A


Classifications

    • G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06N3/02: Computing arrangements based on biological models; neural networks
    • G06T7/11: Image analysis; region-based segmentation
    • G06T7/136: Image analysis; segmentation and edge detection involving thresholding
    • G06T7/194: Image analysis; segmentation and edge detection involving foreground-background segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a flame detection method and system based on surveillance video. The method comprises the following steps: a moving object detection step, in which a sequence of continuous frame images is input, each frame is checked for moving objects, and the candidate-box results are output to a flame detector; a flame detector step, in which the candidate-box results output by the moving object detection step are received and, if a moving object exists, the candidate regions are identified with a deep learning network; a detection matching step, in which the regions detected by the flame detector step are matched against the regions detected by the moving object detection step, and only regions meeting a certain intersection-over-union ratio are retained; and a motion feature filtering step, in which the regions retained by the detection matching step are further screened using inter-frame motion features, the finally retained regions being the flame regions. By integrating motion detection with motion feature extraction and discrimination, the invention significantly improves flame detection accuracy.

Description

Flame detection method and system based on monitoring video
Technical Field
The invention relates to a flame detection method and system based on a monitoring video, and belongs to the technical field of video monitoring security.
Background
In recent years, losses caused by fire have grown, and fire detection has attracted increasing attention. Traditional fire detection methods fall into two categories. The first is sensor-based: sensors are usually deployed indoors, are not very sensitive, and can detect only large fires or flames very close to the sensor. The second is based on image processing, which identifies flames by their color information; its accuracy is not ideal, and yellow objects are easily misidentified as flames, causing false detections.
Sensors cannot be deployed everywhere, whereas surveillance video has far fewer deployment constraints, and many existing surveillance cameras can be used directly to collect data. This patent therefore focuses on flame detection based on surveillance video information.
With the rapid development of neural networks for image recognition, accurately recognizing a target object in an image has become feasible. Flame, however, has special properties: its color information is distinctive, but it has no fixed form, i.e. it lacks effective shape information. A neural network alone therefore cannot identify flames accurately, and objects whose color is close to that of flame are falsely detected.
Patent 1: a flame recognition algorithm based on image processing technology, CN 104504382B. This patent proposes a traditional image processing method: the highest point and the center of gravity of the flame are located by an inner/outer flame extraction algorithm and their coordinates recorded; a line is drawn between the two points and the RGB values along it are extracted; these RGB values are then compared against a standard flame RGB feature library to obtain a matching value, and whether the image contains flame is judged from the size of that value. Patent 2: a flame target detection method based on digital images and convolution features, CN 110751089A. This patent is mainly an adaptation of the Faster RCNN VGG16 model and detects flame with a pure deep learning method.
Disclosure of Invention
The prior art has the following disadvantages. Patent 1 is a flame recognition algorithm based on image processing technology. Its main drawback is that, using only color features for matching, it cannot filter out common yellow objects such as a pedestrian's coat or a delivery rider's helmet, so its false detection rate cannot be guaranteed. Patent 2 is a flame target detection method based on digital images and convolution features. Its main drawback is the use of a pure deep learning network: because flame has no fixed shape characteristics, the network can learn only the color characteristics of flame, and the false detection rate is high.
The invention aims to overcome these technical defects and provides a flame detection method and system based on surveillance video. The method operates on surveillance video; deployment requires only a server that runs the method and has access to the video stream. On top of the recent YoloV4 deep learning model, the method integrates motion detection with motion feature extraction and discrimination, jointly considering flame color, shape and motion information, and thereby significantly improves flame detection accuracy.
The invention specifically adopts the following technical scheme: a flame detection method based on a surveillance video comprises the following steps:
the moving object detection step specifically comprises the following steps: inputting a group of continuous frame image sequences, detecting whether a moving object exists in each frame, and outputting a candidate frame result to a flame detector;
a flame detector step, specifically comprising: receiving the candidate frame result output by the moving object detection step, and identifying candidate regions with a deep learning network if a moving object exists;
the detection matching step specifically comprises the following steps: matching the regions detected in the flame detector step with the regions detected in the moving object detection step, and retaining only regions meeting a certain intersection-over-union ratio;
the motion characteristic filtering step specifically comprises the following steps: and (4) further screening the regions reserved by the detection matching step by utilizing the motion characteristics between frames, wherein the finally determined regions are flame regions.
As a preferred embodiment, the moving object detecting step specifically includes:
step SS 11: continuous frames captured under a static camera are used for obtaining foreground images through a self-adaptive Gaussian mixture Model (MOG);
step SS 12: performing morphological operations on the obtained foreground mask to reduce noise and the number of fragmented foreground regions;
step SS 13: obtaining the minimum bounding rectangle of each foreground region via topological structural analysis of the binary image based on border following;
step SS 14: and filtering the candidate minimum circumscribed rectangle according to a non-maximum suppression algorithm to obtain a candidate frame obtained based on a background modeling method.
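As an illustrative sketch (not the patent's own code), the non-maximum suppression of step SS14 can be implemented over (x, y, w, h) rectangles such as those produced by a bounding-rectangle step; the IoU threshold of 0.3 is an assumed value:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, iou_thresh=0.3):
    """Greedy NMS: keep the largest rectangles, drop overlapping duplicates."""
    boxes = sorted(boxes, key=lambda b: b[2] * b[3], reverse=True)  # by area
    kept = []
    for b in boxes:
        if all(iou(b, k) < iou_thresh for k in kept):
            kept.append(b)
    return kept
```

Greedy suppression by area is one common choice here; suppression by detector score is equally valid when scores are available.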
As a preferred embodiment, the flame detector step specifically includes:
step SS 21: scaling an input single frame image to a fixed size as input data of a detector;
step SS 22: operating a flame detection model to process input data and acquiring a candidate frame with a score exceeding a set threshold;
step SS 23: and performing candidate region filtering according to the set candidate region area ratio condition to obtain a candidate region result of the flame detector.
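Steps SS21 to SS23 amount to a score-and-area filter over detector outputs. The sketch below is hypothetical: the patent does not give numeric values for the score threshold or the area-ratio condition, so the defaults here are placeholders:

```python
def filter_detections(detections, frame_w, frame_h, score_thresh=0.25,
                      min_area_ratio=1e-4, max_area_ratio=0.5):
    """Keep detector outputs ((x, y, w, h), score) whose confidence and
    area-to-frame ratio fall in the configured ranges (values illustrative)."""
    frame_area = frame_w * frame_h
    kept = []
    for (x, y, w, h), score in detections:
        ratio = (w * h) / frame_area
        if score >= score_thresh and min_area_ratio <= ratio <= max_area_ratio:
            kept.append(((x, y, w, h), score))
    return kept
```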
As a preferred embodiment, the step of detecting matching specifically includes:
step SS 31: traversing the candidate boxes obtained in the flame detector step; any box whose score exceeds a threshold is directly determined to be a flame candidate box and needs no subsequent filtering;
step SS 32: for candidate boxes from the flame detector step whose score does not exceed the threshold, calculating the intersection-over-union (IOU) of each candidate box C with the candidate boxes G obtained by background modeling, using the following formula:
IOU(C, G) = Area(C ∩ G) / Area(C ∪ G)
If the IOU exceeds the set threshold, the candidate box is considered legal and is retained; otherwise it is discarded.
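The IoU matching of step SS32 can be sketched in a few lines. Boxes here are (x1, y1, x2, y2) corners; the default threshold of 0.1 follows the embodiment described later, but the function names and structure are illustrative, not the patent's code:

```python
def iou(c, g):
    """IoU of two boxes (x1, y1, x2, y2): area(C ∩ G) / area(C ∪ G)."""
    ix1, iy1 = max(c[0], g[0]), max(c[1], g[1])
    ix2, iy2 = min(c[2], g[2]), min(c[3], g[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_c = (c[2] - c[0]) * (c[3] - c[1])
    area_g = (g[2] - g[0]) * (g[3] - g[1])
    union = area_c + area_g - inter
    return inter / union if union else 0.0

def match_detections(detector_boxes, background_boxes, iou_thresh=0.1):
    """Retain detector boxes overlapping at least one background-modeling box."""
    return [c for c in detector_boxes
            if any(iou(c, g) >= iou_thresh for g in background_boxes)]
```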
As a preferred embodiment, the motion characteristic filtering step specifically includes:
step SS 41: acquiring flame areas determined by the previous n frames, and if the current frame sequence is less than n, taking the flame areas determined by the previous 3 frames as final flame areas;
step SS 42: matching the candidate boxes of the current frame against the flame regions of the previous n frames; if a matching box with intersection-over-union exceeding the threshold exists, the detection count num of the current frame's candidate box is incremented by 1, and the score of each candidate box is calculated as:
score = num / n
reserving a candidate frame with the score exceeding a set threshold value as a final flame area;
step SS 43: and reserving all the determined flame frames of the current frame to a flame frame queue for comparison with the following frame.
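Steps SS41 to SS43 amount to a temporal-consistency filter: a candidate is kept only if matching flame boxes appeared in enough of the previous n frames (score = num / n). The sketch below assumes corner-format boxes and the embodiment's thresholds (n = 20, IoU 0.7, score 0.5); it is an illustration, not the patent's code:

```python
from collections import deque

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2) corners."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def filter_by_motion(candidates, history, n=20, iou_thresh=0.7, score_thresh=0.5):
    """Keep candidates matched (IoU >= iou_thresh) in at least
    score_thresh * n of the previous frames; update the rolling history."""
    confirmed = []
    for c in candidates:
        num = sum(1 for frame_boxes in history
                  if any(iou(c, g) >= iou_thresh for g in frame_boxes))
        if num / n >= score_thresh:
            confirmed.append(c)
    history.append(confirmed)  # step SS43: record boxes for later frames
    return confirmed
```

A `deque(maxlen=n)` keeps the history bounded to the last n frames automatically.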
The invention also provides a flame detection system based on the monitoring video, which comprises:
a moving object detection module to perform: inputting a group of continuous frame image sequences, detecting whether a moving object exists in each frame, and outputting a candidate frame result to a flame detector module;
a flame detector module to perform: receiving the candidate frame result output by the moving object detection module, and identifying candidate regions with a deep learning network if a moving object exists;
a detection matching module to perform: matching the regions detected by the flame detector module against the regions detected by the moving object detection module, and retaining only regions meeting a certain intersection-over-union ratio;
a motion feature filtering module to perform: and further screening the regions reserved by the detection matching module by utilizing the motion characteristics between frames, wherein the finally determined regions are flame regions.
As a preferred embodiment, the moving object detection module specifically performs: continuous frames captured under a static camera are used for obtaining foreground images through a self-adaptive Gaussian mixture Model (MOG); performing morphological operation on the obtained foreground images to reduce the number of the foreground images; using a digital binary image topological structure analysis based on boundary tracking to obtain a minimum circumscribed rectangle of each foreground image; and filtering the candidate minimum circumscribed rectangle according to a non-maximum suppression algorithm to obtain a candidate frame obtained based on a background modeling method.
As a preferred embodiment, the flame detector module specifically performs: scaling an input single frame image to a fixed size as input data of a detector; operating a flame detection model to process input data and acquiring a candidate frame with a score exceeding a set threshold; and performing candidate region filtering according to the set candidate region area ratio condition to obtain a candidate region result of the flame detector.
As a preferred embodiment, the detection matching module specifically executes: traversing the candidate boxes obtained by the flame detector module; any box whose score exceeds a threshold is directly determined to be a flame candidate box and needs no subsequent filtering; for candidate boxes from the flame detector module whose score does not exceed the threshold, calculating the intersection-over-union (IOU) of each candidate box C with the candidate boxes G obtained by background modeling:
IOU(C, G) = Area(C ∩ G) / Area(C ∪ G)
If the IOU exceeds the set threshold, the candidate box is considered legal and is retained; otherwise it is discarded.
As a preferred embodiment, the motion characteristic filtering module specifically executes: acquiring the flame regions determined in the previous n frames, and, if the current frame index is less than n, taking the flame regions determined in the previous 3 frames as the final flame regions;
matching the candidate boxes of the current frame against the flame regions of the previous n frames; if a matching box with intersection-over-union exceeding the threshold exists, the detection count num of the current frame's candidate box is incremented by 1, and the score of each candidate box is calculated as:
score = num / n
candidate boxes whose score exceeds the set threshold are retained as the final flame regions;
and all determined flame boxes of the current frame are retained in a flame box queue for comparison with subsequent frames.
The invention achieves the following beneficial effects. First, by analyzing surveillance video data on the basis of the slow-motion characteristics of flame, and combining traditional moving object detection with deep learning, the proposed method and system are simple to deploy and reliable in accuracy. Second, the method determines candidate boxes through background modeling and deep learning, and screens them with inter-frame motion position information to finally determine the flame region. Third, traditional flame detection and pure deep learning algorithms rely mainly on flame color information, which is easily confused by objects of similar color; on top of deep learning, the method retains color and shape features while using inter-frame video information to filter out the false detections of existing methods. Fourth, the independently trained YoloV4-based flame detector reaches the current state of the art on single-frame recognition, minimizing missed candidate boxes. Fifth, the use of inter-frame information makes detection more robust, so a false or missed detection in a single frame has no direct influence on the result.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of a surveillance video based flame detection method of the present invention.
FIG. 2 is a block diagram of a neural network used in the flame detector module of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Example 1: the invention relates to a flame detection method based on surveillance video; FIG. 1 is an overall framework diagram of the method. Specifically, under a static camera, a sequence of N images is input. Step 1: foreground images are obtained from the continuous frames captured by the static camera through an adaptive Gaussian mixture model (MOG). Step 2: the foreground is consolidated by a morphological closing operation on the binary image produced by MOG, enlarging and clustering the foreground targets. Step 3: all connected regions in the binary image are found and fitted with bounding rectangles to obtain detection boxes; only rectangles meeting a certain area-ratio condition are kept as background-modeling candidate boxes for flame detection. Step 4: the current frame is processed by a YoloV4-based flame detector fine-tuned on flame data, yielding the detector's candidate boxes and their scores. Step 5: the detector's candidate boxes are classified; boxes with a score above 0.8 are directly judged to be flame, recorded in the per-frame flame box queue, and compared with the boxes to be identified in subsequent frames, while boxes with a score not exceeding 0.8 require further judgment. Step 6: each candidate box C from the detector that requires further judgment is compared with the candidate boxes G obtained by background modeling by computing their intersection-over-union:
IOU(C, G) = Area(C ∩ G) / Area(C ∪ G)
If the intersection-over-union exceeds 0.1, the detector candidate box is considered legal; otherwise it is discarded. A low threshold of 0.1 is used because the moving portion of the flame region can be very small relative to the detected box.
Step 7: for each candidate box C obtained in step 6, motion consistency is checked against preceding frames: the flame boxes G recorded over the previous 20 frames of the current frame are retrieved, the intersection-over-union of C with the boxes of each frame is computed as above, and whenever it exceeds 0.7 the corresponding count num of the box is incremented by 1. The score of each box is then calculated as:
score = num / n (with n = 20 previous frames)
if the score exceeds the set threshold of 0.5, the flame frames are finally determined and added into the flame frame queue for comparison of the following frames.
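The decision flow of steps 5 to 7 can be condensed into a single per-box classification function. This is a sketch under the embodiment's stated thresholds (0.8 direct acceptance, 0.1 background IoU, 0.7 history IoU, n = 20, final score 0.5); names and structure are illustrative, not the patent's code:

```python
def iou(a, b):
    """IoU of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def classify_candidate(box, score, background_boxes, flame_history,
                       high_conf=0.8, bg_iou=0.1, hist_iou=0.7,
                       n=20, final_thresh=0.5):
    """Return True if a detector box is accepted as flame."""
    if score > high_conf:                 # step 5: confident detection passes
        return True
    if not any(iou(box, g) >= bg_iou for g in background_boxes):
        return False                      # step 6: no moving-region support
    num = sum(1 for boxes in flame_history
              if any(iou(box, g) >= hist_iou for g in boxes))
    return num / n > final_thresh         # step 7: temporal consistency
```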
Example 2: the invention also provides a flame detection system based on the monitoring video, which comprises:
a moving object detection module to perform: inputting a group of continuous frame image sequences, detecting whether a moving object exists in each frame, and outputting a candidate frame result to a flame detector module;
a flame detector module to perform: receiving the candidate frame result output by the moving object detection module and, if a moving object exists, identifying candidate regions with a deep learning network, as shown in FIG. 2;
a detection matching module to perform: matching the regions detected by the flame detector module against the regions detected by the moving object detection module, and retaining only regions meeting a certain intersection-over-union ratio;
a motion feature filtering module to perform: and further screening the regions reserved by the detection matching module by utilizing the motion characteristics between frames, wherein the finally determined regions are flame regions.
As a preferred embodiment, the moving object detection module specifically performs: continuous frames captured under a static camera are used for obtaining foreground images through a self-adaptive Gaussian mixture Model (MOG); performing morphological operation on the obtained foreground images to reduce the number of the foreground images; using a digital binary image topological structure analysis based on boundary tracking to obtain a minimum circumscribed rectangle of each foreground image; and filtering the candidate minimum circumscribed rectangle according to a non-maximum suppression algorithm to obtain a candidate frame obtained based on a background modeling method.
As a preferred embodiment, the flame detector module specifically performs: scaling an input single frame image to a fixed size as input data of a detector; operating a flame detection model to process input data and acquiring a candidate frame with a score exceeding a set threshold; and performing candidate region filtering according to the set candidate region area ratio condition to obtain a candidate region result of the flame detector.
As a preferred embodiment, the detection matching module specifically executes: traversing the candidate frame obtained by the flame detector module, and if the score exceeds a threshold value, determining the candidate frame as the flame candidate frame without subsequent filtering; for the candidate frames acquired by the flame detector module with the score not exceeding the threshold, calculating the intersection ratio of each candidate frame and the candidate frame acquired by background modeling, if the score exceeds the set threshold, considering that the candidate frames are legal, and keeping the candidate frames; otherwise, it is discarded.
As a preferred embodiment, the motion characteristic filtering module specifically includes: acquiring flame areas determined by the previous n frames, and if the current frame sequence is less than n, taking the flame areas determined by the previous 3 frames as final flame areas; matching the candidate frame of the current frame with the flame area of the previous n frames of data, if the candidate frame with the intersection ratio exceeding the threshold exists, adding 1 to the detected number num of the candidate frame of the current frame, and keeping the candidate frame with the detection ratio exceeding the set threshold as the final flame area; and reserving all the determined flame frames of the current frame to a flame frame queue for comparison with the following frame.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (10)

1. A flame detection method based on a surveillance video is characterized by comprising the following steps:
the moving object detection step specifically comprises the following steps: inputting a group of continuous frame image sequences, detecting whether a moving object exists in each frame, and outputting a candidate frame result to a flame detector;
a flame detector step, specifically comprising: receiving the candidate frame result output by the moving object detection step, and identifying a candidate region by using a deep learning network if a moving object exists;
the detection matching step specifically comprises the following steps: matching the region detected in the flame detector step with the region detected in the moving object detection step, and retaining only regions meeting a certain intersection-over-union ratio;
the motion characteristic filtering step specifically comprises the following steps: and (4) further screening the regions reserved by the detection matching step by utilizing the motion characteristics between frames, wherein the finally determined regions are flame regions.
2. The flame detection method based on the surveillance video as claimed in claim 1, wherein the moving object detection step specifically comprises:
step SS 11: continuous frames captured under a static camera are used for obtaining foreground images through a self-adaptive Gaussian mixture Model (MOG);
step SS 12: performing morphological operation on the obtained foreground images to reduce the number of the foreground images;
step SS 13: using a digital binary image topological structure analysis based on boundary tracking to obtain a minimum circumscribed rectangle of each foreground image;
step SS 14: and filtering the candidate minimum circumscribed rectangle according to a non-maximum suppression algorithm to obtain a candidate frame obtained based on a background modeling method.
3. The surveillance video-based flame detection method according to claim 1, wherein the flame detector step specifically comprises:
step SS 21: scaling an input single frame image to a fixed size as input data of a detector;
step SS 22: operating a flame detection model to process input data and acquiring a candidate frame with a score exceeding a set threshold;
step SS 23: and performing candidate region filtering according to the set candidate region area ratio condition to obtain a candidate region result of the flame detector.
4. The surveillance video-based flame detection method according to claim 1, wherein the detecting and matching step specifically comprises:
step SS 31: traversing the candidate frames obtained in the flame detector step, if the score exceeds a threshold value, determining the candidate frames as flame candidate frames without subsequent filtering;
step SS 32: for candidate boxes acquired by the flame detector step whose score does not exceed the threshold, calculating the intersection ratio IOU of each candidate box C with the candidate box G acquired by the background modeling, the calculation formula is as follows:
IOU(C, G) = Area(C ∩ G) / Area(C ∪ G)
if the candidate frame exceeds the set threshold, the candidate frame is considered to be legal, and the candidate frame is reserved; otherwise, it is discarded.
5. The surveillance video-based flame detection method according to claim 1, wherein the motion feature filtering step specifically comprises:
step SS 41: acquiring flame areas determined by the previous n frames, and if the current frame sequence is less than n, taking the flame areas determined by the previous 3 frames as final flame areas;
step SS 42: matching the candidate frame of the current frame with the flame area of the previous n frames, if the candidate frame with the intersection ratio exceeding the threshold exists, adding 1 to the detected number num of the candidate frame of the current frame, and calculating the score of each candidate frame in the following way:
score = num / n
reserving a candidate frame with the score exceeding a set threshold value as a final flame area;
step SS 43: and reserving all the determined flame frames of the current frame to a flame frame queue for comparison with the following frame.
6. A surveillance video based flame detection system, comprising:
a moving object detection module to perform: inputting a group of continuous frame image sequences, detecting whether a moving object exists in each frame, and outputting a candidate frame result to a flame detector module;
a flame detector module to perform: receiving the candidate frame result output by the moving object detection module, and identifying a candidate region by using a deep learning network if a moving object exists;
a detection matching module to perform: matching the region detected by the flame detector module with the region detected by the moving object detection module, and retaining only regions meeting a certain intersection-over-union ratio;
a motion feature filtering module to perform: and further screening the regions reserved by the detection matching module by utilizing the motion characteristics between frames, wherein the finally determined regions are flame regions.
7. The surveillance video-based flame detection system of claim 6, wherein the moving object detection module specifically performs: continuous frames captured under a static camera are used for obtaining foreground images through a self-adaptive Gaussian mixture Model (MOG); performing morphological operation on the obtained foreground images to reduce the number of the foreground images; using a digital binary image topological structure analysis based on boundary tracking to obtain a minimum circumscribed rectangle of each foreground image; and filtering the candidate minimum circumscribed rectangle according to a non-maximum suppression algorithm to obtain a candidate frame obtained based on a background modeling method.
8. The surveillance video-based flame detection system of claim 6, wherein the flame detector module specifically performs: scaling an input single frame image to a fixed size as input data of a detector; operating a flame detection model to process input data and acquiring a candidate frame with a score exceeding a set threshold; and performing candidate region filtering according to the set candidate region area ratio condition to obtain a candidate region result of the flame detector.
9. The surveillance video-based flame detection system of claim 6, wherein the detection matching module specifically performs: traversing the candidate frames obtained by the flame detector module; if a candidate frame's score exceeds a threshold, taking it as a flame candidate frame without further filtering; for candidate frames whose scores do not exceed the threshold, calculating the intersection-over-union IOU of each candidate frame C with each candidate frame G obtained by background modeling, according to the following formula:
IOU = area(C ∩ G) / area(C ∪ G)
if the IOU exceeds the set threshold, the candidate frame is considered valid and is retained; otherwise, it is discarded.
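A sketch of the matching rule in claim 9: high-score detector boxes pass directly, while the rest must overlap a background-modeling box by at least the IOU threshold. Boxes are (x, y, w, h) tuples; the threshold values are illustrative.

```python
def iou(c, g):
    """Intersection over union: area(C ∩ G) / area(C ∪ G)."""
    x1, y1 = max(c[0], g[0]), max(c[1], g[1])
    x2 = min(c[0] + c[2], g[0] + g[2])
    y2 = min(c[1] + c[3], g[1] + g[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = c[2] * c[3] + g[2] * g[3] - inter
    return inter / union if union else 0.0

def match_candidates(detector_boxes, motion_boxes, score_thr=0.8, iou_thr=0.3):
    """detector_boxes: (box, score) pairs from the flame detector;
    motion_boxes: candidate boxes from background modeling."""
    kept = []
    for box, score in detector_boxes:
        if score >= score_thr:
            kept.append(box)                  # trusted outright, no IOU check
        elif any(iou(box, g) >= iou_thr for g in motion_boxes):
            kept.append(box)                  # confirmed by a moving region
    return kept
```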
10. The surveillance video-based flame detection system of claim 6, wherein the motion feature filtering module specifically performs: acquiring the flame regions determined in the previous n frames; if the current frame index is less than n, taking the flame regions determined by the preceding three steps as the final flame regions;
matching the candidate frames of the current frame against the flame regions of the previous n frames; if a historical flame region whose intersection-over-union with a candidate frame exceeds the threshold exists, incrementing the detection count num of that candidate frame by 1; and computing the score of each candidate frame as follows:
score = num / n
retaining the candidate frames whose scores exceed a set threshold as the final flame regions;
and saving all the flame frames determined for the current frame to a flame frame queue for comparison with subsequent frames.
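A sketch of claim 10's temporal filter: each candidate is scored by how many of the previous n frames contain an overlapping flame box (score = num / n), and candidates are accepted as-is while fewer than n frames of history exist. The deque length, IOU threshold, and score threshold are illustrative choices.

```python
from collections import deque

def overlaps(a, b, thr=0.3):
    """True when the IOU of (x, y, w, h) boxes a and b reaches thr."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return union > 0 and inter / union >= thr

def temporal_filter(candidates, history, n=5, score_thr=0.5):
    """candidates: flame boxes of the current frame; history: deque of
    per-frame flame-box lists from preceding frames (maxlen == n)."""
    if len(history) < n:                  # too little history: accept as-is
        final = list(candidates)
    else:
        final = []
        for box in candidates:
            num = sum(any(overlaps(box, f) for f in frame) for frame in history)
            if num / n >= score_thr:      # score = num / n
                final.append(box)
    history.append(final)                 # queue for comparison with later frames
    return final
```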
CN202110934946.5A 2021-08-16 2021-08-16 Flame detection method and system based on monitoring video Pending CN113657250A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110934946.5A CN113657250A (en) 2021-08-16 2021-08-16 Flame detection method and system based on monitoring video


Publications (1)

Publication Number Publication Date
CN113657250A true CN113657250A (en) 2021-11-16

Family

ID=78480398


Country Status (1)

Country Link
CN (1) CN113657250A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463681A (en) * 2022-02-10 2022-05-10 天津大学 Fire detection method based on video monitoring platform
CN116152667A (en) * 2023-04-14 2023-05-23 英特灵达信息技术(深圳)有限公司 Fire detection method and device, electronic equipment and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11224389A * 1998-02-09 1999-08-17 Hitachi Ltd Flame detection method, and fire detection method and device
KR20090054522A * 2007-11-27 2009-06-01 계명대학교 산학협력단 Fire detection system and method based on visual data
CN108537215A * 2018-03-23 2018-09-14 清华大学 A flame detection method based on image object detection
CN109977945A * 2019-02-26 2019-07-05 博众精工科技股份有限公司 A localization method and system based on deep learning
CN110135269A * 2019-04-18 2019-08-16 杭州电子科技大学 A fire image detection method based on a hybrid color model and a neural network
CN110309765A * 2019-06-27 2019-10-08 浙江工业大学 An efficient video moving object detection method
CN110516609A * 2019-08-28 2019-11-29 南京邮电大学 A video fire detection and early warning method based on multi-feature image fusion
CN110751014A * 2019-08-29 2020-02-04 桂林电子科技大学 Flame detection system and method
CN110751089A * 2019-10-18 2020-02-04 南京林业大学 A flame target detection method based on digital images and convolutional features
CN111401311A * 2020-04-09 2020-07-10 苏州海赛人工智能有限公司 A high-altitude falling object recognition method based on image detection
CN112052797A * 2020-09-07 2020-12-08 合肥科大立安安全技术有限责任公司 A Mask R-CNN-based video fire identification method and system
CN112418102A * 2020-11-25 2021-02-26 北京市新技术应用研究所 Smoke and fire detection method and device, smoke and fire detection system and electronic equipment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wu Fan: "Research and Implementation of a Fire Detection Algorithm Based on Deep Learning", China Master's Theses Full-text Database, Engineering Science and Technology I, no. 02, pages 1 - 67 *
Zhao Feiyang et al.: "Flame Detection Based on Improved YOLOv3", China Sciencepaper, vol. 15, no. 07, pages 820 - 826 *


Similar Documents

Publication Publication Date Title
CN108062349B (en) Video monitoring method and system based on video structured data and deep learning
CN108053427B (en) Improved multi-target tracking method, system and device based on KCF and Kalman
CN108009473B (en) Video structuralization processing method, system and storage device based on target behavior attribute
CN113139521B (en) Pedestrian boundary crossing monitoring method for electric power monitoring
CN107622258B (en) Rapid pedestrian detection method combining static underlying characteristics and motion information
JP6549797B2 (en) Method and system for identifying head of passerby
CN108052859B (en) Abnormal behavior detection method, system and device based on clustering optical flow characteristics
CN104303193B (en) Target classification based on cluster
CN112052797A (en) MaskRCNN-based video fire identification method and system
CN109145708B (en) Pedestrian flow statistical method based on RGB and D information fusion
US8340420B2 (en) Method for recognizing objects in images
Habiboglu et al. Real-time wildfire detection using correlation descriptors
CN110298297B (en) Flame identification method and device
CN110378179B (en) Subway ticket evasion behavior detection method and system based on infrared thermal imaging
CN114842397B (en) Real-time old man falling detection method based on anomaly detection
CN101635835A (en) Intelligent video monitoring method and system thereof
CN111814635B (en) Deep learning-based firework recognition model establishment method and firework recognition method
CN114639075B (en) Method and system for identifying falling object of high altitude parabola and computer readable medium
CN108230607B (en) Image fire detection method based on regional characteristic analysis
CN113657250A (en) Flame detection method and system based on monitoring video
CN110991245A (en) Real-time smoke detection method based on deep learning and optical flow method
CN113920585A (en) Behavior recognition method and device, equipment and storage medium
KR20200060868A (en) multi-view monitoring system using object-oriented auto-tracking function
CN117475353A (en) Video-based abnormal smoke identification method and system
CN117557937A (en) Monitoring camera image anomaly detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination