CN116343110A - Flame detection method based on monitoring image - Google Patents

Flame detection method based on monitoring image

Info

Publication number
CN116343110A
CN116343110A
Authority
CN
China
Prior art keywords
flame
detection
preset
difference
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310125070.9A
Other languages
Chinese (zh)
Inventor
闫博通
琚午阳
张何伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ruixin High Throughput Technology Co ltd
Original Assignee
Beijing Ruixin High Throughput Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ruixin High Throughput Technology Co ltd filed Critical Beijing Ruixin High Throughput Technology Co ltd
Priority to CN202310125070.9A priority Critical patent/CN116343110A/en
Publication of CN116343110A publication Critical patent/CN116343110A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/44 Event detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Fire-Detection Mechanisms (AREA)

Abstract

The invention discloses a flame detection method based on monitoring images. The method first processes the image using pixel-point properties to preliminarily exclude non-flame scenes, reducing the number of times the flame detection model is run on scenes that contain no flame and thereby improving detection efficiency. Flame presence is then judged comprehensively by combining the detection result of the flame detection model with the inter-frame difference computed before detection, reducing false detections. Finally, the camera's infrared mode is used to further verify any preliminarily detected flame, improving the accuracy of flame detection and further reducing false detections.

Description

Flame detection method based on monitoring image
Technical Field
The invention relates to the technical field of image detection, in particular to a flame detection method based on a monitoring image.
Background
Flame detection is an effective means of detecting fires, avoiding property loss, and reducing the burden of manual monitoring. Current flame detection methods generally either use a temperature sensor or an infrared camera to locate a locally heated area in the scene, or feed images captured by a monitoring camera into a neural network trained on a data set to detect flames.
In one prior-art approach, a specific area is monitored by infrared imaging or a temperature sensor, and local temperature information is used to determine whether a flame is present. However, in scenes containing objects that heat up easily, such as exposed steel pipes, stainless-steel vacuum flasks, or black dust covers, this approach may raise false alarms.
In another prior-art approach, a trained neural network inspects captured images to decide whether a flame is present in the current scene. However, neural-network flame detectors are prone to false detections: they are often abnormally sensitive to red, and easily misclassify car lights and reflective areas in the image as flames.
Therefore, there is a need in the art for a flame detection method that has a low false detection rate and can be widely applied to monitoring systems.
Disclosure of Invention
In order to solve these problems, the invention provides a flame detection method based on monitoring images. Whether a flame exists is judged comprehensively by combining the detection result of a flame detection model with the inter-frame difference computed before model detection, reducing false detections. To address the false detections a neural network may produce, non-flame scenes are excluded in advance based on pixel properties, reducing the number of times the flame detection model is run on scenes without flames and improving detection efficiency. In addition, the camera's infrared mode performs a secondary screening of the preliminarily detected flame, further verifying the detection, reducing false detections, and improving accuracy.
In order to achieve the above object, the present invention provides a flame detection method based on a monitoring image, which is implemented by at least one camera, wherein the at least one camera is a camera with an infrared mode, and the flame detection method includes:
step S1: the at least one camera respectively acquires two frames of images in a detection scene as detection images, wherein a preset interval time is reserved between the two frames of images;
step S2: acquiring a plurality of loading difference areas between the two frames by calculating their inter-frame difference, and performing a corrosion (erosion) operation on all the loading difference areas to obtain a plurality of corroded difference areas, so as to remove and suppress tiny objects in the images;
step S3: comparing each corroded difference region against a threshold: if any corroded difference region is smaller than a preset detection region threshold, discarding the corresponding region; otherwise, reserving the corresponding region;
step S4: expanding all the reserved difference areas after corrosion to obtain expanded difference areas so as to restore each difference area to be close to the size of the corresponding loading difference area before corrosion;
step S5: extracting the RGB values and YCbCr values of the pixel points sampled at a preset vertical interval ratio within any expanded difference region, and forming a single rectangle from the corresponding pixel points;
step S6: extracting brightness information of the single rectangle, and calculating the proportion of red pixel points to yellow pixel points extracted from the single rectangle; when the brightness information reaches or exceeds a preset brightness threshold and the red-yellow proportion reaches or exceeds a preset red-yellow proportion threshold, constructing a circumscribed rectangle of the corresponding difference region and recording the coordinates of its upper-left and lower-right vertices; otherwise, discarding the corresponding difference region;
step S7: inputting the later of the two frames from step S1 into a trained flame detection model for target detection, with a target detection threshold preset; when a detection area reaches the preset target detection threshold, outputting the coordinates of the upper-left and lower-right vertices of the corresponding detection frame; otherwise, determining that the corresponding detection area contains no flame; wherein the flame detection model is a YOLOv5 network model;
step S8: acquiring an intersection of the detection frame output in the step S7 and the circumscribed rectangle of the corresponding difference region obtained in the step S6, and calculating the ratio of the intersection to the corresponding loading difference region in the step S2:
if the ratio is greater than or equal to the preset ratio, the existence of flame in the detection area is primarily judged, an intersection frame of the intersection is recorded, and then the flame verification process is continued through the infrared mode of the camera.
In an embodiment of the present invention, the preset interval time in step S1 is 2 seconds.
In an embodiment of the present invention, the preset detection area threshold in step S3 is a preset detection area size with a fixed width and a fixed height.
In an embodiment of the present invention, the convolution kernels used in the corrosion operation in step S2 and the expansion operation in step S4 are both 5×5.
In an embodiment of the present invention, the preset ratio of the longitudinal intervals in step S5 is 1:4.
In an embodiment of the present invention, the specific process of constructing the circumscribed rectangle corresponding to the difference region in step S6 includes:
step S601: traversing all pixel points in the corresponding difference region;
step S602: setting the width of the image as the x-axis direction and the height as the y-axis direction, and recording the minimum x, y coordinate values and the maximum x, y coordinate values in all pixel points;
step S603: the minimum x, y coordinate value is set as the upper left vertex coordinate of the circumscribed rectangle, and the maximum x, y coordinate value is set as the lower right vertex coordinate of the circumscribed rectangle.
In an embodiment of the present invention, the flame verification process further includes:
step S9: automatically setting the corresponding camera to be in an infrared mode through an interface provided by the corresponding camera;
step S10: acquiring image data in an infrared mode, counting pixel values of all pixel points in the intersection frame corresponding to the step S8 in the infrared image, and judging that flame exists if the total number of pixels with the pixel values larger than a preset pixel threshold value is larger than or equal to the preset total number;
step S11: finishing flame verification, and adjusting the camera mode back to the non-infrared mode through the interface provided by the corresponding camera.
Compared with the prior art, the flame detection method based on monitoring images provided by the invention preliminarily excludes non-flame scenes by processing the image using pixel-point properties, reducing the number of times the flame detection model is run on scenes that contain no flame and improving detection efficiency. Flame presence is judged comprehensively by combining the detection result of the flame detection model with the inter-frame difference computed before detection, reducing false detections. In addition, the camera's infrared mode further verifies the preliminarily detected flame, improving the accuracy of flame detection and further reducing false detections.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an embodiment of the present invention.
Description of the embodiments
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a schematic flow chart of an embodiment of the present invention, as shown in fig. 1, the embodiment provides a flame detection method based on a monitoring image, which is implemented by at least one camera, wherein the at least one camera is a camera with an infrared mode, and the flame detection method includes:
step S1: the at least one camera respectively acquires two frames of images in a detection scene as detection images, wherein a preset interval time is reserved between the two frames of images;
in this embodiment, the preset interval time in step S1 is 2 seconds. In other embodiments, the preset interval time may also be set to other values according to actual requirements, which is not limited by the present invention.
Step S2: Acquiring a plurality of loading difference areas between the two frames by calculating their inter-frame difference, and performing a corrosion (erosion) operation on all the loading difference areas to obtain a plurality of corroded difference areas, so as to remove and suppress tiny objects in the images. The calculation of the inter-frame difference and the image corrosion method are conventional techniques in the image processing field, so their detailed description is omitted here;
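As a minimal sketch (not the patent's actual implementation), the inter-frame difference of step S2 can be illustrated with NumPy; the per-pixel difference threshold of 25 is an assumed value, since the patent does not specify one:

```python
import numpy as np

def frame_difference_regions(frame_a, frame_b, diff_threshold=25):
    """Binary mask of pixels that changed between two grayscale frames.

    Illustrative sketch of the inter-frame difference in step S2;
    the threshold value is an assumption, not taken from the patent.
    """
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return (diff > diff_threshold).astype(np.uint8)

# Two tiny synthetic "frames": a 3x3 patch brightens between them.
a = np.zeros((8, 8), dtype=np.uint8)
b = a.copy()
b[2:5, 2:5] = 200
mask = frame_difference_regions(a, b)
print(int(mask.sum()))  # 9 changed pixels
```

Connected regions of this mask correspond to the "difference areas" that the subsequent corrosion operation then cleans up.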
step S3: comparing each corroded difference region against a threshold: if any corroded difference region is smaller than a preset detection region threshold, discarding the corresponding region; otherwise, reserving the corresponding region;
in this embodiment, the preset detection area threshold in step S3 is preset to a detection area size with a fixed width and a fixed height, and the specific width and the specific height can be set according to the requirement, which is not limited in the present invention.
Step S4: expanding all the reserved difference areas after corrosion to obtain expanded difference areas so as to restore each difference area to be close to the size of the corresponding loading difference area before corrosion;
in this embodiment, the convolution kernels used in the etching operation in step S2 and the expanding operation in step S4 are both 5*5. In other embodiments, the size of the convolution kernel of the erosion and expansion operation may be set as desired, and the present invention is not limited thereto, but the convolution kernel of the erosion and expansion is typically chosen to be the same size in order to maintain the region size after the erosion and expansion to be comparable to the previous one.
Step S5: Extracting the RGB values (a standard color representation) and YCbCr values (a luma-chroma color space) of the pixel points sampled at a preset vertical interval ratio within any one of the expanded difference regions, and forming a single rectangle from the corresponding pixel points;
in this embodiment, the preset ratio of the longitudinal interval in step S5 is 1:4, and in other embodiments, other ratios may be set according to the requirements, which is not limited by the present invention.
Step S6: Extracting brightness information of the single rectangle, and calculating the proportion of red pixel points to yellow pixel points extracted from the single rectangle. When the brightness information reaches or exceeds a preset brightness threshold and the red-yellow proportion reaches or exceeds a preset red-yellow proportion threshold, a circumscribed rectangle of the corresponding difference region is constructed and the coordinates of its upper-left and lower-right vertices are recorded; otherwise, the corresponding difference region is discarded. By calculating the proportion of red to yellow pixels, this embodiment first eliminates scenes that cannot contain flames, or that contain only red objects and no flame, so that such scenes need not be input into the detection network. This reduces the number of detections the network performs on non-flame scenes and improves overall detection efficiency;
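A heavily hedged sketch of the step S6 screening: the patent presets the brightness and red-yellow thresholds without giving values, and does not define "red" and "yellow" pixels precisely, so the channel cutoffs and both thresholds below are illustrative assumptions only:

```python
import numpy as np

def passes_color_test(rgb_pixels, brightness_thresh=100.0, ry_thresh=0.3):
    """Brightness and red/yellow screening in the spirit of step S6.

    All thresholds and the red/yellow channel cutoffs are assumed
    values for illustration; the patent leaves them configurable.
    """
    r = rgb_pixels[..., 0].astype(float)
    g = rgb_pixels[..., 1].astype(float)
    b = rgb_pixels[..., 2].astype(float)
    brightness = (r + g + b).mean() / 3.0
    red    = (r > 150) & (g < 100) & (b < 100)   # assumed "red" rule
    yellow = (r > 150) & (g > 150) & (b < 100)   # assumed "yellow" rule
    ratio = (red.sum() + yellow.sum()) / rgb_pixels[..., 0].size
    return bool(brightness >= brightness_thresh and ratio >= ry_thresh)

fire_like = np.zeros((4, 4, 3), dtype=np.uint8)
fire_like[..., 0] = 255       # strong red everywhere
fire_like[:2, :, 1] = 200     # top half shifted toward yellow
print(passes_color_test(fire_like))  # True
```

Only regions passing this cheap test are handed to the YOLOv5 model, which is how the method saves detection runs on non-flame scenes.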
in this embodiment, the specific process of constructing the circumscribed rectangle corresponding to the difference region in step S6 includes:
step S601: traversing all pixel points in the corresponding difference region;
step S602: setting the width of the image as the x-axis direction and the height as the y-axis direction, and recording the minimum x, y coordinate values and the maximum x, y coordinate values in all pixel points;
step S603: the minimum x, y coordinate value is set as the upper left vertex coordinate of the circumscribed rectangle, and the maximum x, y coordinate value is set as the lower right vertex coordinate of the circumscribed rectangle.
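Steps S601 to S603 amount to taking the minimum and maximum pixel coordinates of the region; a minimal sketch:

```python
import numpy as np

def bounding_rectangle(mask):
    """Circumscribed rectangle of a binary difference region
    (steps S601-S603): the minimum x, y become the upper-left
    vertex and the maximum x, y the lower-right vertex.
    """
    ys, xs = np.nonzero(mask)   # coordinates of all region pixels
    return (int(xs.min()), int(ys.min())), (int(xs.max()), int(ys.max()))

mask = np.zeros((10, 10), dtype=np.uint8)
mask[3, 2] = mask[7, 6] = mask[5, 4] = 1
print(bounding_rectangle(mask))  # ((2, 3), (6, 7))
```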
Step S7: Inputting the later of the two frames from step S1 into a trained flame detection model for target detection, with a target detection threshold preset; when a detection area reaches the preset target detection threshold, the coordinates of the upper-left and lower-right vertices of the corresponding detection frame are output; otherwise, the corresponding detection area is determined to contain no flame. The flame detection model is a YOLOv5 network model that has been trained in advance;
step S8: acquiring an intersection of the detection frame output in the step S7 and the circumscribed rectangle of the corresponding difference region obtained in the step S6, and calculating the ratio of the intersection to the corresponding loading difference region in the step S2:
if the ratio is greater than or equal to the preset ratio, the existence of flame in the detection area is primarily judged, an intersection frame of the intersection is recorded, and then the flame verification process is continued through the infrared mode of the camera.
In this embodiment, by comprehensively considering the results of steps S6 and S7, the accuracy of flame identification and judgment can be improved.
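The step S8 combination of the two results can be sketched as a rectangle intersection followed by an area ratio; the box coordinates and the preset ratio of 0.25 below are illustrative, not values from the patent:

```python
def rect_intersection(a, b):
    """Intersection (x0, y0, x1, y1) of two rectangles, or None."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    if x0 >= x1 or y0 >= y1:
        return None
    return (x0, y0, x1, y1)

def area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

detection_box = (10, 10, 50, 50)   # from the YOLOv5 model (step S7)
circumscribed = (30, 30, 70, 70)   # from the difference region (step S6)
diff_area = 40 * 40                # area of the original difference region

inter = rect_intersection(detection_box, circumscribed)
ratio = area(inter) / diff_area
print(inter, ratio)                # (30, 30, 50, 50) 0.25
```

If `ratio` meets the preset ratio, the intersection box is recorded and passed on to the infrared verification of steps S9 to S11.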
In this embodiment, the flame verification process further includes:
step S9: automatically setting the corresponding camera to infrared mode through an interface provided by that camera; for example, if the camera is mounted on a pan-tilt unit, the interface may be provided by the pan-tilt controller and the camera settings adjusted through it; if the camera has no pan-tilt unit, its settings may be adjusted through the camera's own control interface. The present invention is not limited in this respect;
step S10: acquiring image data in an infrared mode, counting pixel values of all pixel points in the intersection frame corresponding to the step S8 in the infrared image, and judging that flame exists if the total number of pixels with the pixel values larger than a preset pixel threshold value is larger than or equal to the preset total number;
step S11: and finishing flame verification and adjusting the camera mode back to the non-infrared mode through an interface provided by the corresponding camera.
In general, the closer a pixel in an infrared image is to white, the stronger the infrared response it indicates. In this implementation, therefore, a primary flame identification is first performed by the YOLOv5 flame detection model after the ordinary monitoring image has been processed, and the preliminarily identified flame is then verified a second time through the camera's infrared mode. This reduces the false detections that arise in prior-art methods relying on a neural network alone, and improves the accuracy of flame detection.
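The infrared check of step S10 reduces to counting near-white pixels inside the recorded intersection box; the pixel threshold of 200 and the minimum count of 50 are assumed values, since the patent presets both without specifying them:

```python
import numpy as np

def verify_flame_infrared(ir_image, box, pixel_thresh=200, min_count=50):
    """Step S10 sketch: flame is confirmed when enough pixels inside
    the intersection box of the infrared frame exceed the pixel
    threshold. Both thresholds are illustrative assumptions.
    """
    x0, y0, x1, y1 = box
    patch = ir_image[y0:y1, x0:x1]
    return int((patch > pixel_thresh).sum()) >= min_count

ir = np.zeros((100, 100), dtype=np.uint8)
ir[20:40, 20:40] = 255             # hot area reads near-white in IR
print(verify_flame_infrared(ir, (15, 15, 45, 45)))  # True
```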
According to the flame detection method based on monitoring images provided by the invention, non-flame scenes are preliminarily excluded by processing the image using pixel-point properties, reducing the number of times the flame detection model is run on scenes that contain no flame and improving detection efficiency. Flame presence is judged comprehensively by combining the detection result of the flame detection model with the inter-frame difference computed before detection, reducing false detections. In addition, the camera's infrared mode further verifies the preliminarily detected flame, improving the accuracy of flame detection and further reducing false detections.
Those of ordinary skill in the art will appreciate that: the drawing is a schematic diagram of one embodiment and the modules or flows in the drawing are not necessarily required to practice the invention.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. The flame detection method based on the monitoring image is realized by at least one camera, and is characterized by comprising the following steps:
step S1: the at least one camera respectively acquires two frames of images in a detection scene as detection images, wherein a preset interval time is reserved between the two frames of images;
step S2: acquiring a plurality of loading difference areas between two frames of images by calculating the interframe difference of the two frames of images, and performing corrosion operation on all the loading difference areas to obtain a plurality of corroded difference areas so as to subtract and inhibit tiny objects in the images;
step S3: detecting a threshold value of each corroded difference region, if any corroded difference region is smaller than a preset detection region threshold value, discarding the corresponding region, otherwise, reserving the corresponding region;
step S4: expanding all the reserved difference areas after corrosion to obtain expanded difference areas so as to restore each difference area to be close to the size of the corresponding loading difference area before corrosion;
step S5: extracting RGB values and YCbCr values of all pixel points with preset proportion in longitudinal intervals in any expanded difference region, and forming a single rectangle by the corresponding pixel points;
step S6: extracting brightness information of the single rectangle, calculating the proportion of red pixel points to yellow pixel points extracted from the single rectangle, constructing an external rectangle corresponding to a difference region when the brightness information reaches and exceeds a preset brightness information threshold value and the proportion of the red pixel points to the yellow pixel points reaches and exceeds a preset red-yellow proportion threshold value, and recording coordinates of an upper left vertex and a lower right vertex of the external rectangle; otherwise, discarding the corresponding difference region;
step S7: inputting the next frame of images of the two frames of images in the step S1 into a trained flame detection model to detect targets, presetting a target detection threshold, and outputting coordinates of the upper left vertex and the lower right vertex of a corresponding detection frame when a detection area reaches the preset target detection threshold; otherwise, determining that the corresponding detection area has no flame; wherein the flame detection model is a YOLOv5 network model;
step S8: acquiring an intersection of the detection frame output in the step S7 and the circumscribed rectangle of the corresponding difference region obtained in the step S6, and calculating the ratio of the intersection to the corresponding loading difference region in the step S2:
if the ratio is greater than or equal to the preset ratio, the existence of flame in the detection area is primarily judged, an intersection frame of the intersection is recorded, and then the flame verification process is continued through the infrared mode of the camera.
2. The method for detecting a flame based on a monitoring image according to claim 1, wherein the preset interval time in step S1 is 2 seconds.
3. The method of claim 1, wherein the predetermined detection area threshold in step S3 is a predetermined detection area size of a fixed width and a fixed height.
4. The monitored image based flame detection method of claim 1, wherein the erosion operation of step S2 and the dilation operation of step S4 each employ a 5×5 convolution kernel.
5. The method for detecting flames based on a monitoring image according to claim 1, wherein the longitudinal interval preset ratio in step S5 is 1:4.
6. The method for detecting flame based on monitoring image according to claim 1, wherein the specific process of constructing the circumscribed rectangle corresponding to the difference region in step S6 comprises:
step S601: traversing all pixel points in the corresponding difference region;
step S602: setting the width of the image as the x-axis direction and the height as the y-axis direction, and recording the minimum x, y coordinate values and the maximum x, y coordinate values in all pixel points;
step S603: the minimum x, y coordinate value is set as the upper left vertex coordinate of the circumscribed rectangle, and the maximum x, y coordinate value is set as the lower right vertex coordinate of the circumscribed rectangle.
7. The monitored image based flame detection method of claim 1, wherein said flame verification process further comprises:
step S9: automatically setting the corresponding camera to be in an infrared mode through an interface provided by the corresponding camera;
step S10: acquiring image data in an infrared mode, counting pixel values of all pixel points in the intersection frame corresponding to the step S8 in the infrared image, and judging that flame exists if the total number of pixels with the pixel values larger than a preset pixel threshold value is larger than or equal to the preset total number;
step S11: finishing flame verification, and adjusting the camera mode back to the non-infrared mode through the interface provided by the corresponding camera.
CN202310125070.9A 2023-02-07 2023-02-07 Flame detection method based on monitoring image Pending CN116343110A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310125070.9A CN116343110A (en) 2023-02-07 2023-02-07 Flame detection method based on monitoring image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310125070.9A CN116343110A (en) 2023-02-07 2023-02-07 Flame detection method based on monitoring image

Publications (1)

Publication Number Publication Date
CN116343110A true CN116343110A (en) 2023-06-27

Family

ID=86875470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310125070.9A Pending CN116343110A (en) 2023-02-07 2023-02-07 Flame detection method based on monitoring image

Country Status (1)

Country Link
CN (1) CN116343110A (en)

Similar Documents

Publication Publication Date Title
CN112560657B (en) Method, device, computer device and storage medium for identifying smoke and fire
KR101075063B1 (en) Fire-Flame Detection Using Fuzzy Logic
KR102336030B1 (en) Electric vehicle charger fire detection and charger condition prediction system
KR100858140B1 (en) Method and system for detecting a fire by image processing
KR101998639B1 (en) Intelligent system for ignition point surveillance using composite image of thermal camera and color camera
US20150003675A1 (en) Image processing apparatus and method
KR101224548B1 (en) Fire imaging detection system and method
KR101066900B1 (en) An apparatus of dection for moving from cctv camera
JP3486229B2 (en) Image change detection device
CN113408479A (en) Flame detection method and device, computer equipment and storage medium
US8311345B2 (en) Method and system for detecting flame
KR101044903B1 (en) Fire detecting method using hidden markov models in video surveillance and monitoring system
JPH09233461A (en) Infrared ray fire monitoring device
CN115049955A (en) Fire detection analysis method and device based on video analysis technology
JP2002304677A (en) Method and device for detecting intruder
JPH0973541A (en) Object detection device/method
CN116343110A (en) Flame detection method based on monitoring image
CN105718881B (en) The zero illumination environment monitoring smoke dust method based on infrared video gray level image
JP2005070985A (en) Image processor, method and program
CN115767018A (en) Engine room fire detection monitoring system based on CCTV system
Thepade et al. Fire Detection System Using Color and Flickering Behaviour of Fire with Kekre's LUV Color Space
WO2017092589A1 (en) Method and device for determining portrait contour in image
CN114549406A (en) Hot rolling line management method, device and system, computing equipment and storage medium
JP2001023055A (en) Flame detector and flame detection method
CN113340352A (en) Valve hall monitoring method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 711c, 7 / F, block a, building 1, yard 19, Ronghua Middle Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing 102600

Applicant after: Beijing Zhongke Flux Technology Co.,Ltd.

Address before: Room 711c, 7 / F, block a, building 1, yard 19, Ronghua Middle Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing 102600

Applicant before: Beijing Ruixin high throughput technology Co.,Ltd.