CN111179279A - Comprehensive flame detection method based on ultraviolet and binocular vision

Comprehensive flame detection method based on ultraviolet and binocular vision

Info

Publication number
CN111179279A
CN111179279A
Authority
CN
China
Prior art keywords
flame
area
image
ultraviolet
binocular vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911326088.5A
Other languages
Chinese (zh)
Inventor
刘伟
方黎勇
王思维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Zhima Technology Co ltd
Original Assignee
Chengdu Zhima Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Zhima Technology Co ltd filed Critical Chengdu Zhima Technology Co ltd
Priority to CN201911326088.5A priority Critical patent/CN111179279A/en
Publication of CN111179279A publication Critical patent/CN111179279A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/254 Analysis of motion involving subtraction of images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143 Sensing or illuminating at different wavelengths

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Fire-Detection Mechanisms (AREA)

Abstract

The invention provides a comprehensive flame detection method based on ultraviolet and binocular vision, which comprises the following steps: S1, detecting the monitored environment in real time through an ultraviolet and binocular vision composite flame detector: an ultraviolet sensor first detects the ultraviolet spectrum of the flame and, under abnormal conditions, triggers a video camera to capture the current environment image data. S2, performing image analysis on the obtained video image data: applying motion detection; segmenting the suspected flame region image based on the combination of region growing and an HSI color model; judging the suspected flame area through flame characteristic detection; and carrying out binocular vision reconstruction according to the camera parameters to obtain the actual area and distance of the flame. S3, judging the flame grade at the current moment and over the whole combustion process according to the actual area of the flame.

Description

Comprehensive flame detection method based on ultraviolet and binocular vision
Technical Field
The invention relates to the field of image analysis, in particular to a comprehensive flame detection method based on ultraviolet and binocular vision.
Background
Conventional fire detection follows two main directions: according to the relation between the sensing element and the detected object, fire detection principles divide into two basic types, contact and non-contact. Contact detection mainly comprises temperature-sensing and smoke-sensing fire detectors; non-contact detection mainly comprises image-based detectors.
1. A temperature-sensing detector is a fire detector that responds to an abnormally high temperature or an abnormally high rate of temperature rise. It is more reliable than a smoke detector and places few demands on the environment, but because of its slow response to an initial fire it is not suitable for places where black smoke, dust, vapor or oil mist may be generated.
2. A smoke-sensing fire detector responds to visible or invisible smoke particles: it converts the change in smoke concentration at the detection point into an electrical signal to raise an alarm. It can detect danger in time at the initial stage of a fire, but it is suitable only for places where a fire produces heavy smoke or where smoldering is likely, and it is greatly limited in environments that ventilate too quickly or are normally smoky.
3. An image-based detector analyzes the current environment with a camera and image processing technology to detect whether fire characteristics exist. Most existing image-processing fire detection algorithms apply fixed thresholds, making it difficult to raise the detection rate and lower the false-detection rate at the same time.
Disclosure of Invention
The invention aims to at least solve the technical problems in the prior art, and particularly creatively provides a comprehensive flame detection method based on ultraviolet and binocular vision.
In order to achieve the above purpose, the invention provides a comprehensive flame detection method based on ultraviolet and binocular vision, which comprises the following steps:
S1, detecting the monitored environment in real time through an ultraviolet and binocular vision composite flame detector: an ultraviolet sensor detects the ultraviolet spectrum of the flame and, under abnormal conditions, triggers a video camera to capture the current environment image data;
S2, performing image analysis on the obtained environment image data: applying motion detection; segmenting the suspected flame region image based on the combination of region growing and an HSI color model; judging the suspected flame area through flame characteristic detection; and carrying out binocular vision reconstruction according to the camera parameters to obtain the actual area and distance of the flame;
S3, judging the flame grade at the current moment and over the whole combustion process according to the actual area and distance of the flame.
Preferably, the S2 includes:
S2-1, performing motion detection using a background difference operation and removing static interference sources from the image;
S2-2, segmenting the image using region growing and the HSI color model to obtain the suspected flame region in the image;
S2-3, identifying and judging the suspected flame by the flickering characteristic of a fire source, using the area variation of the flame-colored region;
S2-4, removing circular dynamic interference sources such as car lamps from the image by the non-circular characteristic of a fire source and a circularity calculation over the region, to obtain the final fire source region;
S2-5, performing binocular reconstruction according to the camera parameters to obtain the actual area and distance of the flame.
Preferably, the motion detection in S2-1 includes:
When the background model is established, three parameters are generally selected for model reference: the maximum gray value, the minimum gray value and the maximum time difference value. The background image is established with the MOG2 algorithm during background modeling. After the background image is established, a difference operation is performed between the current frame and the background image, and the motion area of the image is segmented out to obtain a segmented image. The algorithm is as follows:
I_k(i,j) = b'_k(i,j) + m_k(i,j) + n_k(i,j)
d_k(i,j) = I_k(i,j) - b_k(i,j)
where I_k(i,j) is the pixel information of the current picture at position (i,j) in the two-dimensional coordinate plane, b'_k(i,j) is the pixel information of the current background image at (i,j), m_k(i,j) is the pixel information of the moving object at (i,j), n_k(i,j) is the noise information in the image, d_k(i,j) is the pixel information of the foreground image, and b_k(i,j) is the background information.
Preferably, the image segmentation in S2-2 includes:
and (4) segmenting the image, and segmenting the suspected flame area by adopting area growing and HSI color space. The image segmentation by the region growing method is an algorithm for clustering images according to the similarity and connectivity of pixels, whether pixel points of four neighborhoods or eight neighborhoods of the seed points are similar to the seed points or not is sequentially judged by setting a plurality of seed points, and if the pixel points are similar to the seed points, a seed point set is added until the seed point set is empty.
And combining the region growth with an HSI color space, judging the suspected flame region through an HSI color model, taking the pixel point of the region as a seed point, and completing the segmentation of the suspected flame region through a region growth algorithm.
Preferably, the flicker frequency characteristic detection of the flame in S2-3 includes:
calculating the change of the flame area at adjacent moments so as to determine the flicker frequency of the flame; the algorithm is as follows:
DP_t = (A_{t+1} - A_t) × (A_t - A_{t-1}), t ∈ [1, N-2]
where DP_t is the product of the area differences of adjacent frames, A_{t-1}, A_t and A_{t+1} are the connected-domain areas of 3 adjacent frames, and N is the total number of video frames in 1 second. DP_t indicates whether the area of the detection region changes within this time period: when the value of DP_t meets the set threshold, i.e. the area has changed, the flame flicker count is increased by 1.
preferably, the detecting of the circularity feature in S2-4 includes:
The circularity C describes the complexity of an object's shape. Typical interference sources in fire detection, such as street lamps and car lamps, are objects with smooth, regular shapes, while the image shape of a flame is complex, so this characteristic distinguishes flames well from interference sources and can serve as one of the criteria for detecting flame. The formula is:
C = P² / A_t
where P is the perimeter of the connected-domain boundary and A_t is the connected-domain area. The minimum value of C is 4π (attained by a circle), and shape complexity grows with the value of C. Taking the reciprocal of C = P²/A_t and multiplying by 4π normalizes it to a number between 0 and 1:
C = 4πA_t / P²
Preferably, the S2-5 flame area and distance reconstruction includes:
On the basis of segmenting and extracting the flame region, binocular vision is adopted to measure the flame area and distance in the actual environment. In binocular vision, two cameras at different positions shoot the same scene, and the three-dimensional coordinates of a space point are obtained by calculating the parallax of that point between the two images.
Let P(x, y, z) be a feature point in space. In the images of the binocular camera, the imaging point of the left camera is P_left(x_left, y_left) and that of the right camera is P_right(x_right, y_right). The left and right cameras are installed at the same horizontal position, so the ordinate of the feature point is the same in both images, i.e. y_left = y_right = y. The geometrical coordinates of the feature point in actual space are calculated from the triangular relation as:
X = B·x_left / d,  Y = B·y / d,  Z = B·f / d
d = x_left - x_right
where B is the distance between the projection centers of the two cameras and f is the focal length of the cameras. Before binocular vision is used for measurement, the cameras need to be calibrated to obtain their internal and external parameters: the focal length f, the distance B between the projection centers, and the translation T and rotation between the two cameras are obtained through calibration.
Preferably, the S3 includes:
S3-1, flame grade determination for the current frame;
S3-2, grade determination for the whole flame combustion process.
Preferably, the S3-1 current-frame flame grade division method comprises:
calculating the current flame combustion grade based on the ultraviolet detector and the video. Returned detection data are set to 00 for normal and 01 for alarm. When the data returned by the ultraviolet detector is 01, the camera is started to take a snapshot, the current frame is recorded as k (k ∈ N*, where N* is the set of positive integers), and the current frame is processed. To analyze the flame grade, image processing is used to segment the flame from the background image: a background difference method first separates the foreground image, and the flame region in the foreground image is then segmented according to the static and dynamic characteristics of the flame. The flame is reconstructed through binocular vision to obtain its real area, which is compared with the plane of the current environment space. Recording the area of the flame zone as S_f and the cross-sectional area of the current environment space as S_a, the percentage is calculated as T_k (k ∈ N*).
Preferably, the S3-2 grade determination of the whole flame combustion process calculates the grade of the whole flame combustion process based on the ultraviolet detector and the video: the pixel percentage T_k (k ∈ N*) of the flame area in each frame of the preceding combustion process is obtained, the mean value E and the maximum value T_max are calculated, and from E and T_max a weighted average is computed:
E = (1/n) Σ_{k=1}^{n} T_k,  T_max = max_k T_k   (n = number of captured frames)
Under different environments, corresponding values of the weights a and b are set, and combining the calculation gives the weighted average
T̄ = a·E + b·T_max
The flame level of the whole combustion process is judged by taking the value of the weighted average as a basis in combination with the current environment.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
the ultraviolet fire detector has the advantages of sensitivity, reliability, dust pollution resistance, moisture resistance, corrosion resistance, and the like, and has high reaction speed. The invention designs a fire detection and measurement system by combining an ultraviolet fire detector with a computer vision technology, detects an ultraviolet spectrum in a wave band of 180-240 nm by using the ultraviolet detector, and further confirms the detected fire possibly by using the computer vision technology. And aiming at the condition that the traditional fire detector can not detect the fire, the fire detector provided by the invention has the advantages that the function of measuring the fire is added on the basis of fire detection, so that firefighters can master more detailed fire information.
The invention detects fire by combining an ultraviolet fire detector with computer vision technology; compared with a traditional fire detector it is more robust in large spaces and complex environments, and it reduces the probability of false and missed detections. Traditional fire detection technology yields no further fire information once a fire has been detected; the invention, by contrast, measures the size of the fire in real time while detecting it and can provide more fire information to the outside.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic diagram of the system of the present invention;
FIG. 2 is a flow chart of the method of the present invention;
FIG. 3 is a flow chart of the motion detection decision of the present invention;
FIG. 4 is a schematic model of binocular vision imaging of the present invention;
FIG. 5 is a schematic diagram of a flame raw frame in accordance with an embodiment of the invention;
FIG. 6 is a diagram of a foreground image according to the background subtraction method of the present invention;
FIGS. 7A and 7B are graphs of image segmentation effects based on region growing combined with the HSI color model according to the present invention;
FIG. 8A is a schematic view of a flame original frame;
FIG. 8B is a depth map of the flame;
fig. 8C and 8D are images captured by left and right cameras of a binocular camera, respectively;
fig. 8E is the actual distance of the flame from the binocular camera measured by binocular reconstruction.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As shown in fig. 1, the ultraviolet and video composite detector monitors the current environment, transmits detected data back to the server for analysis, and sends out a warning when a fire disaster is detected.
As shown in fig. 2, the system initially monitors the current environment through the ultraviolet fire detector. When the ultraviolet fire detector detects the ultraviolet spectrum of a suspected flame in the space, the camera is triggered to capture the current environment continuously until the flame ends, completely recording video images of the whole process from the appearance of the flame to its end. The captured images are analyzed as they are captured: image segmentation is performed using motion detection, region growing and the HSI color model; the flame is judged by flame characteristic detection; binocular vision reconstruction is performed according to the camera parameters; the actual area of the flame is calculated; and the flame grade is classified.
For the flame burning time calculation based on the ultraviolet detector, the returned detection data of the ultraviolet detector are set to 00 for normal and 01 for alarm; the number k of 01 readings is counted, and the camera is triggered to take a snapshot.
Flame segmentation method in image
a) Motion detection
As shown in fig. 3, the motion detection by the background subtraction method proceeds in three steps:
(1) establish a background image;
(2) perform a difference operation between the background image and the current frame;
(3) compare the pixel information after the difference operation with a threshold value.
The most important of these three steps is how to create an accurate background image; the accuracy of the subsequent difference operation depends on it. When the background model is built, three parameters are usually selected for model reference: the maximum gray value, the minimum gray value and the maximum time difference value. The background image is established with the MOG2 algorithm during background modeling. After the background image is established, the current frame and the background image undergo a difference operation, and the motion area of the image is segmented out to obtain a segmented image. The algorithm is as follows:
I_k(i,j) = b'_k(i,j) + m_k(i,j) + n_k(i,j)
d_k(i,j) = I_k(i,j) - b_k(i,j)
where I_k(i,j) is the pixel information of the current picture at position (i,j) in the two-dimensional coordinate plane, b'_k(i,j) is the pixel information of the current background image at (i,j), m_k(i,j) is the pixel information of the moving object at (i,j), n_k(i,j) is the noise information in the image, and d_k(i,j) is the pixel information of the foreground image.
To extract the moving object, a threshold segmentation is then commonly applied as the judgment standard: a pixel is assigned to the moving object when |d_k(i,j)| exceeds a set threshold, and to the background otherwise.
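A minimal OpenCV sketch of this step is given below, assuming MOG2 for the background model and a fixed difference threshold; the history length and threshold value are illustrative assumptions, not values from the patent.

```python
import cv2

# Background-difference motion detection: MOG2 maintains the background
# model b'_k, and the per-pixel difference d_k = |I_k - b'_k| is
# thresholded to keep only moving pixels.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def motion_mask(frame_bgr, diff_threshold=25):
    """Return a binary mask of moving pixels in the current frame."""
    fg = subtractor.apply(frame_bgr)           # updates the model, rough foreground
    background = subtractor.getBackgroundImage()
    d_k = cv2.absdiff(frame_bgr, background)   # difference against background image
    d_gray = cv2.cvtColor(d_k, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(d_gray, diff_threshold, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(mask, fg)           # keep pixels both tests agree on
```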
b) segmentation of images based on region growing in combination with HSI color space
The current image is judged against the HSI color model to screen out pixel points that meet the flame color. The screened pixel points serve as seed points for region growing; M(x, y) denotes a fire seed, and the whole region growing process is as follows (a sketch follows this list):
(1) V_i(x, y) is used as the flag of whether a pixel has been traversed: if pixel point P_i(x, y) has been traversed, the value of V_i(x, y) is 1; otherwise it is 0.
(2) A flame seed marked M_i(x, y), at the point P_i(x, y) where the original image F(x, y) maps it, is the starting point of region growth, marked V_i(x, y) = 1. The eight neighborhood points around P_i(x, y) are traversed and judged by the HSI model as flame pixels or not. If a traversed pixel is a flame pixel, M(x, y) is set to 1 and V(x, y) to 1; otherwise M(x, y) is unchanged and V(x, y) is set to 1.
(3) If a traversed point P_{i+1}(x, y) is judged to be a flame pixel, it becomes a new flame seed and step (2) is repeated. When the V values of all eight pixels around a point are 1, the traversal returns to the previous starting point and continues the eight-neighborhood traversal, until it returns to the initial point P(x, y).
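The following Python sketch mirrors steps (1)-(3), using an explicit stack in place of the recursive backtracking; the `is_flame_hsi` predicate stands in for the HSI color judgment, with thresholds taken from Table 1 further below and a component scaling that is an assumption.

```python
import numpy as np

def is_flame_hsi(h, s, i):
    # Flame-color judgment per the HSI model; ranges follow Table 1
    # (H: 0-60, S: 20-100, I: 100-255). The component scaling is assumed.
    return h <= 60 and 20 <= s <= 100 and i >= 100

def region_grow(hsi, seeds):
    """Grow flame regions from seed points.

    hsi:   H x W x 3 array of (H, S, I) values.
    seeds: iterable of (row, col) flame seed coordinates.
    Returns M, the boolean flame-region mask; V marks traversed pixels.
    """
    rows, cols = hsi.shape[:2]
    V = np.zeros((rows, cols), dtype=bool)   # traversal flags V_i(x, y)
    M = np.zeros((rows, cols), dtype=bool)   # flame marks M_i(x, y)
    stack = []
    for r, c in seeds:
        V[r, c] = True
        M[r, c] = True
        stack.append((r, c))
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                 (0, 1), (1, -1), (1, 0), (1, 1)]
    while stack:                              # step (3): backtrack via the stack
        r, c = stack.pop()
        for dr, dc in neighbors:              # step (2): eight-neighborhood scan
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not V[nr, nc]:
                V[nr, nc] = True
                h, s, i = hsi[nr, nc]
                if is_flame_hsi(h, s, i):
                    M[nr, nc] = True
                    stack.append((nr, nc))    # new flame seed
    return M
```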
c) Flame dynamics characteristic detection, i.e. the flicker characteristic:
If the color characteristics of the flame are met, its dynamic characteristics are analyzed, using a method that analyzes the flicker frequency characteristic of the flame. The flicker frequency of a flame is distributed between 3 Hz and 25 Hz, with the main frequency between 7 Hz and 12 Hz. The flicker characteristic arises because the continuously changing airflow continuously changes the flame area, which appears as flickering. The algorithm is as follows:
DP_t = (A_{t+1} - A_t) × (A_t - A_{t-1}), t ∈ [1, N-2]
where DP_t is the product of the area differences of adjacent frames, A_{t-1}, A_t and A_{t+1} are the connected-domain areas of 3 adjacent frames, and N is the total number of video frames in 1 second. DP_t indicates whether the area of the detection region changes within this time period: when the value of DP_t meets the set threshold, i.e. the area has changed, the flame flicker count is increased by 1.
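A sketch of the flicker count over one second of per-frame connected-domain areas follows. How the DP_t threshold is chosen is not specified in the text, so the sign convention below (a negative product marks a reversal in area growth) is an assumption.

```python
def flicker_count(areas, dp_threshold=0.0):
    """Count flame flickers from the connected-domain areas A_t of N frames.

    DP_t = (A_{t+1} - A_t) * (A_t - A_{t-1}); a value below the threshold
    means the area stopped growing or shrinking and turned around, which
    is counted as one flicker.
    """
    count = 0
    for t in range(1, len(areas) - 1):
        dp = (areas[t + 1] - areas[t]) * (areas[t] - areas[t - 1])
        if dp < dp_threshold:
            count += 1
    return count

# Usage: with a 25 fps camera, areas holds 25 values per second; a region
# whose count falls in the 3-25 Hz flame band is kept as a flame candidate.
```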
d) Non-circular flame characteristic:
The circularity C mainly describes the complexity of an object's shape. The usual interference sources in fire detection, such as street lamps and car lamps, are smooth, regular objects, while the image shape of a flame is complex, so this characteristic distinguishes flames well from interference sources and can serve as one of the criteria for detecting flames. The general formula is:
C = P² / A_t
where P is the perimeter of the connected-domain boundary and A_t is the connected-domain area. The minimum value of C is 4π (attained by a circle), and shape complexity grows with the value of C. Taking the reciprocal of C = P²/A_t and multiplying by 4π normalizes it to a number between 0 and 1:
C = 4πA_t / P²
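A sketch of the normalized circularity computed from the largest contour of a binary region mask; OpenCV's contour area and arc length stand in for the connected-domain area A_t and perimeter P.

```python
import cv2
import numpy as np

def normalized_circularity(mask):
    """Normalized circularity C = 4*pi*A_t / P^2 of the largest region.

    C is close to 1 for smooth round objects (street lamps, car lamps)
    and approaches 0 for the jagged outline of a flame.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return 0.0
    largest = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(largest)            # connected-domain area A_t
    perimeter = cv2.arcLength(largest, True)   # boundary perimeter P
    if perimeter == 0:
        return 0.0
    return 4.0 * np.pi * area / perimeter ** 2

# Usage: regions with C above a set cutoff (a tuning assumption) are
# rejected as circular interference sources such as car lamps.
```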
e) Binocular reconstruction according to the camera parameters to obtain the actual area and distance of the flame:
As shown in fig. 4, on the basis of the flame region segmentation and extraction, the area and distance of the flame in the actual environment are measured using binocular vision. In binocular vision, two cameras at different positions shoot the same scene, and the three-dimensional coordinates of a space point are obtained by calculating the parallax of that point between the two images.
Let P(x, y, z) be a feature point in space. In the images of the binocular camera, the imaging point of the left camera is P_left(x_left, y_left) and that of the right camera is P_right(x_right, y_right). The left and right cameras are installed at the same horizontal position, so the ordinate of the feature point is the same in both images, i.e. y_left = y_right = y. The geometrical coordinates of the feature point in actual space are calculated from the triangular relation as:
X = B·x_left / d,  Y = B·y / d,  Z = B·f / d
d = x_left - x_right
where B is the distance between the projection centers of the two cameras and f is the focal length of the cameras. Before binocular vision is used for measurement, the cameras need to be calibrated to obtain their internal and external parameters: the focal length f, the baseline distance B, and the translation T and rotation between the two cameras are obtained through calibration.
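A sketch of this triangulation for an already rectified pair, with B and f taken from calibration; units follow whatever the calibration used (e.g. pixels for f and coordinates, metres for B).

```python
def triangulate(x_left, x_right, y, B, f):
    """3D coordinates of a matched point pair from a rectified stereo rig.

    x_left, x_right: abscissae of the imaging points P_left, P_right.
    y: common ordinate (y_left == y_right after rectification).
    B: distance between the projection centers; f: focal length.
    """
    d = x_left - x_right            # disparity
    if d == 0:
        raise ValueError("zero disparity: point at infinity")
    Z = B * f / d                   # depth along the optical axis
    X = B * x_left / d
    Y = B * y / d
    return X, Y, Z
```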
Flame grade dividing method based on the actual area of the flame
① Judging the flame grade of the current frame
The current flame combustion grade is calculated based on the ultraviolet detector and the video; returned detection data are set to 00 for normal and 01 for alarm. When the data returned by the ultraviolet detector is 01, the camera is started to take a snapshot, the current frame is recorded as k (k ∈ N*), and the current frame is processed. To analyze the flame grade, image processing is used to segment the flame from the background image: a background difference method first separates the foreground image, and the flame region in the foreground image is then segmented according to the static and dynamic characteristics of the flame. The flame is reconstructed through binocular vision to obtain its real area, which is compared with the plane of the current environment space. Recording the area of the flame zone as S_f and the cross-sectional area of the current environment space as S_a, the percentage is calculated as T_k (k ∈ N*).
② grade judgment of whole flame combustion process
The grade of the whole flame combustion process is calculated based on the ultraviolet detector and the video: obtain the pixel percentage T_k (k ∈ N*) of the flame area in each frame of the preceding combustion process, calculate the mean value E and the maximum value T_max, and from E and T_max compute a weighted average:
E = (1/n) Σ_{k=1}^{n} T_k,  T_max = max_k T_k   (n = number of captured frames)
Under different environments, corresponding values of the weights a and b are set, and combining the calculation gives the weighted average
T̄ = a·E + b·T_max
The flame level of the whole combustion process is judged by taking the value of the weighted average as a basis in combination with the current environment.
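A sketch of the grade statistic follows; T_k is the per-frame ratio S_f/S_a, and the constraint a + b = 1 on the environment-dependent weights is an assumption, not stated in the text.

```python
def combustion_grade_statistic(percentages, a=0.5, b=0.5):
    """Weighted average over the whole combustion process.

    percentages: per-frame flame-area percentages T_k = S_f / S_a.
    a, b: environment-dependent weights (the values here are
    placeholders; a + b = 1 is an assumption).
    """
    E = sum(percentages) / len(percentages)   # mean value E
    T_max = max(percentages)                  # maximum value T_max
    return a * E + b * T_max                  # weighted average

# The returned value is then mapped to a flame grade using
# environment-specific cutoffs chosen for the deployment site.
```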
Examples
1. Environmental monitoring
The ultraviolet fire detector monitors the current environment for an ultraviolet spectrum in the 180-240 nm band; if such a spectrum is detected, the camera is triggered to take a snapshot.
2. Motion detection
The captured pictures are analyzed, starting with motion detection on the acquired picture. Background subtraction is one of the simplest and most efficient methods of motion recognition at present; to obtain an accurate foreground image, the key step is establishing an accurate background image. Background modeling currently relies mainly on the MOG2 algorithm; OpenCV 3.0 added a KNN background modeling algorithm. With a small number of moving targets, the KNN algorithm is slightly better than MOG2 in the accuracy of the resulting background image, but on the same 1080×720 picture KNN needs about 200 ms of processing time against about 70 ms for MOG2, so MOG2 is far faster. Since the MOG2 algorithm is excellent in both foreground continuity and processing speed, the system chooses it to model the background image. The motion region obtained by the background subtraction method is shown in the drawings: FIG. 5 is the flame original frame, and FIG. 6 is the foreground image from the background difference method.
3. segmentation of images based on region growing in combination with HSI color models
After the motion area is extracted, the suspected flame area within it is segmented, using region growing and the HSI color model. Color is one of the important factors for selecting fire seeds. To make flame identification more accurate, the HSI values of dozens of flame pictures taken at different times and in different environments, together with non-flame pictures of colors similar to flame, were stored in Excel; the distribution of each component of the flame pictures was analyzed to obtain the threshold of flame pictures under each component, giving the component table shown in Table 1:
Table 1: Flame color HSI value range
Color model | H component | S component | I component
HSI | 0-60 | 20-100 | 100-255
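A sketch of an RGB-to-HSI conversion matched to the ranges of Table 1 (H in degrees, S in percent, I in 0-255); the exact scaling used by the authors is an assumption.

```python
import numpy as np

def bgr_to_hsi(bgr):
    """Convert a BGR image (uint8) to HSI components as float arrays."""
    b, g, r = [bgr[..., k].astype(np.float64) for k in range(3)]
    i = (r + g + b) / 3.0                                    # intensity, 0-255
    min_rgb = np.minimum(np.minimum(r, g), b)
    s = 100.0 * (1.0 - 3.0 * min_rgb / (r + g + b + 1e-9))   # saturation, %
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b <= g, theta, 360.0 - theta)               # hue, degrees
    return h, s, i

def flame_color_mask(bgr):
    # Thresholds from Table 1: H 0-60, S 20-100, I 100-255.
    h, s, i = bgr_to_hsi(bgr)
    return (h <= 60) & (s >= 20) & (s <= 100) & (i >= 100)
```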
4. Flame dynamics detection
If the color characteristics of the flame are met, its dynamic characteristics are analyzed with the flame flicker frequency method. The flicker characteristic of the flame arises because the continuously changing airflow continuously changes the flame area, which appears as flickering. The algorithm computes the connected domains of 3 adjacent image frames, takes the difference of the connected-domain areas of each pair of adjacent frames, and multiplies the two differences: DP_t = (A_{t+1} - A_t) × (A_t - A_{t-1}), t ∈ [1, N-2], where DP_t is the product of the area differences of adjacent frames and A_{t-1}, A_t and A_{t+1} are the connected-domain areas of the 3 adjacent frames. DP_t indicates whether the area of the detection region changes within this time period: when the value of DP_t meets the set threshold, i.e. the area has changed, the flicker count is increased by 1.
The normalized circularity statistics of fire and interference sources are shown in Table 2:
Table 2: Circularity feature statistics for fire and interferents
(table data available only as an image in the original publication)
5. Binocular vision flame area reconstruction
The binocular camera is calibrated with the classic Zhang Zhengyou checkerboard calibration method; the parameters obtained are shown in Table 3 below:
TABLE 3 Camera parameters
Intrinsic parameters | Extrinsic parameters
Camera matrices K1, K2 | Rotation matrix R
Distortion coefficients D1, D2 | Translation vector t
As shown in fig. 6, figs. 7A and 7B, and figs. 8A to 8E, after the camera parameters are obtained, binocular rectification is performed, which mainly comprises two aspects: distortion correction and stereo correction. From the calibration results K1, K2, D1, D2, R, t, the following are computed:
left-eye rectification matrix (rotation matrix) R1 (3×3)
right-eye rectification matrix (rotation matrix) R2 (3×3)
left-eye projection matrix P1 (3×4)
right-eye projection matrix P2 (3×4)
disparity-to-depth mapping matrix Q (4×4)
From these 5 matrices, the baseline length B can be obtained. The left- and right-eye images are undistorted and stereo-rectified to give corrected image pairs; stereo matching is performed on the corrected images with the SGM (semi-global matching) algorithm, and a disparity map is computed. From the disparity map, the depth value is calculated through the geometric relation of focal length and baseline, and the three-dimensional coordinates are computed in combination with the camera intrinsic parameters.
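A sketch of this rectification-matching-reprojection pipeline in OpenCV, with K1, K2, D1, D2, R, t as the Table 3 calibration results; the SGBM matcher settings are illustrative assumptions.

```python
import cv2
import numpy as np

def reconstruct_3d(left, right, K1, D1, K2, D2, R, t):
    """Rectify a stereo pair, match it with semi-global matching,
    and reproject the disparity map to 3D points."""
    size = (left.shape[1], left.shape[0])
    # Stereo rectification: R1, R2 rotate each view, P1, P2 project it,
    # and Q is the 4x4 disparity-to-depth mapping matrix.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, t)
    m1x, m1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    left_r = cv2.remap(left, m1x, m1y, cv2.INTER_LINEAR)
    right_r = cv2.remap(right, m2x, m2y, cv2.INTER_LINEAR)
    # Semi-global matching on the rectified grayscale pair.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                 blockSize=5)
    disp = sgbm.compute(cv2.cvtColor(left_r, cv2.COLOR_BGR2GRAY),
                        cv2.cvtColor(right_r, cv2.COLOR_BGR2GRAY))
    disp = disp.astype(np.float32) / 16.0   # SGBM returns 16x fixed point
    # Depth from the baseline/focal-length geometry encoded in Q.
    return cv2.reprojectImageTo3D(disp, Q)
```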
6. Flame grade determination based on actual area of flame
① Judging the flame grade of the current frame
The grade of the real-time flame is analyzed: the flame is reconstructed through binocular vision to obtain its real area, which is compared with the plane of the current environment space. Recording the area of the flame zone as S_f and the cross-sectional area of the current environment space as S_a, the percentage is calculated as T_k (k ∈ N*).
② grade judgment of whole flame combustion process
The grade of the whole flame combustion process is calculated based on the ultraviolet detector and the video: obtain the pixel percentage T_k (k ∈ N*) of the flame area in each frame of the preceding combustion process, calculate the mean value E and the maximum value T_max, and from E and T_max compute a weighted average:
E = (1/n) Σ_{k=1}^{n} T_k,  T_max = max_k T_k   (n = number of captured frames)
Under different environments, corresponding values of the weights a and b are set, and combining the calculation gives the weighted average
T̄ = a·E + b·T_max
The flame level of the whole combustion process is judged by taking the value of the weighted average as a basis in combination with the current environment.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (8)

1. A comprehensive flame detection method based on ultraviolet and binocular vision is characterized by comprising the following steps:
s1, detecting the monitored environment in real time through an ultraviolet and binocular vision composite flame detector, detecting the ultraviolet spectrum of the flame through an ultraviolet sensor, and triggering a video camera to capture under abnormal conditions to obtain the current environment image data;
s2, performing image analysis on the obtained environment image data, using motion detection; based on the combination of region growing and an HSI color model, segmenting the suspected flame region image; then, judging a suspected flame area through flame characteristic detection; according to the camera parameters, performing binocular vision reconstruction to obtain the actual area and distance of the flame;
and S3, judging the current moment of the flame and the flame grade of the whole combustion process according to the actual area and distance of the flame.
2. The comprehensive flame detection method based on ultraviolet and binocular vision according to claim 1, wherein the S2 includes:
s2-1, performing motion detection by using background difference operation, and removing a static interference source in the image;
s2-2, segmenting the image by using region growing and HSI color model to obtain a suspected flame region in the image;
s2-3, identifying and judging the suspected flame by using the flickering characteristic of the fire source and the area transformation of the approximate flame color area;
s2-4, removing a circular dynamic interference source similar to a car lamp in the image by utilizing the non-circular characteristic of the fire source and the circular degree calculation of the area to obtain a final fire source area;
and S2-5, performing binocular reconstruction according to the camera parameters to obtain the actual area and distance of the flame.
3. The comprehensive flame detection method based on ultraviolet and binocular vision according to claim 2, wherein the motion detection in the S2-1 includes:
when a background model is established, three parameters are generally selected for model reference: the maximum gray value, the minimum gray value and the maximum time difference value; the background image is established with the MOG2 algorithm during background modeling; after the background image is established, a difference operation is performed between the current frame and the background image, and the motion area of the image is segmented out to obtain a segmented image; the algorithm is as follows:
I_k(i,j) = b'_k(i,j) + m_k(i,j) + n_k(i,j)
d_k(i,j) = I_k(i,j) - b_k(i,j)
where I_k(i,j) is the pixel information of the current picture at position (i,j) in the two-dimensional coordinate plane, b'_k(i,j) is the pixel information of the current background image at (i,j), m_k(i,j) is the pixel information of the moving object at (i,j), n_k(i,j) is the noise information in the image, d_k(i,j) is the pixel information of the foreground image, and b_k(i,j) is the background information.
4. The comprehensive flame detection method based on ultraviolet and binocular vision according to claim 2, wherein the image segmentation in the S2-2 step comprises:
segmenting the image, with the suspected flame area segmented by region growing combined with the HSI color model; region-growing image segmentation is an algorithm that clusters an image according to the similarity and connectivity of pixels: several seed points are set, the pixel points in the four- or eight-neighborhood of each seed point are judged in turn for similarity to the seed point, and similar points are added to the seed point set, until the seed point set is empty;
region growing is combined with the HSI color model: the suspected flame region is judged through the HSI color model, the pixel points of that region are taken as seed points, and the segmentation of the suspected flame region is completed by the region growing algorithm.
5. The comprehensive flame detection method based on ultraviolet and binocular vision according to claim 2, wherein the flame dynamics feature detection in the S2-3 step comprises:
calculating the change of the flame area at adjacent moments so as to determine the flicker frequency of the flame; the algorithm is as follows:
DP_t = (A_{t+1} - A_t) × (A_t - A_{t-1}), t ∈ [1, N-2]
where DP_t is the product of the area differences of adjacent frames, A_{t-1}, A_t and A_{t+1} are the connected-domain areas of 3 adjacent frames, and N is the total number of video frames in 1 second; DP_t indicates whether the area of the detection region changes within this time period: when the value of DP_t meets the set threshold, i.e. the area has changed, the flicker count is increased by 1.
6. The comprehensive flame detection method based on ultraviolet and binocular vision according to claim 2, wherein the circularity feature detection in the S2-4 step comprises:
the circularity C describes the complexity of an object's shape; typical interference sources in fire detection, such as street lamps and car lamps, are objects with smooth, regular shapes, while the image shape of a flame is complex, so this characteristic distinguishes flames well from interference sources and can serve as one of the criteria for detecting flame; the formula is:
C = P² / A_t
where P is the perimeter of the connected-domain boundary and A_t is the connected-domain area; the minimum value of C is 4π (attained by a circle), and shape complexity grows with the value of C; taking the reciprocal of C = P²/A_t and multiplying by 4π normalizes it to a number between 0 and 1:
C = 4πA_t / P²
7. the comprehensive flame detection method based on ultraviolet and binocular vision according to claim 2, wherein the binocular reconstruction in the S2-5 comprises:
on the basis of segmenting and extracting the flame region, measuring the flame area and the distance in the actual environment by adopting binocular vision; the binocular vision is that two cameras at different positions shoot the same scene, and the three-dimensional coordinates of a space point are obtained by calculating the parallax of the point in the two images;
let P(x, y, z) be a feature point in space; in the images of the binocular camera, the imaging point of the left camera is P_left(x_left, y_left) and that of the right camera is P_right(x_right, y_right); the left and right cameras are installed at the same horizontal position, so the ordinate of the feature point is the same in both images, i.e. y_left = y_right = y; the geometrical coordinates of the feature point in actual space are calculated from the triangular relation as:
X = B·x_left / d,  Y = B·y / d,  Z = B·f / d
d = x_left - x_right
where B is the distance between the projection centers of the two cameras and f is the focal length of the cameras; before binocular vision is used for measurement, the cameras need to be calibrated to obtain their internal and external parameters: the focal length f, the distance B between the projection centers, and the translation T and rotation between the two cameras are obtained through calibration.
8. The comprehensive flame detection method based on ultraviolet and binocular vision according to claim 1, wherein the S3 includes:
S3-1, judging the flame grade of the current frame;
the current flame combustion grade is calculated based on the ultraviolet detector and the video, with returned detection data set to 00 for normal and 01 for alarm; when the data returned by the ultraviolet detector is 01, the camera is started to take a snapshot, the current frame is recorded as k (k ∈ N*), and the current frame is processed; to analyze the flame grade, image processing is used to segment the flame from the background image: a background difference method first separates the foreground image, and the flame region in the foreground image is then segmented according to the static and dynamic characteristics of the flame; the flame is reconstructed through binocular vision to obtain its real area, which is compared with the plane of the current environment space; recording the area of the flame zone as S_f and the cross-sectional area of the current environment space as S_a, the percentage is calculated as T_k (k ∈ N*), where N* is the set of positive integers;
S3-2, grade determination of the whole flame combustion process;
the grade of the whole flame combustion process is calculated based on the ultraviolet detector and the video: the pixel percentage T_k (k ∈ N*) of the flame area in each frame of the preceding combustion process is obtained, the mean value E and the maximum value T_max are calculated, and from E and T_max a weighted average is computed:
E = (1/n) Σ_{k=1}^{n} T_k,  T_max = max_k T_k   (n = number of captured frames)
under different environments, corresponding values of the weights a and b are set, and combining the calculation gives the weighted average
T̄ = a·E + b·T_max
The flame level of the whole combustion process is judged by taking the value of the weighted average as a basis in combination with the current environment.
CN201911326088.5A 2019-12-20 2019-12-20 Comprehensive flame detection method based on ultraviolet and binocular vision Pending CN111179279A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911326088.5A CN111179279A (en) 2019-12-20 2019-12-20 Comprehensive flame detection method based on ultraviolet and binocular vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911326088.5A CN111179279A (en) 2019-12-20 2019-12-20 Comprehensive flame detection method based on ultraviolet and binocular vision

Publications (1)

Publication Number Publication Date
CN111179279A true CN111179279A (en) 2020-05-19

Family

ID=70653921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911326088.5A Pending CN111179279A (en) 2019-12-20 2019-12-20 Comprehensive flame detection method based on ultraviolet and binocular vision

Country Status (1)

Country Link
CN (1) CN111179279A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798515A (en) * 2020-06-30 2020-10-20 大连亚泰华光电技术有限公司 Stereoscopic vision monitoring method for burning condition identification
CN111951508A (en) * 2020-07-03 2020-11-17 北京中安安博文化科技有限公司 Fire classification method, device, medium and electronic equipment
CN111953933A (en) * 2020-07-03 2020-11-17 北京中安安博文化科技有限公司 Method, device, medium and electronic equipment for determining fire area
CN111986437A (en) * 2020-07-02 2020-11-24 湖南翰坤实业有限公司 Fire source detection and positioning method and system
CN111985489A (en) * 2020-09-01 2020-11-24 安徽炬视科技有限公司 Night light and flame classification discrimination algorithm combining target tracking and motion analysis
CN111986436A (en) * 2020-09-02 2020-11-24 成都指码科技有限公司 Comprehensive flame detection method based on ultraviolet and deep neural networks
CN112069975A (en) * 2020-09-02 2020-12-11 成都指码科技有限公司 Comprehensive flame detection method based on ultraviolet, infrared and vision
CN112464813A (en) * 2020-11-26 2021-03-09 国网北京市电力公司 Method and device for monitoring mountain fire
CN113379998A (en) * 2021-06-09 2021-09-10 南京品傲光电科技有限公司 Automatic fire alarm system in petrochemical tank district
CN113990017A (en) * 2021-11-21 2022-01-28 特斯联科技集团有限公司 Forest and grassland fire early warning system and method based on PNN neural network
CN114152347A (en) * 2021-09-30 2022-03-08 国网黑龙江省电力有限公司电力科学研究院 Transformer substation power equipment fault positioning and fire research and judgment comprehensive detection method
CN114425133A (en) * 2022-02-09 2022-05-03 吕德生 Indoor flame autonomous inspection and fire extinguishing method
CN115294722A (en) * 2022-08-02 2022-11-04 汉熵通信有限公司 Flame detection device and method thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU3201101A (en) * 2000-02-07 2001-08-14 Intelligent Security Limited Smoke and flame detection
CN106846375A (en) * 2016-12-30 2017-06-13 广东工业大学 A kind of flame detecting method for being applied to autonomous firefighting robot

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU3201101A (en) * 2000-02-07 2001-08-14 Intelligent Security Limited Smoke and flame detection
CN106846375A (en) * 2016-12-30 2017-06-13 广东工业大学 A kind of flame detecting method for being applied to autonomous firefighting robot

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
田佳霖 (Tian Jialin): "Video Fire Detection Based on Flame Characteristic Analysis", Information Technology and Informatization *
蔡百会 (Cai Baihui): "Research on a Video-Image-Based Smoke and Fire Detection System", China Doctoral Dissertations Full-text Database *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798515B (en) * 2020-06-30 2024-01-12 大连亚泰华光电技术有限公司 Stereoscopic vision monitoring method for recognizing incineration condition
CN111798515A (en) * 2020-06-30 2020-10-20 大连亚泰华光电技术有限公司 Stereoscopic vision monitoring method for burning condition identification
CN111986437A (en) * 2020-07-02 2020-11-24 湖南翰坤实业有限公司 Fire source detection and positioning method and system
CN111953933B (en) * 2020-07-03 2022-07-05 北京中安安博文化科技有限公司 Method, device, medium and electronic equipment for determining fire area
CN111951508A (en) * 2020-07-03 2020-11-17 北京中安安博文化科技有限公司 Fire classification method, device, medium and electronic equipment
CN111953933A (en) * 2020-07-03 2020-11-17 北京中安安博文化科技有限公司 Method, device, medium and electronic equipment for determining fire area
CN111985489A (en) * 2020-09-01 2020-11-24 安徽炬视科技有限公司 Night light and flame classification discrimination algorithm combining target tracking and motion analysis
CN111985489B (en) * 2020-09-01 2024-04-02 安徽炬视科技有限公司 Night lamplight and flame classification discrimination algorithm combining target tracking and motion analysis
CN111986436B (en) * 2020-09-02 2022-12-13 成都视道信息技术有限公司 Comprehensive flame detection method based on ultraviolet and deep neural networks
CN111986436A (en) * 2020-09-02 2020-11-24 成都指码科技有限公司 Comprehensive flame detection method based on ultraviolet and deep neural networks
CN112069975B (en) * 2020-09-02 2024-06-04 成都视道信息技术有限公司 Comprehensive flame detection method based on ultraviolet, infrared and vision
CN112069975A (en) * 2020-09-02 2020-12-11 成都指码科技有限公司 Comprehensive flame detection method based on ultraviolet, infrared and vision
CN112464813B (en) * 2020-11-26 2024-07-05 国网北京市电力公司 Mountain fire monitoring method and device
CN112464813A (en) * 2020-11-26 2021-03-09 国网北京市电力公司 Method and device for monitoring mountain fire
CN113379998A (en) * 2021-06-09 2021-09-10 南京品傲光电科技有限公司 Automatic fire alarm system in petrochemical tank district
CN114152347A (en) * 2021-09-30 2022-03-08 国网黑龙江省电力有限公司电力科学研究院 Transformer substation power equipment fault positioning and fire research and judgment comprehensive detection method
CN113990017B (en) * 2021-11-21 2022-04-29 特斯联科技集团有限公司 Forest and grassland fire early warning system and method based on PNN neural network
CN113990017A (en) * 2021-11-21 2022-01-28 特斯联科技集团有限公司 Forest and grassland fire early warning system and method based on PNN neural network
CN114425133A (en) * 2022-02-09 2022-05-03 吕德生 Indoor flame autonomous inspection and fire extinguishing method
CN114425133B (en) * 2022-02-09 2023-10-17 吕德生 Indoor flame autonomous inspection and fire extinguishing method
CN115294722B (en) * 2022-08-02 2024-01-26 汉熵通信有限公司 Flame detection device and method thereof
CN115294722A (en) * 2022-08-02 2022-11-04 汉熵通信有限公司 Flame detection device and method thereof

Similar Documents

Publication Publication Date Title
CN111179279A (en) Comprehensive flame detection method based on ultraviolet and binocular vision
CN109743879B (en) Underground pipe gallery leakage detection method based on dynamic infrared thermography processing
CN109816678B (en) Automatic nozzle atomization angle detection system and method based on vision
EP2118862B1 (en) System and method for video detection of smoke and flame
CN105046868B (en) A kind of fire alarm method based on thermal infrared imager in long and narrow environment
CN107085714B (en) Forest fire detection method based on video
CN111739250B (en) Fire detection method and system combining image processing technology and infrared sensor
WO2016199244A1 (en) Object recognition device and object recognition system
CN111047655B (en) High-definition camera cloth defect detection method based on convolutional neural network
CN112069975A (en) Comprehensive flame detection method based on ultraviolet, infrared and vision
CN102881106A (en) Dual-detection forest fire identification system through thermal imaging video and identification method thereof
CN109084350A (en) A kind of kitchen ventilator and oil smoke concentration detection method having filtering functions vision-based detection module
JP7143174B2 (en) Smoke detection device and smoke identification method
CN101316371B (en) Flame detecting method and device
CN111753794B (en) Fruit quality classification method, device, electronic equipment and readable storage medium
JP2010097412A (en) Smoke detecting apparatus
CN108038510A (en) A kind of detection method based on doubtful flame region feature
CN110189375A (en) A kind of images steganalysis method based on monocular vision measurement
CN109544535B (en) Peeping camera detection method and system based on optical filtering characteristics of infrared cut-off filter
CN114399882A (en) Fire source detection, identification and early warning method for fire-fighting robot
CN103456123B (en) A kind of video smoke detection method based on flowing with diffusion characteristic
TWI493510B (en) Falling down detection method
Stent et al. An Image-Based System for Change Detection on Tunnel Linings.
CN111353350B (en) Flame detection and positioning method based on combined sensor image fusion technology
CN108010076B (en) End face appearance modeling method for intensive industrial bar image detection

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 109, 1st Floor, Building 2, No. 11, Tianying Road, High tech Zone, Chengdu, Sichuan 611700

Applicant after: Chengdu Shidao Information Technology Co.,Ltd.

Address before: 611731 floor 2, No. 4, Xinhang Road, West Park, high tech Zone (West Zone), Chengdu, Sichuan

Applicant before: CHENGDU ZHIMA TECHNOLOGY CO.,LTD.

CB02 Change of applicant information
RJ01 Rejection of invention patent application after publication

Application publication date: 20200519

RJ01 Rejection of invention patent application after publication