CN110516609B - Fire disaster video detection and early warning method based on image multi-feature fusion - Google Patents


Info

Publication number
CN110516609B
CN110516609B CN201910802918.0A
Authority
CN
China
Prior art keywords: area, flame, smoke, suspected, fire
Prior art date
Legal status: Active
Application number
CN201910802918.0A
Other languages
Chinese (zh)
Other versions
CN110516609A (en)
Inventor
陈美娟
何爱龙
管铭锋
Current Assignee
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201910802918.0A
Publication of CN110516609A
Application granted
Publication of CN110516609B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Fire-Detection Mechanisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fire video detection and early warning method based on image multi-feature fusion. After the image sequence of a video is acquired, it is first preprocessed; a foreground region is then extracted and the detection candidate regions are obtained. Next, static and dynamic features are extracted from the candidate regions: for flame detection they are used as the input of an SVM classifier to judge whether flame is present, and for smoke detection the per-feature judgment results are combined by a selectable logical operation to decide whether smoke is present. Finally, if flame or smoke is detected, a fire discriminator judges from the growth trend whether a fire is forming; when a fire is judged to have formed, an alarm is raised at the monitored site, otherwise only a warning is issued at the background. The invention can be combined with existing monitoring systems and applied to places such as shopping malls and warehouses; it reduces the cost of fire detection and early warning, the detection method has good generalization ability and applicability, and it provides reliable fire detection and early warning of practical value.

Description

Fire disaster video detection and early warning method based on image multi-feature fusion
Technical Field
The invention relates to the field of image processing and recognition in computer vision, in particular to a fire disaster video detection and early warning method based on image multi-feature fusion.
Background
Fire is one of the main threats to the safety of human life and property, and with the development of society its harm to human society and the natural environment has become ever more serious. Accurately and rapidly detecting fire is therefore an important research subject. Traditional fire detection equipment mainly relies on temperature-sensing and smoke-sensing detectors, but such sensor-based detectors only alarm once a detection threshold is reached, so the delay is serious; and because their detection range is small, they must be deployed on a large scale to be effective, which is very costly. In addition, in high buildings, forests, tunnels and the like, such devices often fail to detect fires accurately and in time because of the large space and the dilution caused by airflow.
With the development of video surveillance, monitoring cameras are now distributed throughout streets and alleys. By combining image processing and image recognition techniques, the fire detection task can be completed on top of the existing video monitoring system, which reduces cost, improves anti-interference ability, and adapts well to complex environments with large spaces and strong airflow. Many image-based fire detection methods exist, but they share the following problems: on the premise of reliable detection, their delay still needs to be improved; they mainly detect flame, and their ability to detect smoldering fire through smoke is insufficient; and they lack a mechanism for deciding whether a detection result actually constitutes a fire and for grading the alarm, so that normal or controlled use of fire triggers public alarms and causes panic.
Disclosure of Invention
The purpose of the invention is as follows: to overcome the defects of the prior art, the invention provides a fire video detection and early warning method based on image multi-feature fusion. The invention can be combined with an existing video monitoring system to detect flame and smoke simultaneously. First, median filtering is used to denoise the image; a new background model is then adopted to better extract the motion region; a support vector machine classifier and a logical operator with adjustable detection sensitivity improve the generalization ability and detection efficiency of the method; finally, a mechanism for judging whether a fire may be forming is introduced to handle fire conditions in a graded way, which improves the reliability of the early warning and reduces the harm caused by fire to the greatest extent.
The technical scheme is as follows: in order to achieve the purpose, the invention adopts the technical scheme that:
a fire video detection and early warning method based on image multi-feature fusion: after the image sequence of a video is acquired, it is first preprocessed to reduce noise and improve image quality; a foreground region is then extracted by a motion-region extraction method to eliminate most interference, the foreground is filtered with a color model, and a morphological closing operation yields the detection candidate regions. Next, static and dynamic features are extracted from the candidate regions: for flame detection they serve as the input of an SVM classifier to judge whether flame is present, and for smoke detection the per-feature judgment results are combined by a selectable logical operation to decide whether smoke is present. Finally, if flame or smoke is detected, a fire discriminator judges from the growth trend whether a fire is forming; when a fire is judged to have formed, an alarm is raised at the monitored site, otherwise only a warning is issued at the background. The invention can be combined with existing monitoring systems and applied to places such as shopping malls and warehouses; it reduces the cost of fire detection and early warning, has good generalization ability and applicability, and provides reliable fire detection and early warning of practical value. The method specifically comprises the following steps:
s1, carrying out denoising pretreatment on the obtained video image sequence by using a median filtering method;
s2, establishing a background model for the preprocessed image by using an improved method based on a gradient motion historical map to extract a motion region;
s3, extracting the pixels of the motion area by using the color models of flame and smoke respectively to obtain a color suspected area;
s4, performing morphological closing operation on the color suspected area to obtain a smooth and continuous suspected area;
s5, extracting dynamic and static characteristics of the suspected area;
s6, inputting the extracted flame characteristic values into an SVM classifier to judge whether flame is contained; at the same time, calculating the extracted smoke characteristics respectively and inputting the results into a logical operator to judge whether smoke is contained;
s7 inputs a continuous sequence of images containing flames or smoke into a fire discriminator to determine whether or not a fire is established, and different warning processes are performed.
Further, the step S1 of using median filtering on the acquired video image sequence specifically includes:
and traversing and calculating the gray value of each pixel point of the initial image, and calculating the median of the gray values in the L multiplied by L neighborhood of each pixel point to replace the original gray value of the pixel, so that the noise in the image, particularly the salt and pepper noise, can be effectively removed, and the image quality is improved.
Further, the step S2 of extracting the motion region by using a method based on a gradient motion history map specifically includes the following steps:
s21 first calculates a binary gradient map BGI (x, y) by the following formula:
BGI(x, y) = BGI_x(x, y) ∨ BGI_y(x, y)
where BGI_x(x, y) and BGI_y(x, y) are the binarized gradient images in the x-axis and y-axis directions respectively, the gradient images being computed from the gray-scale image;
s22 calculates a gradient motion history map GMHI (x, y), which records motion history information of each pixel, as follows:
GMHI(x, y) = t,            if BGI(x, y) = 1
GMHI(x, y) = GMHI(x, y),   if BGI(x, y) = 0 and t − GMHI(x, y) < T
GMHI(x, y) = 0,            otherwise
where t is the current time and T is a preset time interval, time usually being measured in frames; only the motion history within the most recent T frames is recorded;
s23 calculates the effective motion map EMI (x, y) by the following formula:
EMI(x, y) = 1,   if min ≤ MAX(x, y) − MIN(x, y) ≤ max
EMI(x, y) = 0,   otherwise
where MAX(x, y) and MIN(x, y) are the maximum and minimum GMHI values in the m×m neighborhood of pixel (x, y) in the gradient motion history map, and min and max are preset constants: when the difference MAX(x, y) − MIN(x, y) is larger than max, pixel (x, y) lies on a background edge; conversely, when the difference is smaller than min, pixel (x, y) is a static pixel or a pixel in the interior of the background. Only pixels whose difference lies between the two constants are recorded in the effective motion map as the motion region.
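A simplified NumPy sketch of steps S22 and S23; the exact update rule, neighborhood handling and the thresholds `lo`/`hi` (standing in for the preset constants min and max) are plausible readings of the text, not the patent's calibrated formulas:

```python
import numpy as np

def update_gmhi(gmhi, bgi, t, T):
    """Gradient motion history: stamp moving pixels with the current frame
    index t and forget history older than T frames."""
    gmhi = np.where(bgi == 1, t, gmhi)
    gmhi[t - gmhi >= T] = 0
    return gmhi

def effective_motion(gmhi, m=3, lo=1, hi=20):
    """EMI(x, y) = 1 where the local max-min spread of GMHI lies in [lo, hi]."""
    pad = m // 2
    p = np.pad(gmhi, pad, mode="edge")
    h, w = gmhi.shape
    emi = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            win = p[y:y + m, x:x + m]
            spread = win.max() - win.min()
            if lo <= spread <= hi:
                emi[y, x] = 1
    return emi
```

A pixel that just moved gets a fresh timestamp, so its neighborhood shows a nonzero spread of GMHI values and is marked in the effective motion map.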
Further, in step S3, establishing a color model for the flame and smoke and extracting a suspected color area from the moving area, the method specifically includes:
s31, the temperature of an early-stage flame is low, so its color is mainly distributed in the red, orange and yellow regions, with red as the main component. Based on these characteristics the color features of flame are described in the RGB color space; to prevent luminance information from interfering with color information, and because the luminance component in flame images is large, the HSV and YCbCr color spaces are used to further constrain the luminance and color features. The flame color model rules are therefore based on the three color spaces RGB, HSV and YCbCr, and the rules are combined by logical AND to obtain the suspected flame color region. The specific rules are:
rule 1: r (x, y)>RT
Rule 2: r (x, y) ≧ G (x, y) > B (x, y)
Rule 3: s (x, y) ≥ 255-R (x, y)) ST/RT
Rule 4: h1<H(x,y)<H2,S1<S(x,y)<S2,V1<V(x,y)<V2
Rule 5: y (x, Y) is more than or equal to Cr (x, Y) is more than or equal to Cb (x, Y)
Rule 6: y (x, Y)>Ymean,Cb(x,y)<Cbmean,Cr(x,y)>Crmean
Rule 7: cr (x, y) -Cb (x, y) ≥ lambda
where R(x, y), G(x, y) and B(x, y) are the red, green and blue components of pixel (x, y) in RGB space, and R_T is the threshold of the red component; H(x, y), S(x, y) and V(x, y) are the hue, saturation and value components of pixel (x, y) in HSV color space, and S_T is the saturation value in HSV space corresponding to the red component R_T; H_1, H_2, S_1, S_2, V_1, V_2 are preset thresholds for hue, saturation and value; Y(x, y), Cb(x, y) and Cr(x, y) are the luminance, blue-difference and red-difference components of pixel (x, y) in YCbCr space, and λ is a non-negative threshold on the difference of the Cr(x, y) and Cb(x, y) components; Y_mean, Cb_mean and Cr_mean are the averages of all pixels in the current image over the Y, Cb and Cr components;
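A few of the seven flame rules (rules 1, 2 and 7) can be sketched as a NumPy mask; the threshold values R_T and λ below are illustrative, not the patent's calibrated constants, and the remaining rules are omitted for brevity:

```python
import numpy as np

def flame_color_mask(rgb, ycbcr, R_T=150, lam=20):
    """AND of a subset of the flame color rules (1, 2 and 7).
    rgb and ycbcr are HxWx3 arrays of the same image."""
    R = rgb[..., 0].astype(int)
    G = rgb[..., 1].astype(int)
    B = rgb[..., 2].astype(int)
    Cb = ycbcr[..., 1].astype(int)
    Cr = ycbcr[..., 2].astype(int)
    rule1 = R > R_T                 # strong red component
    rule2 = (R >= G) & (G > B)      # red >= green > blue ordering
    rule7 = (Cr - Cb) >= lam        # red-difference dominates blue-difference
    return rule1 & rule2 & rule7
```

The full model would AND all seven rules across the three color spaces in the same way.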
s32, smoke is the main sign of an early fire or a smoldering fire and usually appears gray, so its R, G, B component values are approximately equal; because combustion products differ, its brightness also varies within corresponding intervals and divides into light gray and dark gray. The smoke color model rules are based on the RGB and HSI spaces, and the rules are combined by logical AND to obtain the suspected smoke color region. The specific rules are:
Rule 1: |R(x, y) − G(x, y)| ≤ δ_1
Rule 2: |R(x, y) − B(x, y)| ≤ δ_2
Rule 3: |G(x, y) − B(x, y)| ≤ δ_3
Rule 4: L_1 ≤ I(x, y) ≤ L_2 or D_1 ≤ I(x, y) ≤ D_2
where δ_1, δ_2, δ_3 are thresholds on the absolute differences between the R, G, B components, and the constants L_1, L_2 and D_1, D_2 are the upper and lower bounds within which the intensity I(x, y) varies for light gray and dark gray respectively.
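The smoke rules translate into a similar mask; the tolerance `d` and the light/dark intensity bands below are illustrative values, not the patent's calibrated constants:

```python
import numpy as np

def smoke_color_mask(rgb, intensity, d=15, light=(150, 220), dark=(60, 120)):
    """Gray-like pixels (R, G, B nearly equal) whose intensity falls in the
    light-gray or dark-gray band."""
    R = rgb[..., 0].astype(int)
    G = rgb[..., 1].astype(int)
    B = rgb[..., 2].astype(int)
    gray_like = (np.abs(R - G) <= d) & (np.abs(R - B) <= d) & (np.abs(G - B) <= d)
    L1, L2 = light
    D1, D2 = dark
    in_band = ((intensity >= L1) & (intensity <= L2)) | \
              ((intensity >= D1) & (intensity <= D2))
    return gray_like & in_band
```
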
Further, in step S4, a morphological closing operation, dilation followed by erosion, is performed on the suspected color regions; small holes and seams are filled and smoothed, yielding smooth and continuous suspected flame and suspected smoke regions.
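The closing operation of step S4 can be sketched without any imaging library; the edge padding, which keeps the image border from being eroded away, is an implementation choice of this sketch, not something the patent specifies:

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(mask.astype(bool), pad, mode="edge")
    h, w = mask.shape
    return np.array([[p[y:y + k, x:x + k].any() for x in range(w)]
                     for y in range(h)])

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(mask.astype(bool), pad, mode="edge")
    h, w = mask.shape
    return np.array([[p[y:y + k, x:x + k].all() for x in range(w)]
                     for y in range(h)])

def close_mask(mask, k=3):
    """Morphological closing = dilation then erosion; fills small holes."""
    return erode(dilate(mask, k), k)
```

A one-pixel hole inside a suspected region survives dilation (the hole is filled) and the subsequent erosion restores the region's outline.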
Further, in the step S5, the extracting of the dynamic and static characteristics of the suspected areas of the flame and the smoke respectively includes:
s51, extract the dynamic and static characteristic parameters of the flame from the suspected flame region, specifically the edge complexity, randomness, similarity, disorder change rate and jitter frequency of the suspected flame region, denoted E, ΔA, θ, ω and p respectively; the specific calculations are as follows:
static characteristics of A
Edge complexity
The flame edge differs between consecutive frames and changes greatly, and its complexity differs from that of other interfering objects, so it can be used as a feature to distinguish flame from other objects. It is defined as follows:
E = (1/k) · Σ_{j=1}^{k} (P_j / A_j)
where E is the average edge complexity of the k connected suspected flame regions in the current frame, a larger E indicating higher edge complexity; P_j is the perimeter of the j-th connected suspected flame region in the frame, represented by the number of its edge pixels; and A_j is the area of that region, represented by the number of pixels it contains;
b dynamic characteristics
1 degree of randomness
The flame changes from frame to frame with a certain degree of randomness, so the difference between two consecutive frames can be measured to represent this characteristic. The calculation formula is:
ΔA = |A_t − A_{t−1}|
where A_t and A_{t−1} are the pixel counts of the candidate regions in frames t and t−1 respectively, and ΔA is the absolute pixel-count difference between the candidate regions of two consecutive frames;
2 degree of similarity
The shape of the flame looks irregular but shows a certain similarity over a short time interval, with adjacent frames changing only within a certain range; this differs from other moving light sources or interferents of similar color. The description formula is:
θ = (F_t ∩ F_{t−1}) / (F_t ∪ F_{t−1})
where F_t and F_{t−1} are the suspected flame regions in frames t and t−1 respectively; F_t ∩ F_{t−1} is the area of the overlap of the suspected flame regions in the two adjacent frames, and F_t ∪ F_{t−1} is the area of their union;
3 rate of change of disorder degree
During combustion the flame shape changes constantly under the influence of factors such as the combustion material and airflow. The disorder change rate describing this change is:
FD_t(x, y) = F_t(x, y) − F_{t−1}(x, y)
FD_{t+1}(x, y) = F_{t+1}(x, y) − F_t(x, y)
ω = (1/N) · Σ_{i=1}^{N} |V_{t+1}(i) − V_t(i)|
where ω is the disorder change rate, F_t(x, y) is the suspected flame region of frame t, FD_t(x, y) is the disorder of that region, V_t(i) is the value of the i-th pixel of FD_t(x, y), and N is the number of pixels in the suspected flame region;
4 jitter frequency
The jitter frequency is the dynamic feature with high reliability, strong anti-interference ability and easy description. The main idea of jitter-frequency detection is to accumulate the frame-difference results of adjacent frames over a certain period; the jitter frequency is calculated as follows:
p_i = Σ_{t=2}^{T} |D_i(t) − D_i(t−1)|
p = (1/N) · Σ_{i=1}^{N} p_i
where T is the statistics period, D_i(t) is the value of the i-th pixel of the suspected flame region in frame t, N is the number of pixels in the region, p_i is the accumulated difference of the pixel values at pixel i over the T frames, and p is the jitter frequency;
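Two of the flame dynamics, the similarity θ (an intersection-over-union of consecutive masks) and the jitter frequency p, can be sketched as follows; the exact normalizations are plausible readings of the text rather than the patent's definitive formulas:

```python
import numpy as np

def similarity(Ft, Ft1):
    """theta: overlap area of consecutive suspected-flame masks over their union."""
    inter = np.logical_and(Ft, Ft1).sum()
    union = np.logical_or(Ft, Ft1).sum()
    return inter / union if union else 0.0

def jitter_frequency(frames):
    """p: mean over pixels of the accumulated frame-to-frame difference."""
    diffs = [np.abs(frames[t].astype(int) - frames[t - 1].astype(int))
             for t in range(1, len(frames))]
    per_pixel = np.sum(diffs, axis=0)   # p_i for every pixel
    return per_pixel.mean()             # p
```

A flame region drifting slightly between frames keeps θ well above 0 while fast-flickering pixels drive p up.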
s52, after obtaining the suspected smoke area, extracting and analyzing the dynamic and static characteristic parameters of the smoke, wherein the parameters specifically comprise the area growth value, the circularity and the displacement of the smoke area, and the specific calculation methods of the characteristics are as follows:
1 area growth analysis
Smoke spreads and grows after it appears, so the pixel count of the candidate region shows an increasing trend. The calculation formula is:
g = (1/T) · Σ_{t=1}^{T} (S_t − S_{t−1}) / S_{t−1}
where T is the statistics period, S_t and S_{t−1} are the areas of the smoke candidate region in frames t and t−1, and g is the average growth rate;
2 degree of circularity analysis
The circularity describes the complexity of an object's shape: it is lowest, with value 1, when the object is a circle, and becomes larger as the shape becomes more complex. The circularity is calculated as:
C_t = L_t² / (4π · S_t)
where L_t is the perimeter of the smoke candidate region in frame t and S_t is its area;
3 degree of displacement analysis
After smoke appears, its shape and area change constantly, but its center position and main body remain within a certain region. Considering the change of the center position alone is therefore insufficient; the dynamic change of the area must also be taken into account, so the area overlap degree is a more reasonable description. That is, the area overlap of the same suspected target region between adjacent frames is studied, expressed mathematically as:
O_j = (S(t, j) + S(t − 1, j)) / (2 · S_m)
where S(t, j) and S(t − 1, j) are the areas of the j-th candidate region in frames t and t−1 respectively, and S_m is the area of the minimum bounding rectangle containing the j-th candidate region in both frames;
finally, each characteristic parameter obtained is compared with its predefined threshold, giving a true or false output indicating whether it conforms to the smoke characteristics;
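The circularity and average growth rate of the smoke region are straightforward to compute; the sketch below follows the definitions implied by the text (circularity 1 for a circle, relative frame-to-frame growth):

```python
import numpy as np

def circularity(perimeter, area):
    """C = L^2 / (4 * pi * S): equals 1 for a circle, grows with shape complexity."""
    return perimeter ** 2 / (4 * np.pi * area)

def mean_growth_rate(areas):
    """g: average relative frame-to-frame growth of the candidate-region area."""
    rates = [(areas[t] - areas[t - 1]) / areas[t - 1]
             for t in range(1, len(areas))]
    return sum(rates) / len(rates)
```

For a perfect circle of radius r, perimeter 2πr and area πr² give exactly C = 1, the minimum.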
further, in step S6 the extracted flame feature values are used as the input of the SVM classifier to judge whether flame is contained, while the extracted smoke features are evaluated separately and the results are input to the logical operator to judge whether smoke is contained. The specific steps are:
s61, the SVM classifier is faster than other classifiers and neural networks, achieves good results on small training sets, and has excellent generalization ability, so it is adopted to classify flame images. The parameters of the SVM are first optimized and training samples are collected to train the SVM model; the flame feature values then serve as the SVM input to judge whether flame is present in the video. The input feature vector is x_i = [E, ΔA, θ, ω, p], and the output of the SVM is:
f(x) = sgn( Σ_{e=1}^{n} w_e · K(x, x_e) + b )
where the weights w_e = α_e · y_e, K(x, x_e) is the kernel function, and the Lagrange multipliers α_e satisfy the constraints
Σ_{e=1}^{n} α_e · y_e = 0, α_e ≥ 0, e = 1, 2, …, n.
The kernel is the radial basis function:
K(x_e, x_f) = exp(−γ · ||x_e − x_f||²)
where ||x_e − x_f||² is the squared Euclidean distance between two feature vectors and γ is a parameter of the kernel;
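The SVM decision function f(x) = sgn(Σ α_e y_e K(x, x_e) + b) with an RBF kernel can be evaluated directly; the toy support vectors, multipliers and γ below are illustrative, not a trained flame model:

```python
import numpy as np

def rbf(xe, xf, gamma=0.5):
    """Radial basis kernel K(x_e, x_f) = exp(-gamma * ||x_e - x_f||^2)."""
    xe, xf = np.asarray(xe, float), np.asarray(xf, float)
    return np.exp(-gamma * np.sum((xe - xf) ** 2))

def svm_decision(x, support_vectors, alpha, labels, b=0.0, gamma=0.5):
    """f(x) = sgn(sum_e alpha_e * y_e * K(x, x_e) + b)."""
    s = sum(a * y * rbf(x, xe, gamma)
            for a, y, xe in zip(alpha, labels, support_vectors))
    return 1 if s + b >= 0 else -1
```

In practice the multipliers α_e and the bias b come from training on the collected flame and non-flame samples; here they are hand-set only to show the decision rule.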
s62, the true/false results obtained by comparing the smoke growth value, circularity and displacement with their thresholds are input to the logical operator, which judges whether smoke is present in the image. The logical operator can flexibly select and combine logical operations such as AND, OR, NOT and XOR over the three results; using different operation modes for different detection scenes adjusts the sensitivity of the detection algorithm and increases its generalization ability. For example:
for highly sensitive areas such as petrochemical storage, an OR gate can be used, so that smoke is reported as soon as the suspected smoke region meets any one condition;
for monitoring general production and living areas, the image is judged to contain smoke when at least any two of the conditions are met;
in places with routine hot work and dust, such as workshops, an AND gate can combine the outputs of the three judgment conditions, so that smoke is reported only when all three conditions are met simultaneously.
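The scene-dependent logical operator amounts to choosing how the three boolean feature checks are combined; the mode names below are illustrative:

```python
def smoke_decision(growth_ok, circular_ok, stable_ok, mode="majority"):
    """Combine the three boolean smoke-feature checks per deployment scenario."""
    votes = [growth_ok, circular_ok, stable_ok]
    if mode == "or":         # high-sensitivity sites, e.g. petrochemical storage
        return any(votes)
    if mode == "and":        # workshops with routine hot work and dust
        return all(votes)
    return sum(votes) >= 2   # general areas: at least any two conditions
```
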
Further, the step S7 inputs the image sequence containing flame or smoke into the fire discriminator to determine whether a fire is formed, and performs different processes, including:
s71, obtain two consecutive frames judged to contain flame or smoke and calculate the area of each corresponding region, M_a and M_{a−1}, where M_a and M_{a−1} are the areas of the corresponding regions in the current frame and the previous frame respectively;
s72, calculating the area growth rate beta of the flame or smoke between two continuous frames, wherein the calculation formula is as follows:
β = (M_a − M_{a−1}) / M_{a−1}
s73, if the area growth rate β is greater than the threshold G_T, the variable Count, whose initial value is 0, is incremented by 1; otherwise it is reset. The calculation formula is:
Count = Count + 1, if β > G_T
Count = 0, otherwise
s74, when the growth rate exceeds the threshold G_T for h consecutive frames, i.e. when Count equals h, it is determined that the monitored site contains a fire, and a fire alarm is raised at the monitored site; otherwise only a fire prompt is issued at the monitoring background. The growth-rate threshold G_T takes different values for flame and for smoke.
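The S71 to S74 discriminator reduces to a running counter over the per-frame growth rate β; the threshold and window values below are illustrative, not the patent's calibrated ones:

```python
def fire_confirmed(areas, G_T=0.05, h=5):
    """Return True once h consecutive frames show growth rate beta > G_T;
    any non-growing frame resets the counter."""
    count = 0
    for t in range(1, len(areas)):
        beta = (areas[t] - areas[t - 1]) / areas[t - 1]
        count = count + 1 if beta > G_T else 0
        if count >= h:
            return True
    return False
```

A steadily growing region triggers the on-site alarm; a static or flickering one only produces the background prompt.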
The invention has the following beneficial effects:
the invention provides a fire video detection and early warning method based on image multi-feature fusion, which is characterized in that a fire image is processed, and a plurality of features are extracted and analyzed on the basis of a suspected fire area, so that flame and smoke shown in a fire are detected at the same time, the method is suitable for detecting both open fire conditions and smoldering fire conditions of smoke, and has better application range and reliability;
2, denoising an original image through median filtering, improving the image quality, and combining a foreground extraction method based on gradient motion historical image improvement to achieve the purposes of improving the motion region extraction effect, removing most useless pixels and improving the calculation speed;
3, the method further detects the motion area by combining a multi-color space and establishing a color model rule, fills small holes and gaps by adopting morphological closing operation, smoothes and highlights a target area to obtain a complete suspected flame or smoke area, eliminates the motion area which does not accord with a color model, further reduces the detection range, obtains the suspected flame or smoke area as a candidate area, and ensures the extraction and judgment of later dynamic and static characteristics, thereby reducing the false alarm rate;
4, by researching the characteristic engineering, the invention adopts a plurality of fire characteristics which are not easy to be interfered, and optimizes the mathematical calculation process of the characteristics, so that the characteristics are more concise and effective, the calculation time is shortened, and the anti-interference capability of the detection method is improved;
the suspected area is further judged through the SVM classifier, so that the method has excellent generalization capability, can obtain better detection accuracy and shorter detection time than other algorithms on a small sample training set, and is more suitable for a video monitoring system;
the logic arithmetic unit provided by the invention can change the sensitivity of the detection method according to different scenes by calculating the judgment results of multiple characteristics of smoke through the combination of different logic gates, thereby achieving better early warning effect in different scenes;
7 the fire judgment module provided by the invention provides a fire grading early warning mechanism, when the fire condition is judged to form a fire trend, the fire grading early warning mechanism alarms a monitoring field and informs personnel to evacuate orderly, otherwise, the fire condition early warning prompt is only carried out on management personnel at a background, and processing measures are taken to prevent the production and entertainment personnel in a monitoring area from unnecessarily hurry, even trample and other accidents.
Drawings
FIG. 1 is an overall flowchart of a fire video detection and early warning method based on image multi-feature fusion according to an embodiment of the present invention;
FIG. 2 is a motion region extraction image of an embodiment of the present invention;
FIG. 3 is a flow chart of the calculation of the fire determination module according to the embodiment of the present invention;
fig. 4 is a diagram illustrating an exemplary video sample detection result according to an embodiment of the present invention.
Detailed Description
For a better understanding of the present invention, it is described in detail below with reference to the accompanying drawings and the following embodiments. It should be understood that the described embodiments are only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by those skilled in the art, based on these embodiments and without inventive work, fall within the scope of protection of the present invention.
Example 1
As shown in fig. 1, the present embodiment provides a fire video detecting and early warning method based on image multi-feature fusion, which includes the following steps:
s1, carrying out denoising pretreatment on the obtained video image sequence by using a median filtering method;
s2, establishing a background model for the preprocessed image by using an improved method based on a gradient motion historical map to extract a motion region;
s3, extracting the pixels of the motion area by using the color models of flame and smoke respectively to obtain a suspected color area;
s4, performing a morphological closing operation on the suspected color area to obtain a smooth, continuous suspected area;
s5, extracting dynamic and static characteristics of the suspected area;
s6, inputting the extracted flame feature values into an SVM classifier to judge whether flame is contained, while computing the extracted smoke features and inputting the results to a logic operator to judge whether smoke is contained;
s7, inputting the continuous image sequence containing flame or smoke into a fire discriminator to determine whether a fire is forming, and performing different early-warning processes accordingly.
This embodiment works with an existing video monitoring system and can detect flame and smoke simultaneously in the monitored area, thereby providing fire early warning. Denoising preprocessing improves image quality; a background model then extracts the motion region, eliminating most useless pixels and greatly increasing computation speed. Through feature engineering, the adopted features are robust to noise and have convenient, direct mathematical descriptions, increasing detection speed and accuracy. The support vector machine classifier and a logic operator with adjustable detection sensitivity improve the generalization ability and efficiency of the detection method. Finally, the fire judgment method handles fire conditions in a graded manner, increasing the reliability of fire alarms and minimizing the harm caused by fire.
Example 2
The embodiment is further optimized based on embodiment 1, and specifically includes:
the step S1 of using median filtering on the obtained video image sequence specifically includes:
Traverse the gray values of the pixels of the initial image and replace each pixel's gray value with the median of the gray values in its 3 × 3 neighborhood; this effectively removes noise in the image, particularly salt-and-pepper noise, and improves image quality.
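As an illustrative sketch (not part of the original disclosure), the 3 × 3 median filtering of step S1 can be written in Python as follows; the function name median_filter_3x3 is hypothetical, and border pixels are simply left unchanged here:

```python
def median_filter_3x3(img):
    """Replace each interior pixel with the median of the gray values
    in its 3x3 neighborhood (border pixels are left unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = sorted(img[j][i] for j in range(y - 1, y + 2)
                                     for i in range(x - 1, x + 2))
            out[y][x] = neigh[4]  # median of the 9 neighborhood values
    return out

# A single salt-noise pixel (255) in a flat gray image is removed.
img = [[10] * 5 for _ in range(5)]
img[2][2] = 255
filtered = median_filter_3x3(img)
print(filtered[2][2])  # 10
```

In practice a library routine such as OpenCV's medianBlur would be used; the loop above only illustrates the neighborhood-median idea.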
In step S2, a motion region is extracted using a method based on a gradient motion history map; the extraction result is shown in fig. 2, where white is the motion region and black is the background. The method specifically includes the following steps:
s21 first calculates a binary gradient map BGI (x, y) by the following formula:
BGI(x, y) = BGI_x(x, y) | BGI_y(x, y)
wherein BGI_x(x, y) and BGI_y(x, y) are the images obtained by binarizing the gradient maps in the x-axis and y-axis directions respectively, the gradient maps being calculated from the median-filtered gray-scale image;
s22 calculates a gradient motion history map GMHI (x, y), which records motion history information of each pixel, as follows:
GMHI(x, y) = { T, if BGI(x, y) = 1; max(0, GMHI(x, y) − 1), otherwise }
wherein t is the current time and T is a preset time interval, expressed in frames and set to 10, so that only the motion history of the most recent 10 frames is recorded;
s23 calculates the effective motion map EMI (x, y) by the following formula:
EMI(x, y) = { 1, if min < MAX(x, y) − MIN(x, y) < max; 0, otherwise }
wherein MAX(x, y) and MIN(x, y) are the maximum and minimum GMHI values in the 5 × 5 neighborhood of pixel (x, y) in the gradient motion history map, and min and max are preset constants, set to 0 and 10 respectively. When the difference MAX(x, y) − MIN(x, y) is greater than max, pixel (x, y) is a stationary pixel or a pixel on a background edge; when the difference is less than min, it is likewise a stationary pixel or a background-interior pixel. The effective motion map therefore records the motion region.
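As an illustrative sketch (not part of the original disclosure), the per-frame GMHI update of step S22 — stamp moving pixels with T and let the rest decay toward 0 — can be written in Python as follows; the function name update_gmhi and the exact decay-by-one recursion are assumptions based on the description above:

```python
def update_gmhi(gmhi, bgi, T=10):
    """One GMHI update step: pixels flagged as moving in the binary
    gradient map bgi are stamped with T; all others decay by 1
    toward 0, so only the last T frames of motion are remembered."""
    return [[T if b else max(0, g - 1) for g, b in zip(grow, brow)]
            for grow, brow in zip(gmhi, bgi)]

gmhi = [[0, 0], [0, 0]]
gmhi = update_gmhi(gmhi, [[1, 0], [0, 0]])  # pixel (0,0) moves
gmhi = update_gmhi(gmhi, [[0, 0], [0, 0]])  # then stops for one frame
print(gmhi)  # [[9, 0], [0, 0]]
```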
In the step S3, a color model is established for the flame and smoke, and a suspected color area is extracted from the moving area, and the specific steps include:
s31, the flame in its early stage has a low temperature, so its color is mainly distributed over the red, orange and yellow regions, with red as the dominant component; the color features of the flame are therefore described based on the RGB color space. To prevent luminance information from interfering with color information — the luminance component in flame images being large — the HSV and YCbCr color spaces are used to further constrain the luminance and color features. The flame color model rules are thus based on the three color spaces RGB, HSV and YCbCr, and the rules are combined by logical AND to obtain the suspected flame color region. The specific rules include:
rule 1: r (x, y)>RT
Rule 2: r (x, y) ≧ G (x, y) > B (x, y)
Rule 3: s (x, y) ≥ 255-R (x, y)) ST/RT
Rule 4: h1<H(x,y)<H2,S1<S(x,y)<S2,V1<V(x,y)<V2
Rule 5: y (x, Y) is more than or equal to Cr (x, Y) is more than or equal to Cb (x, Y)
Rule 6: y (x, Y)>Ymean,Cb(x,y)<Cbmean,Cr(x,y)>Crmean
Rule 7: cr (x, y) -Cb (x, y) ≥ lambda
In the formulas, R(x, y), G(x, y) and B(x, y) respectively represent the red, green and blue components of pixel (x, y) in RGB space; R_T is the threshold of the red component, set to 115 in this embodiment. H(x, y), S(x, y) and V(x, y) are respectively the hue, saturation and value components of pixel (x, y) in HSV color space; S_T is the saturation value in HSV space corresponding to a red component equal to R_T, set to 55 in this embodiment. H_1, H_2, S_1, S_2, V_1, V_2 are preset thresholds for hue, saturation and value, set to 0.02, 0.30, 0.20, 1.00, 0.98 and 1.00 respectively in this embodiment. Y(x, y), Cb(x, y) and Cr(x, y) are respectively the luminance, blue-difference and red-difference components of pixel (x, y) in YCbCr space; λ is a non-negative threshold on the difference of the two components Cr(x, y) and Cb(x, y), set to 40. Y_mean, Cb_mean and Cr_mean are respectively the averages of all pixels of the current image on the Y, Cb and Cr components;
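As an illustrative sketch (not part of the original disclosure), the RGB-space part of the flame color model (rules 1–3) can be checked per pixel as follows in Python; the function name flame_rgb_rules is hypothetical, the saturation S(x, y) is approximated from RGB via the standard HSV conversion scaled to 0–255, and the thresholds R_T = 115, S_T = 55 follow the embodiment:

```python
def flame_rgb_rules(r, g, b, RT=115, ST=55):
    """Rules 1-3 of the flame colour model on one pixel (sketch).
    Saturation is derived from RGB as 255*(max-min)/max."""
    mx, mn = max(r, g, b), min(r, g, b)
    s = 0 if mx == 0 else 255 * (mx - mn) / mx
    rule1 = r > RT                       # red dominates a threshold
    rule2 = r >= g > b                   # R >= G > B ordering
    rule3 = s >= (255 - r) * ST / RT     # saturation bound
    return rule1 and rule2 and rule3

print(flame_rgb_rules(230, 160, 40))  # typical flame orange -> True
print(flame_rgb_rules(60, 120, 200))  # sky blue -> False
```

The full model would AND these with the HSV and YCbCr rules (4–7) after the corresponding color-space conversions.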
s32, smoke is the main sign of early-stage and smoldering fires and usually appears gray, so its R, G, B component values are approximately equal; because of differences in combustion products, its brightness also varies within corresponding intervals, dividing into light gray and dark gray. The adopted smoke color model rules are based on the RGB and HSI spaces, and the rules are combined by logical AND to obtain the suspected color region. The rules specifically include:
rule 1: |R(x, y) − G(x, y)| ≤ δ_1
rule 2: |R(x, y) − B(x, y)| ≤ δ_2
rule 3: |G(x, y) − B(x, y)| ≤ δ_3
rule 4: L_1 ≤ I(x, y) ≤ L_2 || D_1 ≤ I(x, y) ≤ D_2
wherein δ_1, δ_2, δ_3 are thresholds for the absolute differences between the R, G and B components, set to 16, 48 and 34 in this embodiment; L_1, L_2, D_1, D_2 are the upper and lower bound constants of the intensity I(x, y) for light gray and dark gray respectively, set to 150, 220, 67 and 130 in this embodiment.
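As an illustrative sketch (not part of the original disclosure), the gray-tone smoke rules can be checked per pixel as follows in Python; the function name smoke_rules is hypothetical, and the intensity I(x, y) is taken as the RGB mean, as in the HSI model, with thresholds from the embodiment:

```python
def smoke_rules(r, g, b, d=(16, 48, 34), light=(150, 220), dark=(67, 130)):
    """Smoke colour rules 1-4 on one pixel (sketch): near-grey RGB
    plus intensity inside the light-grey or dark-grey band."""
    i = (r + g + b) / 3  # HSI intensity
    near_grey = (abs(r - g) <= d[0] and abs(r - b) <= d[1]
                 and abs(g - b) <= d[2])
    in_band = light[0] <= i <= light[1] or dark[0] <= i <= dark[1]
    return near_grey and in_band

print(smoke_rules(180, 185, 190))  # light grey -> True
print(smoke_rules(230, 60, 40))    # saturated red -> False
```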
In the step S4, a morphological closing operation — dilation followed by erosion — is performed on the suspected color areas, filling small holes and seams, so as to obtain smooth, continuous suspected flame and smoke areas.
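As an illustrative sketch (not part of the original disclosure), the closing operation of step S4 can be written in pure Python on a binary mask as follows; the function name closing and the 3 × 3 square structuring element are assumptions, and a real implementation would use a library routine such as OpenCV's morphologyEx:

```python
def closing(mask):
    """Morphological closing (dilate then erode) with a 3x3 square
    structuring element; fills one-pixel holes in a binary mask."""
    h, w = len(mask), len(mask[0])
    def neigh(m, y, x):
        return [m[j][i] for j in range(max(0, y - 1), min(h, y + 2))
                        for i in range(max(0, x - 1), min(w, x + 2))]
    dil = [[max(neigh(mask, y, x)) for x in range(w)] for y in range(h)]  # dilation
    ero = [[min(neigh(dil, y, x)) for x in range(w)] for y in range(h)]   # erosion
    return ero

m = [[1, 1, 1],
     [1, 0, 1],   # one-pixel hole
     [1, 1, 1]]
print(closing(m))  # [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```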
In the step S5, the extraction of the dynamic and static characteristics of the suspected areas of the flame and the smoke is performed respectively, and the specific steps include:
s51, extracting dynamic and static characteristic parameters of the flame from the suspected flame region, specifically including the edge complexity, randomness, similarity, disorder change rate and jitter frequency of the flame region, denoted E, ΔA, θ, ω and p respectively; the specific calculation methods are as follows:
A. Static characteristics
Edge complexity
The flame edge differs between consecutive frames and varies greatly, and its edge complexity differs from that of other interfering objects, so it can be used as a feature to distinguish flames from other objects. It is defined as follows:
E = (1/k) · Σ_{j=1}^{k} (P_j / A_j)
wherein E is the average edge complexity of the k continuous flame regions in the current frame (a larger E indicates higher edge complexity), P_j is the perimeter of the j-th continuous suspected flame region in the frame, represented by the number of its edge pixels, and A_j is the area of that region, represented by the number of pixels it contains;
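As an illustrative sketch (not part of the original disclosure), reading the edge complexity as the mean perimeter-to-area ratio over the k regions (an interpretation of the formula image above), it can be computed in Python as follows; the function name edge_complexity is hypothetical:

```python
def edge_complexity(regions):
    """Average edge complexity E over k regions, each given as a
    (perimeter_pixels, area_pixels) pair; higher E = more ragged edge."""
    return sum(p / a for p, a in regions) / len(regions)

# A jagged flame-like region scores higher than a compact one of equal area.
print(edge_complexity([(120, 400)]))  # 0.3
print(edge_complexity([(40, 400)]))   # 0.1
```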
B. Dynamic characteristics
1. Randomness
The flame changes from one frame to another frame with a certain degree of randomness, so the difference between two continuous frames can be counted to represent the characteristic, and the calculation formula is as follows:
ΔA = |A_t − A_{t−1}|
wherein A_t and A_{t−1} respectively denote the pixel counts of the candidate regions in frames t and t−1, and ΔA is the absolute pixel-count difference between the candidate regions of two consecutive frames;
2. Similarity
The shape of the flame looks irregular but shows a certain similarity over short time intervals, with adjacent frames changing within a limited range, which distinguishes it from other moving light sources or similarly colored interferers. The description formula is:
θ = (F_t ∩ F_{t−1}) / (F_t ∪ F_{t−1})
wherein F_t and F_{t−1} are the suspected flame regions in frames t and t−1 respectively, F_t ∩ F_{t−1} is the area of overlap of the suspected flame regions in the two adjacent frames, and F_t ∪ F_{t−1} is the area of their union;
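As an illustrative sketch (not part of the original disclosure), the similarity θ — an intersection-over-union style measure on the areas — can be computed in Python as follows, given the overlap and union areas in pixels; the function name similarity is hypothetical:

```python
def similarity(area_inter, area_union):
    """Similarity theta between consecutive-frame flame regions:
    overlap area divided by union area (0 = no overlap, 1 = identical)."""
    return area_inter / area_union if area_union else 0.0

print(similarity(300, 400))  # 0.75 - region stable across frames
print(similarity(20, 400))   # 0.05 - likely a fast-moving interferer
```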
3. Disorder change rate
In the process of flame combustion, the flame shape can be changed all the time due to the influence of factors such as combustion substances, airflow and the like, and a disorder degree change rate formula for describing the change is as follows:
FDt(x,y)=Ft(x,y)-Ft-1(x,y)
FDt+1(x,y)=Ft+1(x,y)-Ft(x,y)
ω = ( Σ_{i=1}^{N} |V_{t+1}(i) − V_t(i)| ) / N
wherein ω denotes the disorder change rate, F_t(x, y) denotes the suspected flame region in frame t, FD_t(x, y) denotes the disorder of the region at time t, V_t(i) denotes the value of the i-th pixel of FD_t(x, y), and N denotes the number of pixels in the suspected flame region;
4. Jitter frequency
Among the dynamic characteristics, the jitter frequency has high reliability, strong anti-interference capability and an easy description. The main idea of jitter-frequency detection is to accumulate the frame-difference results of adjacent frames over a certain period. The jitter frequency is calculated as follows:
p_i = Σ_{t=2}^{T} |D_i(t) − D_i(t−1)|
p = (1/N) · Σ_{i=1}^{N} p_i
where T is the statistical period, set to 10, D_i(t) is the value of the i-th pixel of the suspected flame region in frame t, N is the number of pixels in the region, p_i is the accumulated pixel-value difference of pixel i over the T frames, and p is the jitter frequency;
s52, after obtaining the suspected smoke area, extracting and analyzing the dynamic and static characteristic parameters of the smoke, wherein the parameters specifically comprise the area growth value, the circularity and the displacement of the smoke area, and the specific calculation methods of the characteristics are as follows:
1. Area growth analysis
Smoke spreads after it appears, so the pixel count of the candidate region exhibits a growth characteristic. The calculation formula is:
g = (1/T) · Σ_{t=1}^{T} (S_t − S_{t−1}) / S_{t−1}
where T is the statistical period, set to 10, S_t and S_{t−1} are the areas of the smoke candidate regions in frames t and t−1, computed as pixel counts, and g is the average growth rate; the lower limit of its valid range is set to 0.005 and the upper limit to 0.1;
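As an illustrative sketch (not part of the original disclosure), the average growth rate g over a window of per-frame candidate areas can be computed in Python as follows; the function name mean_growth_rate is hypothetical, and g is taken as the mean of the frame-to-frame relative growths:

```python
def mean_growth_rate(areas):
    """Average relative area growth g over a window of frame areas
    (pixel counts); smoke is suspected when g falls in the
    0.005-0.1 band used by the embodiment."""
    rates = [(b - a) / a for a, b in zip(areas, areas[1:])]
    return sum(rates) / len(rates)

g = mean_growth_rate([100, 102, 105, 108, 111])
print(0.005 <= g <= 0.1)  # True - steady growth typical of smoke
```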
2. Circularity analysis
Circularity describes the complexity of an object's shape: a circle has the lowest circularity, with value 1, and the more complex the shape, the higher its circularity. The circularity formula is:
c = L_t² / (4π · S_t)
wherein L_t is the perimeter of the smoke candidate region in frame t, expressed as the number of region edge pixels, and S_t is the area of the smoke candidate region in frame t, expressed as its pixel count; the valid range of c is set to 3–40, and the output result is true within this range;
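As an illustrative sketch (not part of the original disclosure), the standard circularity formula c = L² / (4πS) that this passage describes can be written in Python as follows; the function name circularity is hypothetical:

```python
import math

def circularity(perimeter, area):
    """Circularity c = P^2 / (4*pi*A): exactly 1 for a circle,
    larger for more complex (more ragged) shapes."""
    return perimeter ** 2 / (4 * math.pi * area)

r = 10.0
print(round(circularity(2 * math.pi * r, math.pi * r * r), 3))  # 1.0
# A ragged smoke plume with the same area but 3x the perimeter:
print(round(circularity(6 * math.pi * r, math.pi * r * r), 3))  # 9.0
```

A value of 9 falls inside the 3–40 smoke band of the embodiment, while the perfect circle (c = 1) does not.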
3. Displacement analysis
After smoke appears, its shape and area change constantly, but its center position and main body remain within a certain region. Considering the change of the center position alone is therefore insufficient; the dynamic change of the area must also be considered, so the area overlap degree — the overlap of the same suspected target region between adjacent frames — is a more reasonable expression. The mathematical expression is:
d = (S(t, j) ∩ S(t−1, j)) / S_m
wherein S(t, j) and S(t−1, j) respectively denote the j-th candidate region in frames t and t−1, and S_m denotes the area of the smallest bounding rectangle containing the j-th candidate region in both frames; areas are replaced by pixel counts in the calculation, and when d > 0.5, the region is judged to be smoke and the output is true;
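As an illustrative sketch (not part of the original disclosure), the displacement measure d — overlap of the same candidate region in adjacent frames over the area of their joint bounding rectangle — reduces to a ratio of pixel counts; the function name overlap_degree is hypothetical:

```python
def overlap_degree(inter_area, min_rect_area):
    """Displacement measure d: pixels shared by the candidate region in
    two adjacent frames, over the pixel count of the smallest rectangle
    bounding both; d > 0.5 reads as a slowly drifting smoke region."""
    return inter_area / min_rect_area

print(overlap_degree(360, 500) > 0.5)  # True  - region mostly stays put
print(overlap_degree(100, 500) > 0.5)  # False - region jumped, not smoke
```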
Finally, the acquired characteristic parameters are compared with their predefined thresholds to obtain true/false outputs indicating whether the smoke characteristics are satisfied.
In step S6, the extracted flame feature parameters are input to an SVM classifier to determine whether flame is contained; in addition, the extracted smoke features are analyzed and the analysis results are input to a logic operator to determine whether smoke is contained. The specific steps are as follows:
s61, the SVM classifier is faster than other classifiers and neural networks, achieves good detection results on small training sample sets, and has excellent generalization ability, so it is adopted to classify flame images. First, training samples are collected to train the SVM model; then the flame feature parameters are used as the SVM input to judge whether flame is present in the video. The input feature vector is x_i = [E, ΔA, θ, ω, p]. The SVM output is:
f(x) = sgn( Σ_{e=1}^{n} w_e · K(x_e, x) + b )
wherein the weight w_e = α_e · y_e, K(x_i, x_j) is the kernel function, and α_e are Lagrange multipliers subject to the constraints
Σ_{e=1}^{n} α_e y_e = 0
α_e ≥ 0, e = 1, 2, …, n. The kernel function is a radial basis kernel with the formula:
K(xe,xf)=exp(-γ||xe-xf||2)
wherein ‖x_e − x_f‖² is the squared Euclidean distance between the two feature vectors and γ is the kernel parameter; a good classification result is obtained when γ = 0.008;
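As an illustrative sketch (not part of the original disclosure), the radial basis kernel with γ = 0.008 can be written in Python as follows; the function name rbf_kernel and the example feature values are hypothetical, and a full classifier would be trained with a library such as scikit-learn:

```python
import math

def rbf_kernel(xe, xf, gamma=0.008):
    """Radial basis kernel K(xe, xf) = exp(-gamma * ||xe - xf||^2),
    with gamma = 0.008 as in the embodiment."""
    sq = sum((a - b) ** 2 for a, b in zip(xe, xf))
    return math.exp(-gamma * sq)

x1 = [0.3, 5.0, 0.75, 0.2, 8.0]   # hypothetical [E, dA, theta, omega, p]
print(rbf_kernel(x1, x1))          # 1.0 - identical vectors
print(rbf_kernel(x1, [0.1, 50.0, 0.1, 0.9, 1.0]) < 1.0)  # True
```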
s62, the true/false results obtained by comparing the smoke growth value, circularity and displacement with their thresholds are input to a logic operator, which judges whether smoke is present in the image. The logic operator can flexibly select and combine logical operations such as AND, OR, NOT and XOR over the three results; different operation modes are adopted for different detection scenes to adjust the sensitivity of the detection algorithm and increase its generalization ability. For example:
for highly sensitive areas such as petrochemical storage, an OR gate can be used, so that smoke is judged to be present as soon as a suspected smoke region meets any one condition;
for monitoring general production and living areas, the image is judged to contain smoke when at least any two of the conditions are met;
in places such as workshops with hot work and dust, an AND gate can be applied to the output results of the three judgment conditions, and smoke is judged to be present only when all three conditions are met simultaneously.
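As an illustrative sketch (not part of the original disclosure), the three combination modes above can be expressed in Python as follows; the function name smoke_decision and the mode labels are hypothetical:

```python
def smoke_decision(growth_ok, circular_ok, drift_ok, mode="majority"):
    """Logic-operator stage over the three smoke tests.
    'or'       - high-sensitivity sites (petrochemical storage),
    'and'      - sites with hot work and dust,
    'majority' - general production/living areas (any two of three)."""
    votes = [growth_ok, circular_ok, drift_ok]
    if mode == "or":
        return any(votes)
    if mode == "and":
        return all(votes)
    return sum(votes) >= 2  # majority: at least two conditions hold

print(smoke_decision(True, False, True, "majority"))  # True
print(smoke_decision(True, False, True, "and"))       # False
print(smoke_decision(False, False, True, "or"))       # True
```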
As shown in fig. 3, the step S7 inputs the image sequence containing flame or smoke into the fire discriminator to determine whether a fire is formed, and performs different processes, including:
s71, obtaining two consecutive frames judged to contain flame or smoke and calculating the area of each corresponding region, M_a and M_{a−1}, the areas of the corresponding regions in the current frame and the previous frame respectively, here expressed as the pixel counts of the regions;
s72, calculating the area growth rate beta of the flame or smoke between two continuous frames, wherein the calculation formula is as follows:
β = (M_a − M_{a−1}) / M_{a−1}
s73, if the area growth rate β is greater than zero and exceeds the threshold G_T, the variable Count (initial value 0) is increased by 1; the calculation formula is:
Count = { Count + 1, if β > G_T; 0, otherwise }
s74, when the growth rate exceeds the threshold G_T for five consecutive frames, i.e. Count = 5, the monitored site is judged to contain a fire and a fire alarm is raised at the site; otherwise, only a fire-warning prompt is issued to the monitoring background. The growth-rate threshold G_T differs for flame and smoke: for flame G_T is set to 0.18, and for smoke to 0.01.
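As an illustrative sketch (not part of the original disclosure), steps S71–S74 can be combined in Python as follows; the function name fire_alarm is hypothetical, with G_T = 0.18 (flame) and h = 5 consecutive frames as in the embodiment:

```python
def fire_alarm(areas, G_T=0.18, h=5):
    """Fire discriminator sketch: count consecutive frame pairs whose
    area growth rate beta = (M_a - M_{a-1}) / M_{a-1} exceeds G_T;
    h such pairs in a row trigger the on-site alarm."""
    count = 0
    for prev, cur in zip(areas, areas[1:]):
        beta = (cur - prev) / prev
        count = count + 1 if beta > G_T else 0  # reset on any slow frame
        if count >= h:
            return True
    return False

# Area grows >18% for five consecutive frame pairs -> alarm.
print(fire_alarm([100, 120, 145, 175, 215, 260]))  # True
print(fire_alarm([100, 101, 100, 102, 101, 103]))  # False
```

For smoke the same logic would run with G_T = 0.01.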
In this embodiment, the training set is derived from the fire video library of Bilkent University and the test set was collected from the Internet. The test results are shown in FIG. 4, where a is a flame video, b is a smoke video and e is a non-fire video with color interference; the fire regions are identified by the algorithm and their positions marked with rectangular boxes, while no detection is reported in the non-fire video.
The above description is only a preferred embodiment of the present invention and is not intended to limit it; the scope of the present invention is defined by the appended claims, and all structural changes made using the contents of the description and drawings of the present invention are intended to be embraced therein.

Claims (5)

1. A fire video detection and early warning method based on image multi-feature fusion is characterized by comprising the following steps:
step S1, acquiring a video image, and performing denoising pretreatment on the acquired video image sequence;
step S2, establishing a background model for the image after the denoising pretreatment by using an improved method based on a gradient motion historical map to extract a motion region;
in step S21, first, a binary gradient map BGI (x, y) is calculated as:
BGI(x, y) = BGI_x(x, y) | BGI_y(x, y)
wherein BGI_x(x, y) and BGI_y(x, y) are the images obtained by binarizing the gradient maps in the x-axis and y-axis directions respectively, the gradient maps being calculated from the gray-level images;
step S22, calculating a gradient motion history map GMHI (x, y), and recording motion history information of each pixel, wherein the calculation formula is as follows:
GMHI(x, y) = { T, if BGI(x, y) = 1; max(0, GMHI(x, y) − 1), otherwise }
wherein t represents the current frame;
step S23, calculating an effective motion map EMI (x, y), the calculation formula is as follows:
EMI(x, y) = { 1, if min < MAX(x, y) − MIN(x, y) < max; 0, otherwise }
wherein MAX(x, y) and MIN(x, y) are the maximum and minimum GMHI values in the m × m neighborhood of pixel (x, y) in the gradient motion history map, and min and max are preset constants; when the difference MAX(x, y) − MIN(x, y) is greater than max, pixel (x, y) is a stationary pixel or a pixel on a background edge, and when the difference is less than min, it is likewise a stationary pixel or a background-interior pixel;
step S3, extracting the pixels of the motion area by using the color models of flame and smoke respectively to obtain a suspected color area;
step S4, performing morphological closing operation on the color suspected area to obtain a smooth and continuous suspected area;
step S5, extracting dynamic and static characteristics of the suspected area;
the method comprises the following steps of respectively extracting dynamic and static characteristics of suspected areas of flame and smoke, and specifically comprises the following steps:
step S51, extracting dynamic and static characteristic parameters of the flame from the suspected flame region, specifically including the edge complexity, randomness, similarity, disorder change rate and jitter frequency of the flame region, denoted E, ΔA, θ, ω and p respectively; the specific calculation methods are as follows:
A. Static characteristics
The edge complexity is defined as follows:
E = (1/k) · Σ_{j=1}^{k} (P_j / A_j)
wherein E is the average edge complexity of the k continuous flame regions in the current frame (a larger E indicates higher edge complexity), P_j is the perimeter of the j-th continuous suspected flame region in the frame, and A_j is the area of that region;
B. Dynamic characteristics
The randomness formula is:
ΔA = |A_t − A_{t−1}|
wherein A_t and A_{t−1} respectively denote the pixel counts of the candidate regions in frames t and t−1, and ΔA is the absolute pixel-count difference between the candidate regions of two consecutive frames;
the similarity θ is formulated as:
θ = (F_t(x, y) ∩ F_{t−1}(x, y)) / (F_t(x, y) ∪ F_{t−1}(x, y))
wherein F_t(x, y) and F_{t−1}(x, y) indicate the suspected flame regions in frames t and t−1 respectively, F_t(x, y) ∩ F_{t−1}(x, y) is the area of overlap of the suspected flame regions in the two adjacent frames, and F_t(x, y) ∪ F_{t−1}(x, y) is the area of their union;
the disorder degree change rate formula is as follows:
FDt(x,y)=Ft(x,y)-Ft-1(x,y)
FDt+1(x,y)=Ft+1(x,y)-Ft(x,y)
ω = ( Σ_{i=1}^{N} |V_{t+1}(i) − V_t(i)| ) / N
wherein ω denotes the disorder change rate, F_t(x, y) denotes the suspected flame region in frame t, FD_t(x, y) denotes the disorder of the region at time t, V_t(i) denotes the pixel value of the i-th pixel of FD_t(x, y), and N denotes the number of pixels in the region;
the jitter frequency mode is as follows:
p_i = Σ_{t=2}^{T} |D_i(t) − D_i(t−1)|
p = (1/N) · Σ_{i=1}^{N} p_i
where T is the statistical period, D_i(t) is the value of the i-th pixel of the suspected flame region in frame t, N is the number of pixels in the region, p_i is the accumulated pixel-value difference of pixel i over the T frames, and p is the jitter frequency;
step S52, after the suspected smoke area is obtained, the dynamic and static characteristic parameters of the smoke are extracted and analyzed;
1. Area growth analysis
g = (1/T) · Σ_{t=1}^{T} (S_t − S_{t−1}) / S_{t−1}
where T is the statistical period, S_t and S_{t−1} are the areas of the smoke candidate regions in frames t and t−1, and g is the average growth rate; if g falls within a predefined range, the output of the area growth analysis is true;
2. Circularity analysis
c = L_t² / (4π · S_t)
wherein L_t is the perimeter of the smoke candidate region in frame t, expressed as the number of region edge pixels, S_t is the area of the smoke candidate region in frame t, and c is the circularity of the suspected smoke region; when the circularity is within a predefined range, the feature analysis output result is true;
3. Displacement analysis
d = (S(t, j) ∩ S(t−1, j)) / S_m
wherein S(t, j) and S(t−1, j) respectively denote the j-th candidate region in frames t and t−1, and S_m denotes the area of the smallest bounding rectangle containing the j-th candidate region in the two frames; areas are replaced by pixel counts in the calculation, and when d is greater than a predefined threshold, the region is judged to be smoke and the output is true;
finally, the obtained characteristic parameters are respectively compared with a predefined threshold value to obtain an output result whether the smoke characteristics are met, and the output result comprises true and false;
step S6, the extracted flame characteristic value is used as SVM classifier input to judge whether flame is contained, simultaneously, the extracted smoke characteristic is calculated and the result is input to a logic arithmetic unit to judge whether smoke is contained;
step S7, inputting the continuous image sequence containing flame or smoke into a fire discriminator to judge whether a fire is formed, thereby respectively making different early warning treatments;
step S71, acquiring the areas M_a and M_{a−1} of the flame or smoke region in two consecutive frames, M_a and M_{a−1} being the areas of the corresponding regions in the current frame and the previous frame respectively;
step S72, calculating the area growth rate beta of the flame or the smoke between two continuous frames, wherein the calculation formula is as follows:
β = (M_a − M_{a−1}) / M_{a−1}
step S73, if the area growth rate β is greater than zero and exceeds the threshold G_T, the variable Count (initial value 0) is increased by 1; the calculation formula is:
Count = { Count + 1, if β > G_T; 0, otherwise }
step S74, when the growth rate exceeds the threshold G_T for h consecutive frames, the monitored site is judged to contain a fire and a fire alarm is raised at the site; otherwise, only a fire-warning prompt is issued to the monitoring background, wherein the specific values of the growth-rate threshold G_T differ for flame and smoke.
2. The fire video detection and early warning method based on image multi-feature fusion as claimed in claim 1, wherein: in step S1, a median filtering method is used to perform denoising preprocessing on the obtained video image sequence.
3. The fire video detection and early warning method based on image multi-feature fusion as claimed in claim 2, characterized in that: in step S3, a color model is established for the flame and smoke, and a suspected color area is extracted from the moving area, and the specific steps include:
step S31, designing flame color model rules based on the RGB, HSV and YCbCr color spaces and combining the rules by logical AND to obtain the suspected flame color region, wherein the specific rules include:
rule 1: r (x, y)>RT
Rule 2: r (x, y) ≧ G (x, y) > B (x, y)
Rule 3: s (x, y) ≥ 255-R (x, y)). ST/RT
Rule 4: h1<H(x,y)<H2,S1<S(x,y)<S2,V1<V(x,y)<V2
Rule 5: y (x, Y) is more than or equal to Cr (x, Y) is more than or equal to Cb (x, Y)
Rule 6: y (x, Y)>Ymean,Cb(x,y)<Cbmean,Cr(x,y)>Crmean
Rule 7: cr (x, y) -Cb (x, y) ≥ lambda
In the formulas, R(x, y), G(x, y) and B(x, y) respectively represent the red, green and blue components of pixel (x, y) in RGB space; R_T is the threshold of the red component; H(x, y), S(x, y) and V(x, y) are respectively the hue, saturation and value components of pixel (x, y) in HSV color space; S_T is the saturation value in HSV space corresponding to a red component equal to R_T; H_1, H_2, S_1, S_2, V_1, V_2 are preset thresholds for hue, saturation and value; Y(x, y), Cb(x, y) and Cr(x, y) are respectively the luminance, blue-difference and red-difference components of pixel (x, y) in YCbCr space; λ is a non-negative threshold on the difference of the two components Cr(x, y) and Cb(x, y); and Y_mean, Cb_mean, Cr_mean are respectively the averages of all pixels of the current image on the Y, Cb and Cr components;
step S32, designing smoke color model rules based on the RGB and HSI spaces and combining the rules by logical AND to obtain the suspected color region, wherein the rules specifically include:
rule 1: |R(x, y) − G(x, y)| ≤ δ_1
rule 2: |R(x, y) − B(x, y)| ≤ δ_2
rule 3: |G(x, y) − B(x, y)| ≤ δ_3
rule 4: L_1 ≤ I(x, y) ≤ L_2 || D_1 ≤ I(x, y) ≤ D_2
wherein δ_1, δ_2, δ_3 are thresholds for the absolute differences between the R, G and B components, and L_1, L_2, D_1, D_2 are constant upper and lower bounds of the intensity I(x, y) for light gray and dark gray respectively.
4. The fire video detection and early warning method based on image multi-feature fusion as claimed in claim 3, characterized in that: in step S4, a morphological closing operation — dilation followed by erosion — is performed on the suspected color areas, filling small holes and seams to obtain smooth, continuous suspected flame and smoke areas.
5. The fire video detection and early warning method based on image multi-feature fusion as claimed in claim 4, characterized in that: in step S6, the extracted flame features are input to an SVM classifier to determine whether flame is present, and the extracted smoke features are analyzed with the analysis results input to a logic operator to determine whether smoke is present; the specific steps include:
step S61, firstly, determining the optimal parameters in the SVM, acquiring training samples to train the SVM model, then taking the characteristics of flame extraction as the input of the SVM, and judging whether flame exists in the video;
step S62, extracting the features of the suspected smoke area, comparing the calculated features with their thresholds, and inputting the resulting true/false results into a logic operator to judge whether the image contains smoke, wherein the logic operator computes over the judgment results of the three smoke features using combinations of AND, OR, NOT and XOR logic operations, and different logic combinations are adopted for different detection scenes to adjust the sensitivity of the detection algorithm and increase its generalization ability;
and step S63, when the SVM judges that the flame is contained or the output of the logic arithmetic unit is true, the monitored area is judged to contain the fire.
CN201910802918.0A 2019-08-28 2019-08-28 Fire disaster video detection and early warning method based on image multi-feature fusion Active CN110516609B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910802918.0A CN110516609B (en) 2019-08-28 2019-08-28 Fire disaster video detection and early warning method based on image multi-feature fusion

Publications (2)

Publication Number Publication Date
CN110516609A CN110516609A (en) 2019-11-29
CN110516609B true CN110516609B (en) 2022-04-22


Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127433B (en) * 2019-12-24 2020-09-25 深圳集智数字科技有限公司 Method and device for detecting flame
CN111274896B (en) * 2020-01-15 2023-09-26 深圳市守行智能科技有限公司 Smoke and fire recognition algorithm
CN111310566A (en) * 2020-01-16 2020-06-19 国网山西省电力公司电力科学研究院 Static and dynamic multi-feature fusion mountain fire detection method and system
CN111540155B (en) * 2020-03-27 2022-05-24 北京联合大学 Intelligent household fire detector
CN111666834A (en) * 2020-05-20 2020-09-15 哈尔滨理工大学 Forest fire automatic monitoring and recognizing system and method based on image recognition technology
CN111626188B (en) * 2020-05-26 2022-05-06 西南大学 Indoor uncontrollable open fire monitoring method and system
CN111797726A (en) * 2020-06-18 2020-10-20 浙江大华技术股份有限公司 Flame detection method and device, electronic equipment and storage medium
CN111882568B (en) * 2020-06-28 2023-09-15 北京石油化工学院 Fire image edge extraction processing method, terminal and system
CN111523528B (en) * 2020-07-03 2020-10-20 平安国际智慧城市科技股份有限公司 Strategy sending method and device based on scale recognition model and computer equipment
CN111986436B (en) * 2020-09-02 2022-12-13 成都视道信息技术有限公司 Comprehensive flame detection method based on ultraviolet and deep neural networks
CN112347937B (en) * 2020-11-06 2023-11-10 南京朗联消防科技有限公司 Indoor fire monitoring system and method based on visual perception
CN112329656B (en) * 2020-11-10 2022-05-10 广西大学 Feature extraction method for human action key frame in video stream
CN112487994A (en) * 2020-12-01 2021-03-12 上海鸢安智能科技有限公司 Smoke and fire detection method and system, storage medium and terminal
CN112669287B (en) * 2020-12-29 2024-04-23 重庆大学 Electrical equipment temperature monitoring method based on image recognition
CN112927461B (en) * 2021-02-24 2023-06-16 武汉辰磊科技有限公司 Early warning decision method and device for charging pile of new energy automobile
CN112949536B (en) * 2021-03-16 2022-09-16 中信重工开诚智能装备有限公司 Fire alarm method based on cloud platform
CN113298027B (en) * 2021-06-15 2023-01-13 济南博观智能科技有限公司 Flame detection method and device, electronic equipment and storage medium
CN113411570B (en) * 2021-06-16 2023-07-14 福建师范大学 Monitoring video brightness anomaly detection method based on cross-period feature discrimination and fusion
CN113537099B (en) * 2021-07-21 2022-11-29 招商局重庆交通科研设计院有限公司 Dynamic detection method for fire smoke in highway tunnel
CN113657250A (en) * 2021-08-16 2021-11-16 南京图菱视频科技有限公司 Flame detection method and system based on monitoring video
CN113744326B (en) * 2021-08-25 2023-08-22 昆明理工大学 Fire detection method based on seed region growth rule in YCRCB color space
CN113780195A (en) * 2021-09-15 2021-12-10 北京林业大学 Forest fire smoke root node detection method based on block extraction
CN114627610A (en) * 2022-03-14 2022-06-14 周瑛 Disaster situation processing method and device based on image recognition
CN114399719B (en) * 2022-03-25 2022-06-17 合肥中科融道智能科技有限公司 Transformer substation fire video monitoring method
CN115394039A (en) * 2022-08-26 2022-11-25 新创碳谷控股有限公司 Flame detection method and device based on double-color space segmentation and storage medium
CN115359616B (en) * 2022-08-26 2023-04-07 新创碳谷集团有限公司 Method for monitoring fire condition in oxidation furnace, computer equipment and storage medium
CN115713833A (en) * 2022-08-30 2023-02-24 新创碳谷集团有限公司 Flame detection method and device based on area characteristics and storage medium
CN115394040B (en) * 2022-08-30 2023-05-23 新创碳谷集团有限公司 Flame detection method, computer equipment and storage medium
CN115223105B (en) * 2022-09-20 2022-12-09 万链指数(青岛)信息科技有限公司 Big data based risk information monitoring and analyzing method and system
CN116630843A (en) * 2023-04-13 2023-08-22 安徽中科数智信息科技有限公司 Fire prevention supervision and management method and system for fire rescue
CN116453064B (en) * 2023-06-16 2023-08-18 烟台黄金职业学院 Method for identifying abnormal road conditions of tunnel road section based on monitoring data
CN116977327B (en) * 2023-09-14 2023-12-15 山东拓新电气有限公司 Smoke detection method and system for roller-driven belt conveyor

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101315667A (en) * 2008-07-04 2008-12-03 南京航空航天大学 Multi-characteristic synthetic recognition method for outdoor early fire disaster
CN108399359A (en) * 2018-01-18 2018-08-14 中山大学 Fire detection method for early warning in real time under a kind of video sequence
CN108447219A (en) * 2018-05-21 2018-08-24 中国计量大学 System and method for detecting fire hazard based on video image

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7609856B2 (en) * 2007-11-13 2009-10-27 Huper Laboratories Co., Ltd. Smoke detection method based on video processing

Non-Patent Citations (2)

Title
Real-time multi-feature based fire flame detection in video; Rui Chi et al.; IET Image Processing; 2017-12-31; Vol. 11, No. 1, pp. 31-37 *
Flame detection algorithm based on multi-feature fusion; 吴茜茵 et al.; CAAI Transactions on Intelligent Systems (《智能系统学报》); 2015-04-30; Vol. 10, No. 2, pp. 240-247 *

Also Published As

Publication number Publication date
CN110516609A (en) 2019-11-29

Similar Documents

Publication Publication Date Title
CN110516609B (en) Fire disaster video detection and early warning method based on image multi-feature fusion
Appana et al. A video-based smoke detection using smoke flow pattern and spatial-temporal energy analyses for alarm systems
Emmy Prema et al. Multi feature analysis of smoke in YUV color space for early forest fire detection
CN107085714B (en) Forest fire detection method based on video
CN105404847B (en) A kind of residue real-time detection method
US7876229B2 (en) Flare monitoring
Khalil et al. Fire detection using multi color space and background modeling
Manfredi et al. Detection of static groups and crowds gathered in open spaces by texture classification
Li et al. Autonomous flame detection in videos with a Dirichlet process Gaussian mixture color model
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN111428631B (en) Visual identification and sorting method for unmanned aerial vehicle flight control signals
Pundir et al. Deep belief network for smoke detection
CN110874592A (en) Forest fire smoke image detection method based on total bounded variation
Gunawardena et al. Computer vision based fire alarming system
CN108399359A (en) Fire detection method for early warning in real time under a kind of video sequence
CN113177467A (en) Flame identification method, system, device and medium
Wang et al. Early smoke detection in video using swaying and diffusion feature
Chen et al. Fire detection using spatial-temporal analysis
KR101690050B1 (en) Intelligent video security system
KR101581162B1 (en) Automatic detection method, apparatus and system of flame, smoke and object movement based on real time images
Deldjoo et al. A novel fuzzy-based smoke detection system using dynamic and static smoke features
Munshi Fire detection methods based on various color spaces and gaussian mixture models
Avalhais et al. Fire detection on unconstrained videos using color-aware spatial modeling and motion flow
Abidha et al. Reducing false alarms in vision based fire detection with nb classifier in eadf framework
JP6457728B2 (en) Laminar smoke detection device and laminar smoke detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant