CN105139429B - Fire detection method based on flame saliency map and spatial pyramid histogram - Google Patents

Fire detection method based on flame saliency map and spatial pyramid histogram

Info

Publication number
CN105139429B
CN105139429B (application CN201510503877.7A)
Authority
CN
China
Prior art keywords
flame
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510503877.7A
Other languages
Chinese (zh)
Other versions
CN105139429A (en)
Inventor
陈喆
殷福亮
李政霖
耿晓馥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN201510503877.7A priority Critical patent/CN105139429B/en
Publication of CN105139429A publication Critical patent/CN105139429A/en
Application granted granted Critical
Publication of CN105139429B publication Critical patent/CN105139429B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B17/00 Fire alarms; Alarms responsive to explosion
    • G08B17/12 Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B17/125 Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions by using a video camera to detect fire or smoke
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A fire detection method based on a flame saliency map and a spatial pyramid histogram comprises the following steps. S1: calculate the intensity change value and foreground degree of each pixel in the image, and obtain a flame saliency map using the consecutive-frame differencing method and a Gaussian mixture flame color model. S2: according to the flame saliency map, screen out candidate flame pixels by thresholding and construct a mask image containing the candidate flame pixels. S3: partition the original image into blocks according to the mask image and judge whether each sub-block contains flame. S4: find the corresponding sub-block in the previous frame for each sub-block, and judge whether flame is present in the image using the distance between the spatial pyramid histograms of corresponding sub-blocks in consecutive frames.

Description

Fire detection method based on flame saliency map and spatial pyramid histogram
Technical Field
The invention relates to the technical field of image processing, in particular to a fire detection method based on a flame saliency map and a spatial pyramid histogram.
Background
In modern society, fires cause heavy losses of life and property every year. In China, roughly 40,000 fires occur annually, killing about 2,000 people, injuring about 3,000 more, and causing direct property losses exceeding one billion yuan. If a fire (especially an incipient fire) is discovered promptly and accurately and an alarm is raised, trapped people and property have sufficient time to be evacuated, and firefighters can extinguish the blaze while it is still controllable. Monitoring for fire is therefore a very important issue in the field of fire control.
Early automatic fire detection techniques mostly used smoke or thermal sensors, determining whether a fire exists by sampling particles, gases, temperature, or radiant heat. Although such techniques achieve basic automatic detection, they have serious limitations. First, they are effective only in enclosed spaces; once applied to a large open area (particularly outdoors), the detection rate drops sharply. Second, they are strongly affected by the environment: wind speed, wind direction, rain, snow, and similar conditions all degrade detection. Third, detection is slow, because gases and particles need time to reach the sensor. In addition, temperature and smoke sensors are expensive to manufacture and easily damaged. The spread of surveillance cameras offers a new possibility for fire detection. Fire monitoring was at first done manually, which wastes considerable human resources without guaranteeing the monitoring effect: the number of monitoring personnel is limited and cannot cover every location, and since human attention cannot stay focused for long, the monitoring effect declines as working hours increase. These limitations make the above techniques impractical for wide deployment in real-world fire detection.
Fire detection technology based on image processing has four main advantages: (1) compared with traditional methods it yields better detection results, i.e., a higher detection rate and a lower false detection rate; (2) when applied to a large space, it can detect the presence of fire accurately and quickly; (3) its detection performance is stable and hardly affected by the external environment; (4) it can be added to existing surveillance systems at low cost, reducing the financial burden of fire prevention. Fire detection based on image processing therefore has broad development prospects.
Celik et al., in "Automatic fire detection in video sequences", use the rapid change of flame shape and area over time as the detection feature. They first screen pixels with an adaptive background model and a flame color model, then apply morphological erosion and dilation to the candidate region, and judge whether flame is present in the video from how the spatial mean and area of the resulting closed region change over time. The technique describes the dynamic characteristics of flame fairly accurately and achieves a high detection rate, but it is sensitive to noise and suffers a high false detection rate when video quality is poor or moving interferers are present.
Habiboglu et al., in the document "Covariance matrix-based fire and flame detection method in video", proposed using the temporal variation of the covariance matrix as the feature for deciding whether flame is present. The technique first screens pixels with a flame color model, then uses covariance matrices computed over several frames together with the values of the three color channels as features, trains a classifier on manually labelled training data, and classifies the candidate pixels. Using the temporal variation of the pixel covariance matrix as the classification feature improves detection performance to some extent, but false detections still occur when a moving interferer with flame-like color is present, and the computational complexity is high.
Disclosure of Invention
To address the problems in the prior art, the invention discloses a fire detection method based on a flame saliency map and a spatial pyramid histogram, comprising the following steps:
s1: calculating the intensity change value and the foreground degree of each pixel point in the image, and obtaining a flame saliency map by adopting a continuous frame difference method and a Gaussian mixture flame color model;
s2: screening out pixel points of the candidate flames by adopting a threshold segmentation method according to the flame saliency map, and constructing a mask image containing the pixel points of the candidate flames;
s3: process the original image according to the mask image as follows: if the value of a pixel in the mask image is 1, the pixel at the corresponding position in the original image keeps its original value; otherwise the pixel value in the original image is set to zero. Divide the processed original image into several sub-blocks, and compute the number of row blocks, the number of column blocks, and the number of candidate flame pixels in each sub-block, so as to judge whether each sub-block contains flame;
s4: find, sub-block by sub-block, the corresponding sub-block in the previous frame image, and judge whether flame is present in the image using the distance between the spatial pyramid histograms of corresponding sub-blocks in consecutive frames.
The flame saliency map in S1 is obtained as follows:

s11: calculating the intensity change value of each pixel point in the image:

$$P_{diff}(x,y,t) = \frac{|I(x,y,t) - I(x,y,t-1)|}{255} \qquad (1)$$

where $P_{diff}(x,y,t)$ is the intensity change value of the pixel at position (x, y) at time t, and $I(x,y,t)$ is the intensity value of the pixel at position (x, y) at time t;
s12: calculating the foreground degree of each pixel, i.e., the probability that the pixel belongs to the foreground in the foreground detection stage:

$$P_F(x,y,t) = \sum_{i=0}^{N-1} \omega_i \log P_{diff}(x,y,t-i) \qquad (2)$$

where $P_F(x,y,t)$ is the foreground degree of the pixel at position (x, y) at time t and $\omega_i$ are weighting coefficients; by summing the weighted logarithms of the intensity changes of the N frames before time t, the foreground degree describes the continuous intensity variation characteristic of a flame region;
s13: according to the trained Gaussian mixture flame color model, calculating the probability that the color of each pixel is a flame color in the RGB color space:

$$P_c(q(x,y,t)) = \sum_{k=1}^{K} \alpha_k \, \eta_k\big(q(x,y,t)\,|\,\mu_k, \Sigma_k\big) \qquad (3)$$

where $q(x,y,t)$ is the color vector of the pixel at position (x, y) at time t in the RGB color space,

$$q(x,y,t) = \{R(x,y,t), G(x,y,t), B(x,y,t)\} \qquad (4)$$

K is the number of unimodal Gaussian density components in the trained Gaussian mixture flame color model, $\mu_k$ and $\Sigma_k$ are the mean vector and covariance matrix of the k-th component, $\alpha_k$ is the weight of each Gaussian component, and the function $\eta_k(\cdot)$ is

$$\eta_k(q\,|\,\mu_k, \Sigma_k) = \frac{1}{(2\pi)^{D/2}|\Sigma_k|^{1/2}} \exp\!\Big[-\frac{1}{2}(q-\mu_k)^T \Sigma_k^{-1}(q-\mu_k)\Big] \qquad (5)$$

where D is the dimension of the vector; in the present invention D = 3;
s14: constructing a flame saliency map according to the foreground degree and the flame color probability:
calculate the saliency of each pixel from the pixel foreground degree of formula (2) and the flame color probability of formula (3):

$$f_s(x,y,t) = P_F(x,y,t) + \log P_c(q(x,y,t)) \qquad (6)$$

where $f_s(x,y,t)$ is the saliency of the pixel at position (x, y) at time t, and is also the pixel value at position (x, y) in the flame saliency map.
In S2, the following method is adopted:
according to the flame saliency map obtained by formula (6), interference points are filtered out by threshold segmentation, giving a mask image containing the candidate flame pixels:

$$M_s(x,y,t) = \begin{cases} 1, & f_s(x,y,t) > \tau_f \\ 0, & \text{otherwise} \end{cases} \qquad (7)$$

where $f_s$ is the flame saliency map of the input video image and $\tau_f$ is an experimentally determined threshold; if $M_s(x,y,t) = 1$, the pixel may be a flame pixel and needs further examination; if $M_s(x,y,t) = 0$, the point is not within a flame region and requires no subsequent processing.
S3 processes the sub-blocks as follows:

s31: partition the image according to the sub-block width $W_b$ and height $H_b$; if the width and height of the image are $W_i$ and $H_i$, the number of row blocks $N_r$ and column blocks $N_c$ into which the image is divided are

$$N_r = \lfloor H_i / H_b \rfloor \qquad (8)$$

$$N_c = \lfloor W_i / W_b \rfloor \qquad (9)$$

where $\lfloor\cdot\rfloor$ denotes the floor operation;

s32: count the candidate flame pixels in each sub-block as follows, and judge sub-blocks with too few flame pixels to be non-flame sub-blocks that receive no subsequent processing:

$$M_{b,i}(t) = \begin{cases} 1, & \dfrac{1}{W_b H_b}\displaystyle\sum_{(x,y)\in B_i(t)} M_s(x,y,t) > T_b \\ 0, & \text{otherwise} \end{cases} \qquad (10)$$

where $M_{b,i}(t)$ indicates whether the i-th sub-block at time t may contain flame: a value of 1 means it may contain flame and spatial pyramid histogram statistics are computed for it, while 0 means it contains no flame and needs no further processing; $B_i(t)$ denotes the region covered by the i-th sub-block, and $T_b$ is the ratio threshold of candidate flame pixels among all pixels of a sub-block.
In S4, the spatial pyramid histogram of each sub-block of the image is computed, and whether a sub-block contains flame is judged from the distance between the pyramid histograms of corresponding blocks in consecutive multi-frame images:
s41: the spatial pyramid histogram is computed on the R-channel image: according to the mask image containing the candidate flame pixels, the R-channel value of every pixel outside the candidate flame region is set to zero while candidate pixels keep their original values, giving the masked image F(x, y, t);
S42: for each sub-block M that may contain a flameb,iThe subblock with (t) ═ 1 calculates its spatial pyramid histogram H (B) on the F (x, y, t) imagei(t)), namely, respectively carrying out blocking operation again on the subblocks according to the resolutions of 0-L, and dividing the width and the height of each subblock into 2 at the first level resolutionlSegment, where L is 1, 2, … L, subdividing the sub-block into a number of tiles, counting the color histogram of each tile on the R channel, and finally multiplying the histogram vector of each tile by a weight βlThe space pyramid histogram of the sub-block is formed by connecting the two end to end:
s43: for each sub-block of the current frame that may contain flame, find the corresponding sub-block in the previous frame image: in the previous frame, taking the position of the currently processed sub-block as the center, search a range of -R to +R, compute the spatial pyramid histograms of all sub-blocks in that range, and compute the distance between each such histogram and that of the currently processed sub-block; the sub-block with the minimum distance is the one corresponding to the currently processed sub-block;
s44: compute the distance between the spatial pyramid histogram of the currently processed sub-block and that of the corresponding sub-block in the previous frame determined in step S43; average the results over multiple frames and use the average as the final decision basis, as in formula (15):

$$Dis_{mean}(B_i(t)) = \frac{1}{N_m}\Big( dis\big[H(B_{c,i}(t)), H(B_{c,i}(t-1))\big] + \sum_{j=t-1}^{t-N_m+1} dis\big[H(B_{c,i}(j)), H(B_{c,i}(j-1))\big] \Big) \qquad (15)$$

where $B_{c,i}(t-1)$ denotes the sub-block in the previous frame corresponding to the i-th sub-block at time t, and $N_m$ is the number of frames averaged;
s45: judge whether flame exists in the currently processed sub-block from the distance computed in step S44, as shown in formula (16), where $T_s$ is the decision threshold; if at least one sub-block in the image is judged to contain flame, the frame image is considered to contain flame.
With the above technical scheme, the fire detection method based on a flame saliency map and a spatial pyramid histogram provided by the invention combines, in building the flame saliency map, the foreground detection stage based on consecutive-frame differencing with the soft decision of the Gaussian-mixture flame color model, avoiding the loss of detection rate that separate hard decisions in the two stages would cause. The proposed spatial pyramid histogram effectively describes the dynamic characteristics of flame and helps reduce false detections caused by flame-colored interferers.
Drawings
To illustrate the embodiments of the present application or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the disclosed method;
FIG. 2 is a diagram of a part of experimental results based on a spatial pyramid histogram in the present invention.
Detailed Description
To make the technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings of the embodiments:
as shown in fig. 1, a fire detection method based on a flame saliency map and a spatial pyramid histogram specifically includes the following steps:
s1: calculate the intensity change value and the foreground degree of each pixel in the image, and obtain a flame saliency map using the consecutive-frame differencing method and a Gaussian mixture flame color model. The flame color model gives the probability that a pixel has flame color; soft decisions are made separately in the two stages and the results are then combined into the flame saliency map. The flame saliency map has the same size as the image, and each point represents the probability that the pixel at the corresponding image position belongs to a flame region. S1 specifically proceeds as follows:
s11: calculating the intensity change value of each pixel point in the image:

$$P_{diff}(x,y,t) = \frac{|I(x,y,t) - I(x,y,t-1)|}{255} \qquad (1)$$

where $P_{diff}(x,y,t)$ is the intensity change value of the pixel at position (x, y) at time t, and $I(x,y,t)$ is the intensity value of the pixel at position (x, y) at time t;
s12: calculating the foreground degree of each pixel, i.e., the probability that the pixel belongs to the foreground in the foreground detection stage:

$$P_F(x,y,t) = \sum_{i=0}^{N-1} \omega_i \log P_{diff}(x,y,t-i) \qquad (2)$$

where $P_F(x,y,t)$ is the foreground degree of the pixel at position (x, y) at time t and $\omega_i$ are weighting coefficients; by summing the weighted logarithms of the intensity changes of the N frames before time t, the foreground degree describes the continuous intensity variation characteristic of a flame region;
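As an illustration (not part of the original disclosure), formulas (1) and (2) map directly onto array operations. The following Python/NumPy sketch computes $P_{diff}$ for one frame pair and $P_F$ from a buffer of the N most recent difference maps; the uniform default for the weights $\omega_i$ and the small eps guarding log(0) are assumptions, since the patent does not fix either.

```python
import numpy as np

def intensity_change(I_t, I_prev):
    """Per-pixel intensity change P_diff(x, y, t), formula (1)."""
    return np.abs(I_t.astype(np.float64) - I_prev.astype(np.float64)) / 255.0

def foreground_degree(diff_history, weights=None, eps=1e-6):
    """Foreground degree P_F(x, y, t), formula (2): weighted sum of the log
    intensity changes over the N most recent frames. The patent does not fix
    the weights omega_i, so uniform weights are assumed here; eps guards
    log(0) at pixels whose intensity did not change."""
    N = len(diff_history)
    if weights is None:
        weights = [1.0 / N] * N
    return sum(w * np.log(d + eps) for w, d in zip(weights, diff_history))
```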
s13: according to the trained Gaussian mixture flame color model, calculating the probability that the color of each pixel is a flame color in the RGB color space:

$$P_c(q(x,y,t)) = \sum_{k=1}^{K} \alpha_k \, \eta_k\big(q(x,y,t)\,|\,\mu_k, \Sigma_k\big) \qquad (3)$$

where $q(x,y,t)$ is the color vector of the pixel at position (x, y) at time t in the RGB color space,

$$q(x,y,t) = \{R(x,y,t), G(x,y,t), B(x,y,t)\} \qquad (4)$$

K is the number of unimodal Gaussian density components in the trained Gaussian mixture flame color model, $\mu_k$ and $\Sigma_k$ are the mean vector and covariance matrix of the k-th component, $\alpha_k$ is the weight of each Gaussian component, and the function $\eta_k(\cdot)$ is

$$\eta_k(q\,|\,\mu_k, \Sigma_k) = \frac{1}{(2\pi)^{D/2}|\Sigma_k|^{1/2}} \exp\!\Big[-\frac{1}{2}(q-\mu_k)^T \Sigma_k^{-1}(q-\mu_k)\Big] \qquad (5)$$

where D is the dimension of the vector, which is 3 in this context.
S14: constructing a flame saliency map according to the foreground degree and the flame color probability:
calculate the saliency of each pixel from the pixel foreground degree of formula (2) and the flame color probability of formula (3):

$$f_s(x,y,t) = P_F(x,y,t) + \log P_c(q(x,y,t)) \qquad (6)$$

where $f_s(x,y,t)$ is the saliency of the pixel at position (x, y) at time t, and is also the pixel value at position (x, y) in the flame saliency map.
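A minimal sketch of formulas (3) through (6), assuming NumPy/SciPy conveniences rather than the patent's own implementation: the density $\eta_k$ of formula (5) is evaluated with scipy.stats.multivariate_normal, and a small eps is assumed inside the logarithm of formula (6) to keep it finite at zero-probability pixels.

```python
import numpy as np
from scipy.stats import multivariate_normal

def flame_color_probability(frame_rgb, alphas, means, covs):
    """P_c(q(x, y, t)), formula (3): evaluate the K-component Gaussian
    mixture flame color model at every RGB pixel (eta_k is formula (5))."""
    h, w, _ = frame_rgb.shape
    q = frame_rgb.reshape(-1, 3).astype(np.float64)
    p = np.zeros(q.shape[0])
    for alpha_k, mu_k, sigma_k in zip(alphas, means, covs):
        p += alpha_k * multivariate_normal.pdf(q, mean=mu_k, cov=sigma_k)
    return p.reshape(h, w)

def flame_saliency(P_F, P_c, eps=1e-12):
    """Flame saliency map f_s = P_F + log P_c, formula (6); eps avoids log(0)."""
    return P_F + np.log(P_c + eps)
```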
The GMM flame color model mentioned above is a Gaussian mixture model (GMM) used to describe the probability that the color of a given pixel is a flame color. The model is trained in advance on a large number of manually labelled flame pixels, so it can describe the probability that a pixel's color belongs to the flame colors; the color space used is RGB. The initial values of the GMM parameters are obtained with the k-means algorithm, and the optimal parameters are then obtained by EM optimization. The GMM training process is as follows:
1) For the n observations $Q = [q_1, q_2, \ldots, q_N]$, apply the k-means algorithm to obtain initial parameter values for a Gaussian mixture model with K components.
2) E step: for each data point in the training set, compute the probability that the i-th data point belongs to the k-th Gaussian component. M step: update the parameters and the weight of each Gaussian component from the E-step results.
3) Repeat step 2) until the likelihood function converges.
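For reference, scikit-learn's GaussianMixture implements exactly this recipe (k-means initialization followed by EM iterations until the log-likelihood gain drops below a tolerance), so a hedged training sketch can be as short as the following; the component count K = 5 is an illustrative assumption, as the patent does not specify it.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_flame_color_model(Q, K=5):
    """Train the GMM flame color model on manually labelled flame-pixel RGB
    values Q of shape (n, 3): k-means initialization (step 1), then EM
    (steps 2-3) until convergence. K = 5 is an illustrative choice."""
    gmm = GaussianMixture(n_components=K, covariance_type='full',
                          init_params='kmeans', max_iter=200, tol=1e-4)
    gmm.fit(np.asarray(Q, dtype=np.float64))
    return gmm.weights_, gmm.means_, gmm.covariances_  # alpha_k, mu_k, Sigma_k
```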
S2: and screening out pixel points of the candidate flames by adopting a threshold segmentation method according to the flame saliency map, and constructing a mask image containing the pixel points of the candidate flames. The method specifically adopts the following steps:
according to the flame saliency map obtained by formula (6), interference points are filtered out by threshold segmentation, giving a mask image containing the candidate flame pixels:

$$M_s(x,y,t) = \begin{cases} 1, & f_s(x,y,t) > \tau_f \\ 0, & \text{otherwise} \end{cases} \qquad (7)$$

where $f_s$ is the flame saliency map of the input video image and $\tau_f$ is an experimentally determined threshold; if $M_s(x,y,t) = 1$, the pixel may be a flame pixel and needs further examination; if $M_s(x,y,t) = 0$, the point is not within a flame region and requires no subsequent processing.
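A one-line sketch of the thresholding of formula (7), with the comparison direction taken from the reconstruction above (saliency above $\tau_f$ marks a candidate):

```python
import numpy as np

def candidate_mask(f_s, tau_f):
    """Mask image M_s, formula (7): 1 where the saliency exceeds the
    experimentally chosen threshold tau_f, 0 elsewhere."""
    return (f_s > tau_f).astype(np.uint8)
```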
S3: the original image is processed according to the mask image as follows: if the value of the pixel point in the mask image is 1, the pixel point in the original image at the corresponding position keeps the original value, otherwise, the value of the pixel point in the original image is zero; dividing the processed original image into a plurality of sub-blocks, and calculating the number of line blocks, the number of column blocks and the number of candidate flame pixels in each sub-block of the divided image so as to judge whether the sub-block contains flames or not;
s31: partition the image according to the sub-block width $W_b$ and height $H_b$; if the width and height of the image are $W_i$ and $H_i$, the number of row blocks $N_r$ and column blocks $N_c$ into which the image is divided are

$$N_r = \lfloor H_i / H_b \rfloor \qquad (8)$$

$$N_c = \lfloor W_i / W_b \rfloor \qquad (9)$$

where $\lfloor\cdot\rfloor$ denotes the floor operation;

s32: count the candidate flame pixels in each sub-block as follows, and judge sub-blocks with too few flame pixels to be non-flame sub-blocks that receive no subsequent processing:

$$M_{b,i}(t) = \begin{cases} 1, & \dfrac{1}{W_b H_b}\displaystyle\sum_{(x,y)\in B_i(t)} M_s(x,y,t) > T_b \\ 0, & \text{otherwise} \end{cases} \qquad (10)$$

where $M_{b,i}(t)$ indicates whether the i-th sub-block at time t may contain flame: a value of 1 means it may contain flame and spatial pyramid histogram statistics are computed for it, while 0 means it contains no flame and needs no further processing; $B_i(t)$ denotes the region covered by the i-th sub-block, and $T_b$ is the ratio threshold of candidate flame pixels among all pixels of a sub-block.
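A sketch of steps S31-S32 under the reconstructed formulas (8)-(10): integer division supplies the floor operation, and block.mean() is the candidate-pixel ratio compared against $T_b$.

```python
import numpy as np

def flag_candidate_blocks(M_s, W_b, H_b, T_b):
    """Partition the mask into H_b x W_b sub-blocks (formulas (8)-(9)) and set
    M_b,i = 1 when the fraction of candidate flame pixels in block i exceeds
    the ratio threshold T_b (formula (10))."""
    H_i, W_i = M_s.shape
    N_r, N_c = H_i // H_b, W_i // W_b        # floor operation of formulas (8)-(9)
    flags = np.zeros((N_r, N_c), dtype=np.uint8)
    for r in range(N_r):
        for c in range(N_c):
            block = M_s[r * H_b:(r + 1) * H_b, c * W_b:(c + 1) * W_b]
            flags[r, c] = 1 if block.mean() > T_b else 0
    return flags
```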
S4: and searching sub-blocks corresponding to the previous frame of image one by one, and judging whether the image has flame or not by utilizing the distance between the space pyramid histograms of the sub-blocks corresponding to the continuous frames.
In computing the spatial pyramid histogram, a series of grids is built at resolutions 0 through L: at level-l resolution the width and height are each divided equally into $2^l$ segments, dividing the image into $4^l$ sub-regions in total, and the color or gray-level histogram is counted in each candidate sub-region that may contain flame. Finally the histogram vectors of all sub-regions at the different resolutions are concatenated end to end into the pyramid histogram vector; for one image, the dimension of the spatial pyramid histogram is

$$D_H = B_{num} \sum_{l=0}^{L} 4^l$$

where $B_{num}$ is the number of bins of the histogram.
In S4, the spatial pyramid histogram of each sub-block of the image is computed, and whether a sub-block contains flame is judged from the distance between the pyramid histograms of corresponding blocks in consecutive multi-frame images:
s41: the spatial pyramid histogram is computed on the R-channel image: according to the mask image containing the candidate flame pixels, the R-channel value of every pixel outside the candidate flame region is set to zero while candidate pixels keep their original values, giving the masked image F(x, y, t);
S42: for each sub-block M that may contain a flameb,iThe subblock with (t) ═ 1 calculates its spatial pyramid histogram H (B) on the F (x, y, t) imagei(t)), namely, respectively carrying out blocking operation again on the subblocks according to the resolutions of 0-L, and dividing the width and the height of each subblock into 2 at the first level resolutionlSegment, where L is 1, 2, … L, subdividing the sub-block into a number of small blocks, each small block being counted atColor histogram on R channel, and finally multiplying histogram vector of each patch by weight βlThe space pyramid histogram of the sub-block is formed by connecting the two end to end:
s43: for each sub-block of the current frame that may contain flame, find the corresponding sub-block in the previous frame image: in the previous frame, taking the position of the currently processed sub-block as the center, search a range of -R to +R, compute the spatial pyramid histograms of all sub-blocks in that range, and compute the distance between each such histogram and that of the currently processed sub-block; the sub-block with the minimum distance is the one corresponding to the currently processed sub-block;
If $B_i(t)$ denotes the sub-block $R(x_{i1}{:}x_{i2}, y_{i1}{:}y_{i2}, t)$, the above process is expressed by formula (13):

$$B_{c,i}(t-m) = \arg\min_{B \in \mathcal{N}_R(B_i)} dis\big[H(B), H(B_i(t))\big] \qquad (13)$$

where $B_{c,i}(t-m)$ is the sub-block in the m-th previous frame corresponding to the sub-block being processed in the current frame, $\mathcal{N}_R(B_i)$ denotes the set of sub-blocks within the search range, and $H(\cdot)$ is the spatial pyramid histogram of a region; $dis[\cdot,\cdot]$ denotes the function computing the distance between two histograms, which may be computed with the histogram intersection function [5]:

$$dis(H_x, H_y) = \sum_{i=1}^{D_H} \min\big(H_x(i), H_y(i)\big) \qquad (14)$$

where $H_x(i)$, $H_y(i)$ are the i-th components of the two histogram vectors and $D_H$ is the dimension of the vectors.
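A sketch of the correspondence search of step S43, reusing the spatial_pyramid_histogram sketch above. The search radius R = 4 is an assumption, and since formula (14) is a similarity (larger for more similar histograms), the best match is taken to be the largest intersection; the patent's phrase "minimum distance" is read in that sense.

```python
import numpy as np

def hist_intersection(H_x, H_y):
    """Histogram intersection, formula (14): sum_i min(H_x(i), H_y(i))."""
    return float(np.minimum(H_x, H_y).sum())

def find_corresponding_block(prev_R, cur_hist, x0, y0, W_b, H_b, R=4, **spm_kw):
    """Correspondence search of step S43 (formula (13)): scan a window of
    +/-R pixels around (x0, y0) in the previous frame's masked R channel and
    return the block whose pyramid histogram best matches cur_hist."""
    best_score, best_pos = -np.inf, (x0, y0)
    H, W = prev_R.shape
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            x, y = x0 + dx, y0 + dy
            if 0 <= x <= W - W_b and 0 <= y <= H - H_b:
                cand = spatial_pyramid_histogram(prev_R[y:y + H_b, x:x + W_b], **spm_kw)
                score = hist_intersection(cur_hist, cand)
                if score > best_score:
                    best_score, best_pos = score, (x, y)
    return best_pos, best_score
```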
S44: calculating the distance between the current processing subblock and the spatial pyramid histogram of the corresponding subblock in the previous frame according to the corresponding subblock determined in the step S43; averaging the calculation results of multiple frames, and then using the averaged result as a final decision basis, as shown in formula (15):
wherein B isc,i(t-1) denotes the corresponding subblock of the ith subblock in the previous frame at time t, NmRepresents the number of frames averaged;
s45: judge whether flame exists in the currently processed sub-block from the distance computed in step S44, as shown in formula (16), where $T_s$ is the decision threshold; if at least one sub-block in the image is judged to contain flame, the frame image is considered to contain flame.
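A sketch of the temporal decision of steps S44-S45. Because formula (16) itself is not reproduced in the text, the direction of the comparison with $T_s$ is exposed as an explicit flag rather than hard-coded:

```python
import numpy as np

def block_contains_flame(distance_history, T_s, larger_means_flame=True):
    """Final per-block decision: average the last N_m inter-frame histogram
    distances (formula (15)) and compare the mean with the threshold T_s
    (formula (16)). larger_means_flame=True encodes the assumption that a
    flickering flame changes its histogram more between frames than a
    rigid moving object does; the patent leaves the direction unstated."""
    dis_mean = float(np.mean(distance_history))  # Dis_mean(B_i(t))
    return dis_mean > T_s if larger_means_flame else dis_mean < T_s
```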
To verify the effectiveness of the invention, computer simulation experiments were carried out. The experimental platform was an Intel(R) Core(TM) i5 2.4 GHz CPU with 6 GB of memory and an AMD Radeon HD 6470 graphics card, running Windows 7 Ultimate; the software programming environment was Matlab 2014a.
The videos used in the experiments are color videos captured by ordinary CCD cameras, drawn from test videos published by laboratories researching image-based fire detection, such as Keimyung University in Korea, together with videos shot specifically to evaluate the performance of the detection method.
Two criteria are mainly used to measure the quality of a flame detection algorithm: the detection rate $r_d$ and the false detection rate (false alarm rate) $f_a$, defined by formulas (20) and (21), respectively:

$$r_d = \frac{n_{tp}}{n_p} \qquad (20)$$

$$f_a = \frac{n_{fp}}{n_n} \qquad (21)$$

where $n_p$ is the number of frames containing flame, $n_n$ is the number of frames containing no flame, $n_{tp}$ is the number of frames in which the algorithm detects flame and the image does contain flame (true positive frames), and $n_{fp}$ is the number of frames in which the algorithm reports flame although the image contains none (false positive frames). The detection rate and the false alarm rate together reflect the performance of a flame detection algorithm. The detection rate is the probability that flame is detected when it is present; the higher it is, the more reliable the detection system. The false alarm rate is the probability that an alarm is raised when no flame is present, reflecting to some extent the stability of the algorithm; the lower it is, the less the detection system is disturbed by other objects and the more practical it is to operate.
Simulation experiments comparing the method proposed herein with the methods of documents [1] and [2] were carried out; the specific simulation results are shown in the figure and table below. The test videos come from the fire detection laboratory of Keimyung University, Korea [6].
TABLE 1 comparison of Algorithm Performance
As can be seen from Table 1, the method proposed herein outperforms the methods of documents [1] and [2] in both detection rate and false detection rate. For the burning trees in Fig. 2(a), the heavy surrounding smoke partially occludes the flame; an ordinary hard-decision color model would miss many flame-region pixels, but the flame saliency map reduces or even avoids this and yields a better detection result. The video of Fig. 2(b) was shot at night in dim ambient light, and its successful detection shows that the algorithm works well in both bright and dark environments. In Fig. 2(c) the flame is far from the camera, so the burning area is small and hard to detect. The moving pedestrians in Figs. 2(d) and 2(f) and the red vehicle in Fig. 2(e) are strong interferers, yet the algorithm based on the spatial pyramid histogram excludes them and achieves a low false detection rate.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited thereto. Any equivalent substitution or modification, within the technical scope disclosed by the present invention, of the technical solution and its inventive concept by a person skilled in the art shall fall within the protection scope of the present invention.
Reference to the literature
[1] CELIK T, DEMIREL H, OZKARAMANLI H. Automatic fire detection in video sequences. Fire Safety Journal, 2006, 6(3): 233-240.
[2] HABIBOGLU Y H, GUNAY O, CETIN A E. Covariance matrix-based fire and flame detection method in video. Mach Vision Appl, 2012, 23(6): 1103-1113.
[3] TÖREYIN B U. Fire detection algorithms using multimodal signal and image analysis. Bilkent University, 2009.
[4] WANG Shuwen. Research on a distributed microphone array positioning method. Dalian University of Technology, 2013.
[5] LAZEBNIK S, SCHMID C, PONCE J. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2006.
[6] KO B, CHEONG K H, NAM J Y. Early fire detection algorithm based on irregular patterns of flames and hierarchical Bayesian networks. Fire Safety J, 2010, 45(4): 262-270.

Claims (4)

1. A fire detection method based on a flame saliency map and a spatial pyramid histogram is characterized in that: the method comprises the following steps:
s1: calculating the intensity change value and the foreground degree of each pixel point in the image, and obtaining a flame saliency map by adopting a continuous frame difference method and a Gaussian mixture flame color model;
s2: screening out pixel points of the candidate flames by adopting a threshold segmentation method according to the flame saliency map, and constructing a mask image containing the pixel points of the candidate flames;
s3: process the original image according to the mask image as follows: if the value of a pixel in the mask image is 1, the pixel at the corresponding position in the original image keeps its original value; otherwise the pixel value in the original image is set to zero. Divide the processed original image into several sub-blocks, and compute the number of row blocks, the number of column blocks, and the number of candidate flame pixels in each sub-block, so as to judge whether each sub-block contains flame;
s4: find, sub-block by sub-block, the corresponding sub-block in the previous frame image, and judge whether flame is present in the image using the distance between the spatial pyramid histograms of corresponding sub-blocks in consecutive frames;
It is further characterized in that S1 specifically adopts the following method:
s11: calculating the intensity change value of each pixel point in the image:
$$P_{diff}(x,y,t) = \frac{|I(x,y,t) - I(x,y,t-1)|}{255} \qquad (1)$$

where $P_{diff}(x,y,t)$ is the intensity change value of the pixel at position (x, y) at time t, and $I(x,y,t)$ is the intensity value of the pixel at position (x, y) at time t;
s12: calculating the foreground degree of each pixel point in the image, wherein the foreground degree is the probability value that the pixel point belongs to the foreground in the foreground detection stage:
$$P_F(x,y,t) = \sum_{i=0}^{N-1} \omega_i \log P_{diff}(x,y,t-i) \qquad (2)$$

where $P_F(x,y,t)$ represents the foreground degree of the pixel at position (x, y) at time t, obtained by summing the weighted logarithms of the intensity changes of the N frames before time t to describe the continuous intensity variation of a flame region, and $\omega_i$ are the weighting coefficients;
s13: according to the trained Gaussian mixture flame color model, calculating the probability that the color of each pixel point is the flame color in the RGB color space:
$$P_c(q(x,y,t)) = \sum_{k=1}^{K} \alpha_k \, \eta_k\big(q(x,y,t)\,|\,\mu_k, \Sigma_k\big) \qquad (3)$$
wherein q (x, y, t) is a color vector of the pixel point at the position (x, y) at the time t in the RGB color space, and the representation form is
q(x,y,t)={R(x,y,t),G(x,y,t),B(x,y,t)} (4)
K is the number of unimodal Gaussian density components in the trained Gaussian mixture flame color model, $\mu_k$ and $\Sigma_k$ are the mean vector and covariance matrix of the k-th component, $\alpha_k$ is the weight of each Gaussian component, and the function $\eta_k$ is

$$\eta_k(q\,|\,\mu_k, \Sigma_k) = \frac{1}{(2\pi)^{D/2}|\Sigma_k|^{1/2}} \exp\!\Big[-\frac{1}{2}(q-\mu_k)^T \Sigma_k^{-1}(q-\mu_k)\Big] \qquad (5)$$

where D is the dimension of the vector q, with D = 3;
s14: constructing a flame saliency map according to the foreground degree and the flame color probability:
calculating the saliency of each pixel point according to the pixel foreground degree calculated by the formula (2) and the flame color probability calculated by the formula (3)
$$f_s(x,y,t) = P_F(x,y,t) + \log P_c(q(x,y,t)) \qquad (6)$$

where $f_s(x,y,t)$ is the saliency of the pixel at position (x, y) at time t, and is also the pixel value at position (x, y) in the flame saliency map.
2. The fire detection method based on the flame saliency map and the spatial pyramid histogram of claim 1, wherein: in S2, the following method is adopted:
according to the flame saliency map obtained by formula (6), interference points are filtered out by threshold segmentation, giving a mask image containing the candidate flame pixels:

$$M_s(x,y,t) = \begin{cases} 1, & f_s(x,y,t) > \tau_f \\ 0, & \text{otherwise} \end{cases} \qquad (7)$$

where $f_s$ is the flame saliency map of the input video image and $\tau_f$ is an experimentally determined threshold; if $M_s(x,y,t) = 1$, the pixel may be a flame pixel and needs further examination; if $M_s(x,y,t) = 0$, the point is not within a flame region and requires no subsequent processing.
3. The fire detection method based on the flame saliency map and the spatial pyramid histogram of claim 1, wherein S3 processes the sub-blocks as follows:
s31: partition the image according to the sub-block width $W_b$ and height $H_b$; if the width and height of the image are $W_i$ and $H_i$, the number of row blocks $N_r$ and column blocks $N_c$ into which the image is divided are

$$N_r = \lfloor H_i / H_b \rfloor \qquad (8)$$

$$N_c = \lfloor W_i / W_b \rfloor \qquad (9)$$

where $\lfloor\cdot\rfloor$ denotes the floor operation;

s32: count the candidate flame pixels in each sub-block as follows, and judge sub-blocks with too few flame pixels to be non-flame sub-blocks that receive no subsequent processing:

$$M_{b,i}(t) = \begin{cases} 1, & \dfrac{1}{W_b H_b}\displaystyle\sum_{(x,y)\in B_i(t)} M_s(x,y,t) > T_b \\ 0, & \text{otherwise} \end{cases} \qquad (10)$$

where $M_{b,i}(t)$ indicates whether the i-th sub-block at time t may contain flame: 1 means it may contain flame and spatial pyramid histogram statistics are computed for it, 0 means it contains no flame and needs no subsequent processing; $M_s(x,y,t)$ is the mask image containing the candidate flame pixels, $B_i(t)$ is the region covered by the i-th sub-block, and $T_b$ is the ratio threshold of candidate flame pixels among all pixels of a sub-block.
4. The fire detection method based on a flame saliency map and spatial pyramid histogram as claimed in claim 3, characterized in that in S4 the spatial pyramid histogram of each sub-block of the image is computed, and whether a sub-block contains flame is judged from the distance between the pyramid histograms of corresponding blocks in consecutive multi-frame images:
s41: the spatial pyramid histogram is computed on the R-channel image: according to the mask image $M_s(x,y,t)$ containing the candidate flame pixels, the R-channel value of every pixel outside the candidate flame region is set to zero while candidate pixels keep their original values, giving the masked image F(x, y, t);
s42: for each sub-block $B_i(t)$ that may contain flame ($M_{b,i}(t) = 1$), compute its spatial pyramid histogram $H(B_i(t))$ on the image F(x, y, t): re-partition the sub-block at resolutions 0 through L, dividing its width and height into $2^l$ segments at level-l resolution (l = 1, 2, ..., L), which subdivides the sub-block into a number of tiles; count the color histogram of each tile on the R channel, multiply each tile's histogram vector by the weight $\beta_l$, and concatenate them end to end to form the sub-block's spatial pyramid histogram:

$$\beta_l = \frac{1}{2^{L-l}} = 2^{l-L} \qquad (12)$$
s43: for each sub-block of the current frame that may contain flame, find the corresponding sub-block in the previous frame image: in the previous frame, taking the position of the currently processed sub-block as the center, search a range of -R to +R, compute the spatial pyramid histograms of all sub-blocks in that range, and compute the distance between each such histogram and that of the currently processed sub-block; the sub-block with the minimum distance is the one corresponding to the currently processed sub-block;
s44: calculating the distance between the current processing subblock and the spatial pyramid histogram of the corresponding subblock in the previous frame according to the corresponding subblock determined in the step S43; averaging the calculation results of multiple frames, and then using the averaged result as a final decision basis, as shown in formula (15):
$$Dis_{mean}(B_i(t)) = \frac{1}{N_m}\Big( dis\big[H(B_{c,i}(t)), H(B_{c,i}(t-1))\big] + \sum_{j=t-1}^{t-N_m+1} dis\big[H(B_{c,i}(j)), H(B_{c,i}(j-1))\big] \Big) \qquad (15)$$

where $B_{c,i}(t)$ denotes the sub-block corresponding to the i-th sub-block in the current frame at time t, $B_{c,i}(t-1)$ denotes the corresponding sub-block of the i-th sub-block in the previous frame, and $N_m$ is the number of frames averaged; $dis[\cdot,\cdot]$, the function computing the distance between two histograms, is calculated as

$$dis(H_x, H_y) = \sum_{i=1}^{D_H} \min\big(H_x(i), H_y(i)\big) \qquad (14)$$

where $H_x(i)$, $H_y(i)$ are the i-th components of the two histogram vectors $H_x$ and $H_y$, and $D_H$ is their dimension;
s45: judging whether flame exists in the currently processed subblock according to the distance calculated in the step S44, wherein the method is as shown in the formula (16)
where $T_s$ is the decision threshold; if at least one sub-block in the image is judged to contain flame, the frame image is considered to contain flame.
CN201510503877.7A 2015-08-14 2015-08-14 Fire detection method based on flame saliency map and spatial pyramid histogram Expired - Fee Related CN105139429B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510503877.7A CN105139429B (en) 2015-08-14 2015-08-14 Fire detection method based on flame saliency map and spatial pyramid histogram

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510503877.7A CN105139429B (en) 2015-08-14 2015-08-14 Fire detection method based on flame saliency map and spatial pyramid histogram

Publications (2)

Publication Number Publication Date
CN105139429A CN105139429A (en) 2015-12-09
CN105139429B true CN105139429B (en) 2018-03-13

Family

ID=54724761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510503877.7A Expired - Fee Related CN105139429B (en) 2015-08-14 2015-08-14 Fire detection method based on flame saliency map and spatial pyramid histogram

Country Status (1)

Country Link
CN (1) CN105139429B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105741481B (en) * 2016-04-21 2018-07-06 大连理工大学 A kind of fire monitoring positioning device and fire monitoring localization method based on binocular camera
CN106546351A (en) * 2016-11-08 2017-03-29 天津艾思科尔科技有限公司 A kind of explosion-proof detector with temperature detection function
CN107016679B (en) * 2017-02-23 2019-10-11 中国南方电网有限责任公司超高压输电公司广州局 A kind of mountain fire detection method based on single picture
CN107328392A (en) * 2017-07-05 2017-11-07 贵州大学 Flame abnormal signal extracting method based on Euler's video enhancement techniques
CN110084160B (en) * 2019-04-16 2021-08-10 东南大学 Video forest smoke and fire detection method based on motion and brightness significance characteristics
CN110334685A (en) * 2019-07-12 2019-10-15 创新奇智(北京)科技有限公司 Flame detecting method, fire defector model training method, storage medium and system
CN111898549B (en) * 2020-07-31 2024-07-12 平安国际智慧城市科技股份有限公司 Fire monitoring method and device based on artificial intelligence, computer equipment and medium
CN112949453B (en) * 2021-02-26 2023-12-26 南京恩博科技有限公司 Training method of smoke and fire detection model, smoke and fire detection method and equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2000998A2 (en) * 2007-05-31 2008-12-10 Industrial Technology Research Institute Flame detecting method and device
CN104504382A (en) * 2015-01-13 2015-04-08 东华大学 Flame identifying algorithm based on image processing technologies
CN104809463A (en) * 2015-05-13 2015-07-29 大连理工大学 High-precision fire flame detection method based on dense-scale invariant feature transform dictionary learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2000998A2 (en) * 2007-05-31 2008-12-10 Industrial Technology Research Institute Flame detecting method and device
CN104504382A (en) * 2015-01-13 2015-04-08 东华大学 Flame identifying algorithm based on image processing technologies
CN104809463A (en) * 2015-05-13 2015-07-29 大连理工大学 High-precision fire flame detection method based on dense-scale invariant feature transform dictionary learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Evaluation of Fire Detection Algorithms; LI Jian et al.; Fire Safety Science; 2005-07-31; Vol. 14, No. 3; pp. 144-149 *
Fast flame detection applying GMM; TANG Yanyan et al.; Computer Science; 2012-11-30; Vol. 39, No. 11; pp. 283-285, 297 *

Also Published As

Publication number Publication date
CN105139429A (en) 2015-12-09

Similar Documents

Publication Publication Date Title
CN105139429B (en) Fire detection method based on flame saliency map and spatial pyramid histogram
CN103871029B (en) A kind of image enhaucament and dividing method
CN110516609B (en) Fire disaster video detection and early warning method based on image multi-feature fusion
CN103942557B (en) A kind of underground coal mine image pre-processing method
CN107016357B (en) Video pedestrian detection method based on time domain convolutional neural network
CN105404847B (en) A kind of residue real-time detection method
CN109409242B (en) Black smoke vehicle detection method based on cyclic convolution neural network
CN113537099B (en) Dynamic detection method for fire smoke in highway tunnel
CN111695514B (en) Vehicle detection method in foggy days based on deep learning
CN107330390B (en) People counting method based on image analysis and deep learning
CN105975929A (en) Fast pedestrian detection method based on aggregated channel features
CN108022258B (en) Real-time multi-target tracking method based on single multi-frame detector and Kalman filtering
CN110490043A (en) A kind of forest rocket detection method based on region division and feature extraction
Ahmad et al. Overhead view person detection using YOLO
CN110874592A (en) Forest fire smoke image detection method based on total bounded variation
CN111368660A (en) Single-stage semi-supervised image human body target detection method
CN111860143B (en) Real-time flame detection method for inspection robot
CN104732543A (en) Infrared weak small target fast detecting method under desert and gobi background
CN111915558A (en) Pin state detection method for high-voltage transmission line
CN105469054A (en) Model construction method of normal behaviors and detection method of abnormal behaviors
CN113688830A (en) Deep learning target detection method based on central point regression
CN111461076A (en) Smoke detection method and smoke detection system combining frame difference method and neural network
CN109325426B (en) Black smoke vehicle detection method based on three orthogonal planes time-space characteristics
Bai et al. Moving object detection based on adaptive loci frame difference method
CN107729811B (en) Night flame detection method based on scene modeling

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180313

CF01 Termination of patent right due to non-payment of annual fee