CN110298297B - Flame identification method and device - Google Patents

Flame identification method and device

Info

Publication number
CN110298297B
CN110298297B (application CN201910558817.3A; publication CN110298297A)
Authority
CN
China
Prior art keywords
flame
block
candidate
frame
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910558817.3A
Other languages
Chinese (zh)
Other versions
CN110298297A (en)
Inventor
雷帮军
陈鹏
徐光柱
王峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Jiugan Technology Co ltd
China Three Gorges University CTGU
Original Assignee
Hubei Jiugan Technology Co ltd
China Three Gorges University CTGU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Jiugan Technology Co ltd, China Three Gorges University CTGU filed Critical Hubei Jiugan Technology Co ltd
Priority to CN201910558817.3A priority Critical patent/CN110298297B/en
Publication of CN110298297A publication Critical patent/CN110298297A/en
Application granted granted Critical
Publication of CN110298297B publication Critical patent/CN110298297B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a flame identification method and a device, wherein the method comprises the following steps: performing color detection and morphological change detection on any frame of image to obtain a candidate flame region of the frame image; dividing the frame image based on its candidate flame region to obtain candidate flame blocks of the frame image; inputting any candidate flame block of the frame image into a flame characteristic analysis model, acquiring a flame identification mark output by the flame characteristic analysis model, and taking the candidate flame block whose flame identification mark indicates flame as a preferred flame block; acquiring an optical flow histogram of any preferred flame block of the frame image, and acquiring the entropy of the optical flow histogram; and acquiring a flame identification result based on the entropy of the preferred flame block. The method and the device provided by the embodiment of the invention improve the accuracy and the reliability of flame identification.

Description

Flame identification method and device
Technical Field
The invention relates to the technical field of computer vision, in particular to a flame identification method and device.
Background
Fire is one of the most common disasters in daily life, and often causes huge loss of lives and properties of people. The prevention and detection of fires has been a focus of attention in human beings in combating fires.
The data and information collected by a traditional fire detector are limited, and the detector is affected by factors such as the height of the space, dust, airflow speed and corrosive environments, so false alarms, delayed alarms, missed alarms and outright failures occur easily. Its sensitivity and reliability still need to be improved, and because it cannot react to the initial signals of a fire it no longer meets increasingly stringent fire safety requirements.
With the development of computer vision technology, flame image recognition technology has emerged. Flame image identification is a new fire detection method based on digital image processing and analysis. A camera monitors the scene, the captured continuous images are converted into digital images by an image acquisition card and fed into a computer, and the digital images are processed and analyzed according to the main image characteristics of flame, such as shape and color, so as to detect fire. Flame image recognition technology directly addresses the main weaknesses of conventional fire detection technology and has therefore attracted wide attention.
However, a flame image cannot be effectively identified by color detection or shape detection alone, so how to improve the reliability and accuracy of flame identification remains a problem to be solved.
Disclosure of Invention
The embodiment of the invention provides a flame identification method and device, which are used for solving the problems of low reliability and accuracy of the existing flame identification.
In a first aspect, an embodiment of the present invention provides a flame identification method, including:
performing color detection and morphological change detection on any frame of image to obtain a candidate flame region of any frame of image;
dividing any frame image based on the candidate flame region of any frame image to obtain candidate flame blocks of any frame image;
inputting any candidate flame block of any frame image into a flame characteristic analysis model, acquiring a flame identification mark output by the flame characteristic analysis model, and taking the candidate flame block whose flame identification mark indicates flame as a preferred flame block; the flame characteristic analysis model is obtained based on sample flame blocks and sample flame identification marks;
acquiring an optical flow histogram of any preferable flame block of any frame image, and acquiring entropy of the optical flow histogram;
and acquiring a flame identification result based on the entropy of any preferable flame block.
In a second aspect, an embodiment of the present invention provides a flame identification device, including:
the color and form detection unit is used for carrying out color detection and form change detection on any frame of image and obtaining a candidate flame area of any frame of image;
the candidate flame block generating unit is used for dividing any frame image based on the candidate flame region of the any frame image to obtain candidate flame blocks of the any frame image;
the flame characteristic acquisition unit is used for inputting any candidate flame block of any frame image into the flame characteristic analysis model, acquiring a flame identification mark output by the flame characteristic analysis model, and taking the candidate flame block whose flame identification mark indicates flame as a preferred flame block; the flame characteristic analysis model is obtained based on sample flame blocks and sample flame identification marks;
an entropy obtaining unit, configured to obtain an optical flow histogram of any one of the preferred flame blocks of any one of the frame images, and obtain entropy of the optical flow histogram;
and the identification unit is used for acquiring a flame identification result based on the entropy of any preferable flame block.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory, and a bus, where the processor, the communication interface, and the memory are in communication with each other through the bus, and the processor may invoke logic instructions in the memory to perform the steps of the method as provided in the first aspect.
In a fourth aspect, embodiments of the present invention provide a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method as provided by the first aspect.
According to the flame identification method and device provided by the embodiment of the invention, the static color and the form of the flame are respectively analyzed, the flame characteristics are analyzed through the flame characteristic analysis model on the basis, the entropy of the optical flow histogram is obtained to detect the dynamic irregular movement, finally, the flame identification result is obtained, and the accuracy and the reliability of the flame identification are improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a flame identification method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a candidate flame block according to an embodiment of the present invention;
FIG. 3 is a flow chart of a flame identification method according to another embodiment of the present invention;
FIG. 4 is a schematic diagram of a flame identification device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The flame image cannot be effectively identified simply by means of color detection or shape detection, and therefore the embodiment of the invention provides a flame identification method. Fig. 1 is a schematic flow chart of a flame identification method according to an embodiment of the present invention, as shown in fig. 1, the method includes:
and 110, performing color detection and morphological change detection on any frame of image to obtain a candidate flame region of the frame of image.
Specifically, the frame image is one frame image in the video acquired for fire monitoring. After any frame of image is obtained, carrying out color detection on the frame of image, and selecting pixels which are possibly flame in the frame of image from the image colors; and simultaneously, detecting the morphological change of the frame image, comparing the morphological change with the previous frame or the previous multi-frame image of the frame image in the video stream, and further selecting the pixels which are possibly flame in the frame image from the change of the pixel values. And combining the color detection result and the morphological change detection result to obtain the candidate flame region of the frame image.
Here, the flame candidate region is constituted by pixels in the frame image each of which is judged to be likely to be a flame by color detection and morphological change detection.
Step 120, based on the candidate flame region of any frame of image, dividing the frame of image, and obtaining the candidate flame block of the frame of image.
Specifically, after the candidate flame region is obtained, the frame image is segmented according to a preset segmentation rule, and among the segmented blocks, the block with the overlap ratio of the block to the candidate flame region being larger than a preset overlap ratio threshold is used as the candidate flame block. It should be noted that, for any frame of image, one or more candidate flame blocks may be provided.
Step 130, inputting any candidate flame block of any frame of image into a flame characteristic analysis model, obtaining a flame identification mark output by the flame characteristic analysis model, and taking the candidate flame block whose flame identification mark indicates flame as a preferred flame block; the flame characteristic analysis model is obtained based on sample flame blocks and sample flame identification marks.
Specifically, the flame characteristic analysis model is a model for analyzing flame characteristics contained in an input candidate flame block, thereby identifying whether or not a flame exists in the candidate flame block. Here, the flame characteristic may be at least one of an area change rate, a circularity, a red-green component ratio, and an offset of the flame. Flame identification marks are either flame present or flame not present.
After the flame identification mark is obtained, if the flame identification mark of any candidate flame block indicates flame, the candidate flame block is taken as a preferred flame block. Here, a preferred flame block is a candidate flame block that the flame characteristic analysis model has marked as containing flame.
In addition, before executing step 130, the flame characteristic analysis model may be trained in advance, specifically, may be trained as follows: firstly, collecting a large number of sample flame blocks and sample flame identification marks thereof; the sample flame block is a candidate flame block selected from a large number of frame images of the fire monitoring video, and the sample flame identification mark is obtained by analyzing flame characteristics of the sample flame block by researchers. And training the initial model based on the sample flame block and the sample flame identification mark thereof, thereby obtaining a flame characteristic analysis model. The initial model may be a single neural network model or a combination of multiple neural network models, and the embodiment of the invention does not specifically limit the type and structure of the initial model.
Step 140, acquiring an optical flow histogram of any preferred flame block of any frame of image, and acquiring entropy of the optical flow histogram.
Specifically, optical flow represents the motion speed of the pixels corresponding to a moving object in an image sequence. Using the correspondence between each frame and its adjacent frames, the motion information of the object is computed to obtain the temporal change of every pixel in the sequence. The optical flow therefore reflects the instantaneous velocity of the moving object's pixels in the two-dimensional image plane, i.e. both the speed and the direction of motion.
The optical flow histogram is used as the characteristic of the preferable flame block in the frame image, the motion information of each pixel in the preferable flame block can be reflected to a certain extent, and entropy is calculated on the optical flow histogram, so that whether irregular motion exists in the corresponding preferable flame block can be obtained.
And step 150, acquiring a flame identification result based on the entropy of the preferable flame block.
Specifically, based on the entropy of the optical flow histogram obtained in step 140, a flame identification result may be obtained. Here, the flame recognition result may be used to indicate whether the corresponding preferred flame block is a flame block, i.e., whether a flame exists in the corresponding preferred flame block. Based on the above, whether flames exist in any frame image can be obtained according to the flame identification result of each preferable flame block in the frame image, and whether fire exists is judged according to whether flames exist in a plurality of continuous frame images.
According to the method provided by the embodiment of the invention, the static color and the form of the flame are respectively analyzed, the flame characteristics are analyzed through the flame characteristic analysis model on the basis, the entropy of the optical flow histogram is obtained to detect the dynamic irregular movement, and finally the flame identification result is obtained, so that the accuracy and the reliability of the flame identification are improved.
Based on the above embodiment, in the method, step 110 specifically includes:
step 111, color balance is performed on any frame of image.
Specifically, color balance adjusts the estimated luminance and chrominance with scale factors. Assuming the frame image has a reasonably rich color distribution, it can be color-balanced by a light-source estimation method such as the gray world algorithm (Gray World Algorithm).
For example, the average intensity of each of the R, G and B planes of the frame image is computed, and the vector of these three averages is taken as the illuminant estimate of the image. Each of the R, G and B planes is then scaled independently by a multiplicative factor that maps its average intensity to the common gray value.
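By way of illustration, a minimal Python/NumPy sketch of the gray world balancing described above might look as follows; the function name and the choice of the common gray level (the mean of the three channel averages) are assumptions for illustration, not details fixed by the embodiment.

```python
import cv2
import numpy as np

def gray_world_balance(frame_bgr: np.ndarray) -> np.ndarray:
    """Scale each color plane so that its mean matches a common gray level (a sketch)."""
    frame = frame_bgr.astype(np.float32)
    b_mean, g_mean, r_mean = (frame[..., c].mean() for c in range(3))
    gray = (b_mean + g_mean + r_mean) / 3.0              # assumed common gray level
    gains = [gray / m if m > 0 else 1.0 for m in (b_mean, g_mean, r_mean)]
    for c, gain in enumerate(gains):
        frame[..., c] *= gain                            # independent per-plane scaling
    return np.clip(frame, 0, 255).astype(np.uint8)

# usage: balanced = gray_world_balance(cv2.imread("frame.jpg"))
```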
Step 112, marking each motion pixel in the frame image by a dynamic background model based on the frame image and the previous two frame images of the frame image.
Specifically, assuming that a vision sensor for fire detection is fixed at a certain position and direction, keeping the field of view and the background scene fixed, the motion pixels in each frame can be marked by a dynamic background model, and then the segmentation of the motion region is realized.
Here, assuming that the current frame image is the t-th frame, the intensity of any pixel (x, y) in the t-th frame image is represented by I (x, y, t), and the estimated background intensity of any pixel (x, y) in the t-th frame image is represented as B (x, y, t). To determine if any pixel (x, y) is moving, D1 (x, y, t), D2 (x, y, t), and D3 (x, y, t) are first calculated:
wherein I (x, y, t-1) and I (x, y, t-2) are the intensities of the pixels (x, y) in the t-1 th frame image and the t-2 th frame image, respectively.
Let F(x, y, t) be a binary image indicating whether pixel (x, y) is moving significantly in frame t; F(x, y, t) is obtained by comparing the above difference values with an adaptive motion threshold T(x, y, t).
It should be noted that the estimated background intensity B(x, y, t) and the adaptive motion threshold T(x, y, t) are dynamically updated according to the following update rules:
where α is a preset update parameter, and α is a positive real number.
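The exact difference terms and update rules are given by the formulas above, which are not reproduced here. A minimal sketch of this kind of adaptive background model is given below, under the assumptions that D1, D2 and D3 are absolute differences against the two previous frames and the estimated background, and that a running-average update with parameter α is used; these specific forms are illustrative assumptions, not the patented formulas.

```python
import numpy as np

class DynamicBackground:
    """Per-pixel background estimate B and adaptive motion threshold T (illustrative sketch)."""

    def __init__(self, first_gray: np.ndarray, alpha: float = 0.05, init_thresh: float = 25.0):
        self.B = first_gray.astype(np.float32)           # estimated background intensity B(x, y, t)
        self.T = np.full_like(self.B, init_thresh)       # adaptive motion threshold T(x, y, t)
        self.alpha = alpha                               # update parameter alpha
        self.prev = [self.B.copy(), self.B.copy()]       # I(t-1), I(t-2)

    def update(self, gray: np.ndarray) -> np.ndarray:
        I = gray.astype(np.float32)
        d1 = np.abs(I - self.prev[0])                    # assumed D1: difference to frame t-1
        d2 = np.abs(I - self.prev[1])                    # assumed D2: difference to frame t-2
        d3 = np.abs(I - self.B)                          # assumed D3: difference to background
        F = ((d1 > self.T) & (d2 > self.T) & (d3 > self.T)).astype(np.uint8)  # binary motion map

        still = F == 0
        # assumed running-average update of B and T on non-moving pixels
        self.B[still] = (1 - self.alpha) * self.B[still] + self.alpha * I[still]
        self.T[still] = (1 - self.alpha) * self.T[still] + self.alpha * (5.0 * np.abs(I - self.B))[still]
        self.prev = [I, self.prev[0]]
        return F
```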
Step 113, performing color filtering on each motion pixel in the frame image to obtain the candidate flame region.
Specifically, a pixel with F(x, y, t) = 1 in step 112, i.e. a motion pixel found by the dynamic background model, may belong to the candidate flame region; color filtering is then used to further determine whether it really is a pixel of the candidate flame region, i.e. a candidate flame pixel.
The color of a flame is typically in the red-yellow range. During color filtering, a filter is set up to select pixels whose hue lies in a preset range of the HSI (Hue, Saturation, Intensity) color space. The filter keeps only pixels whose hue lies between 0 and 60 degrees (the red-yellow range) and excludes all others. For a pixel that passes the filter, let R, G and B denote its red, green and blue channel intensities; such a pixel satisfies R ≥ G and G ≥ B.
To prevent the selection of red pixels with low brightness and low saturation, the filter further removes such pixels. For the red channel intensity of a pixel, a red intensity threshold R_T is preset and the pixel must satisfy R ≥ R_T, where R_T is an empirical threshold for the red level determined by calibration on training videos.
For saturation, a filtering rule based on a predetermined saturation threshold S_T is applied.
In addition, the filter also requires the pixel intensity to exceed a preset intensity threshold I_T.
In summary, each motion pixel is color filtered, and the motion pixels satisfying the color filtering condition are used as candidate flame pixels, and each candidate flame pixel forms a candidate flame region.
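A hedged Python/OpenCV sketch of this color filter follows. The hue window of 0-60 degrees and the ordering R ≥ G ≥ B come from the description above; the concrete values of R_T, S_T and I_T are placeholders to be calibrated on training video, and OpenCV's HSV value channel stands in for the HSI intensity.

```python
import cv2
import numpy as np

def flame_color_mask(frame_bgr, motion_mask, r_t=150, s_t=60, i_t=100):
    """Keep motion pixels whose color matches the flame rules (thresholds are placeholders)."""
    b, g, r = cv2.split(frame_bgr.astype(np.int32))
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)     # OpenCV hue range is 0..179
    h, s, v = cv2.split(hsv)

    hue_ok = h <= 30                                     # 0-60 degrees maps to 0-30 in OpenCV units
    order_ok = (r >= g) & (g >= b)                       # R >= G >= B
    red_ok = r >= r_t                                    # red intensity threshold R_T
    sat_ok = s >= s_t                                    # saturation threshold S_T
    int_ok = v >= i_t                                    # intensity threshold I_T (V as stand-in)

    keep = motion_mask.astype(bool) & hue_ok & order_ok & red_ok & sat_ok & int_ok
    return keep.astype(np.uint8)                         # 1 marks a candidate flame pixel
```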
Based on any of the above embodiments, in the method, step 120 specifically includes:
step 121, dividing any frame image into a plurality of block areas.
Specifically, the frame image is divided according to a preset division rule. For example, a frame image is divided into 8×6 small blocks, each small block being a block area.
Step 122, obtaining the coincidence of any block area and the candidate flame area.
Specifically, after the image segmentation is completed, the overlap ratio of the block area and the candidate flame area is calculated for any block. Before the coincidence degree calculation, the candidate flame region in the frame image can be enhanced, and the candidate flame region can be extracted based on a background difference method so as to accurately calculate the coincidence degree.
Here, the enhancement processing may be realized as follows: and scanning each block area by using a 3x3 square template, calculating the difference between the central pixel of any block area and surrounding pixel points, and multiplying the gray value of the central pixel point by a preset enhancement coefficient if the difference is larger than a preset threshold value so as to enhance the gray value of the pixel, otherwise, keeping the gray value of the pixel point unchanged. By processing each block by the method, the image enhancement can be realized, the problem that the boundaries of all areas in the frame image are not obvious is solved, and the segmentation effect is greatly improved.
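The 3x3 enhancement step can be sketched as follows. The enhancement coefficient and the difference threshold are not specified in the text and are therefore placeholders, and comparing each center pixel with the mean of its eight neighbours is one plausible reading of "the difference between the central pixel and surrounding pixel points".

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance_block(gray_block, diff_thresh=15.0, gain=1.2):
    """Boost pixels that differ strongly from their 3x3 neighbourhood (placeholder parameters)."""
    g = gray_block.astype(np.float32)
    neigh_mean = (uniform_filter(g, size=3) * 9 - g) / 8.0   # mean of the 8 surrounding pixels
    diff = np.abs(g - neigh_mean)
    out = np.where(diff > diff_thresh, g * gain, g)          # amplify only strongly differing centres
    return np.clip(out, 0, 255).astype(np.uint8)
```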
The background differencing method can be implemented as follows: and carrying out differential operation on any frame of image and the corresponding background image to obtain a gray level image, and carrying out thresholding on the gray level image to extract a motion region. Here, in order to avoid the influence of the change in ambient light, the background image is updated according to the corresponding frame image. If the influence of noise n (x, y, t) is not considered, the single block region I (x, y, t) can be regarded as being composed of the background image b (x, y, t) and the candidate flame region m (x, y, t):
I(x,y,t)=b(x,y,t)+m(x,y,t)
candidate flame regions m (x, y, t) are thus obtained:
m(x,y,t)=I(x,y,t)-b(x,y,t)
in practice, however, the above equation does not lead to a true candidate flame region due to noise effects, but rather a differential image d (x, y, t) consisting of the candidate flame region and noise, namely:
d(x,y,t)=I(x,y,t)-b(x,y,t)+n(x,y,t)
and step 123, if the overlap ratio is greater than the preset overlap threshold, confirming the block area as a candidate flame block.
Specifically, fig. 2 is a schematic diagram of a candidate flame block provided in an embodiment of the present invention, as shown in fig. 2, the frame image is divided into 8×6 block areas, where the block areas A, B, C and D are candidate flame blocks.
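A minimal sketch of the 8x6 block segmentation and overlap test is given below. The overlap ratio is taken here as the fraction of a block's pixels that belong to the candidate flame mask, which is one plausible reading of the coincidence degree; the grid size and threshold are parameters of the sketch.

```python
import numpy as np

def candidate_flame_blocks(flame_mask, grid=(8, 6), overlap_thresh=0.1):
    """Return (row, col) indices of grid blocks whose overlap with the mask exceeds the threshold."""
    h, w = flame_mask.shape
    cols, rows = grid                                    # 8 columns x 6 rows, as in Fig. 2
    bh, bw = h // rows, w // cols
    blocks = []
    for r in range(rows):
        for c in range(cols):
            patch = flame_mask[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            overlap = patch.astype(bool).mean()          # fraction of candidate flame pixels
            if overlap > overlap_thresh:
                blocks.append((r, c))                    # this block becomes a candidate flame block
    return blocks
```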
Based on any of the above embodiments, the method further includes, before step 130: training based on the sample flame blocks and the sample flame identification marks to obtain a plurality of weak classifiers; combining the weak classifiers into a strong classifier, and taking the strong classifier as a flame characteristic analysis model.
Specifically, the flame characteristic analysis model is obtained by an Adaboost-based sample learning method. Adaboost is an iterative algorithm whose main idea is to train component classifiers, adaptively re-weight the samples according to each classifier's output (assigning larger weights to misclassified samples), iterate repeatedly, and finally combine the classifiers by weighted voting, according to their performance, into the final decision. The Adaboost algorithm is trained on the sample flame blocks. The aim of training is to obtain, for each sample, its category representation, i.e. the flame identification mark, which can be expressed by a function called a "hypothesis", denoted H: X → Y. Each round of learning produces one hypothesis, called a weak classifier, and all the learned weak classifiers are finally combined into the final discriminant function, the strong classifier, denoted H(x). The training samples are defined as (x_1, y_1), (x_2, y_2), …, (x_n, y_n), where x_i is the feature observation of a sample, i.e. a sample flame block, belonging to the target feature space X, and y_i is the category label of x_i, i.e. the sample flame identification mark, satisfying y_i = f(x_i), where f is the target function the classifier is to learn. The initial sample weights are w(i) = 1/n, i = 1, 2, …, n, so that they sum to 1. After training, each weak classifier outputs a prediction h: X → {−1, +1}, and the set of hypotheses produced by all the weak classifiers is combined into the strong classifier. The strong classifier satisfies the property that, for an arbitrarily small ε and any sample distribution w, it outputs a classification hypothesis whose error is at most ε with probability at least 1 − δ (0 < δ < 0.5).
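As a sketch of the Adaboost-based training described above, scikit-learn's AdaBoostClassifier can play the role of combining weak classifiers (decision stumps by default) into a strong classifier. The feature vector below, with the four flame features named earlier (area change rate, circularity, red-green ratio, offset), and the random placeholder data are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# X: one row per sample flame block, e.g. [area_change_rate, circularity, rg_ratio, offset]
# y: sample flame identification marks, +1 = flame, -1 = no flame
X = np.random.rand(200, 4)                       # placeholder features for illustration
y = np.where(X[:, 2] > 0.5, 1, -1)               # placeholder labels for illustration

# weak classifiers default to decision stumps; boosting re-weights misclassified samples each round
strong_classifier = AdaBoostClassifier(n_estimators=50)
strong_classifier.fit(X, y)

flame_mark = strong_classifier.predict(X[:1])    # +1 means the block is taken as a preferred flame block
```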
Based on any one of the above embodiments, the method in step 140 specifically includes:
and step 141, performing foreground detection on any preferable flame block based on the multi-Gaussian model to obtain a foreground region of the preferable flame block.
Specifically, to detect moving foreground objects appearing in the scene, a background modeling technique based on a mixture of Gaussians is employed. When processing a color image, the three color channels R, G, B of an image pixel are assumed to be mutually independent and to have the same variance. For a random variable X with observation dataset {X_1, X_2, …, X_N}, where X_t = (r_t, g_t, b_t) is the sample of the pixel at time t, a single sample point X_t obeys a mixture-of-Gaussians probability density function:
where k is the total number of distribution components, η(x_t, μ_{i,t}, τ_{i,t}) is the i-th Gaussian distribution at time t, μ_{i,t} is its mean, τ_{i,t} is its covariance matrix, δ_{i,t} is its variance, I is the three-dimensional identity matrix, and w_{i,t} is the weight of the i-th Gaussian distribution at time t. The model is built progressively by processing the historical video frames. If the current pixel value does not fit the mixture-of-Gaussians model above, the pixel is regarded as a foreground pixel. The foreground region of the preferred flame block is thus obtained.
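OpenCV's built-in mixture-of-Gaussians background subtractor can stand in for the multi-Gaussian model above; it is not the exact formulation of this embodiment, only a readily available approximation, and the (x, y, w, h) block rectangle is an assumed representation of a preferred flame block.

```python
import cv2

# A per-pixel mixture-of-Gaussians background model maintained over the whole frame,
# used as an off-the-shelf stand-in for the multi-Gaussian model described above.
mog = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16, detectShadows=False)

def foreground_of_block(frame_bgr, block_rect):
    """Update the background model with the full frame, then crop the foreground of one block."""
    fg_full = mog.apply(frame_bgr)           # 255 where the pixel does not fit its Gaussian mixture
    x, y, w, h = block_rect                  # assumed (x, y, w, h) rectangle of a preferred flame block
    return (fg_full[y:y + h, x:x + w] > 0).astype("uint8")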
Step 142, performing differential detection on the preferred flame block to obtain a moving area of the preferred flame block.
Specifically, for any pixel point X_t of the preferred flame block, its gray value in the current frame is compared with its gray value in the reference frame; whether the pixel has moved is judged by whether the gray-level difference exceeds a threshold, according to the following judgment formula:
wherein D is a threshold for motion determination, T is a time difference between the reference frame and the current frame, and the value of T may be selected in association with the pixel value.
It is thus possible to determine whether or not each pixel in the preferred flame block is shifted, and thus to construct a shift region based on each shifted pixel.
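A sketch of this difference-based motion test, assuming grayscale blocks and treating D as a tunable parameter (the reference frame being the frame T time steps back):

```python
import cv2
import numpy as np

def moving_region(block_gray_now, block_gray_ref, D=20):
    """Mark pixels whose gray-level change against the reference frame exceeds the threshold D."""
    diff = cv2.absdiff(block_gray_now, block_gray_ref)
    return (diff > D).astype(np.uint8)       # 1 marks a moved pixel of the moving region
```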
Step 143, performing optical flow field calculation on each pixel in the preferred flame block to obtain an optical flow of each pixel.
Specifically, for any pixel in the preferred flame block, the optical flow of the pixel with respect to the reference frame, i.e., the history frame image having a frame time difference T from the current frame image, is calculated. The embodiment of the invention adopts bidirectional optical flow calculation, namely, from a reference frame to a current frame and from the current frame to the reference frame.
In addition, to improve computational efficiency, the embodiment of the invention makes the following improvements to the optical flow calculation: (1) the simplest incremental local fast search is adopted; (2) starting from an offset of (0, 0), neighbouring points are searched only when the comparison at the current offset exceeds a certain threshold; and (3) based on the motion-smoothness assumption, the search step size is varied during the search.
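The embodiment's own incremental local search is not spelled out in full, so the sketch below substitutes OpenCV's dense Farneback optical flow, computed in both directions (reference to current and current to reference) as described; treat it as an illustrative stand-in rather than the patented search scheme.

```python
import cv2

def bidirectional_flow(ref_gray, cur_gray):
    """Dense optical flow in both directions; each result is an (H, W, 2) array of (dx, dy)."""
    params = dict(pyr_scale=0.5, levels=3, winsize=15,
                  iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    forward = cv2.calcOpticalFlowFarneback(ref_gray, cur_gray, None, **params)
    backward = cv2.calcOpticalFlowFarneback(cur_gray, ref_gray, None, **params)
    return forward, backward
```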
Step 144, obtaining a fusion foreground of the preferred flame block based on the foreground region, the moving region and the optical flow of each pixel of the preferred flame block.
Specifically, foreground computation based on background modeling is easily affected by illumination changes, while difference-based motion detection often cannot capture the changes inside a moving region. The embodiment of the present invention therefore combines the detection results: the foreground region detected in step 141 is taken as the basis, assisted by the change judgment obtained from the optical flow in step 143, and the motion region obtained by differencing in step 142 is used as the boundary, yielding the fusion foreground of the preferred flame block. Here, the fusion foreground is the foreground obtained by this fusion.
Step 145, obtaining an optical flow histogram of the fusion foreground.
Specifically, 8-neighbourhood connected-component computation is performed on the fusion foreground to obtain the foreground blocks of the preferred flame block. For each foreground block Q with its block center given, the optical flow histogram is divided into m directions; x_qi ∈ Q (i = 1 … n) are all the pixels belonging to block Q, and each pixel has an optical flow component in direction j (j = 1 … m). The optical flow component of block Q in direction j is obtained by accumulating, over all pixels of Q, their optical flow components in that direction.
The optical flow histogram is obtained from these per-direction components.
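A sketch of the per-block optical flow histogram: foreground blocks come from 8-connected components, and for each block the flow is accumulated into m direction bins. The bin count m and the magnitude-weighted accumulation are assumptions of this sketch.

```python
import cv2
import numpy as np

def optical_flow_histograms(fused_foreground, flow, m=8):
    """One m-bin histogram per connected foreground block, accumulating flow magnitude by direction."""
    # default connectivity of connectedComponents is 8, matching the 8-neighbourhood computation
    n_labels, labels = cv2.connectedComponents(fused_foreground.astype(np.uint8))
    dx, dy = flow[..., 0], flow[..., 1]
    angle = np.arctan2(dy, dx) + np.pi                   # direction in 0 .. 2*pi
    bins = np.minimum((angle / (2 * np.pi) * m).astype(int), m - 1)
    magnitude = np.sqrt(dx ** 2 + dy ** 2)

    histograms = []
    for q in range(1, n_labels):                         # label 0 is background
        mask = labels == q
        hist = np.bincount(bins[mask], weights=magnitude[mask], minlength=m)
        histograms.append(hist)
    return histograms
```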
Step 146, obtaining the entropy of the optical flow histogram.
Specifically, after obtaining the optical flow histogram, the entropy of the optical flow histogram is calculated. And if the entropy is larger than the preset entropy threshold, confirming that irregular movement exists in the corresponding preferable flame block.
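The entropy of the normalized histogram can be computed as below; the entropy threshold is a placeholder to be calibrated, not a value given by the embodiment.

```python
import numpy as np

def histogram_entropy(hist):
    """Shannon entropy of a normalized optical flow histogram."""
    p = np.asarray(hist, dtype=np.float64)
    total = p.sum()
    p = p / total if total > 0 else p
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

ENTROPY_THRESHOLD = 2.0                      # placeholder threshold

def has_irregular_motion(hist):
    """Entropy above the threshold indicates irregular motion in the preferred flame block."""
    return histogram_entropy(hist) > ENTROPY_THRESHOLD
```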
Based on any of the above embodiments, the method in step 150 specifically includes: if the entropy of any preferable flame block is larger than a preset entropy threshold value, determining that the flame identification result is flame.
Based on any of the above embodiments, the method further comprises step 160: and acquiring a fire detection result based on flame identification results of a preset number of continuous frame images.
Specifically, the fire detection result here is either that a fire is present or that it is not.
In order to increase the reliability of each preferred flame block identified as a flame block, the detection in steps 110 to 150 must be continued for a period of time. When flame is continuously detected in each preferred flame block, that is, when the flame identification result of the same preferred flame block over a number of consecutive frame images is that flame exists, it is finally determined that a fire occurs in the video; in other words, the final output is confirmed by a persistence judgment along the time axis. To allow for chance effects, the flame identification result of a preferred flame block may be "no flame" in one or two of the consecutive frame images; as long as the frames in which flame is detected account for the great majority, the existence of a fire can still be confirmed.
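A sketch of this persistence rule: a fire is reported only when a preferred flame block is identified as flame in the great majority of a window of consecutive frames. The window length and the majority fraction are placeholders, not values specified by the embodiment.

```python
from collections import deque

class PersistenceChecker:
    """Confirm a fire only when flame is detected in most of the last N frames (placeholder N, ratio)."""

    def __init__(self, window=30, min_ratio=0.9):
        self.history = deque(maxlen=window)
        self.min_ratio = min_ratio

    def update(self, flame_detected: bool) -> bool:
        self.history.append(bool(flame_detected))
        window_full = len(self.history) == self.history.maxlen
        return window_full and sum(self.history) / len(self.history) >= self.min_ratio
```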
Based on any of the above embodiments, fig. 3 is a schematic flow chart of a flame identification method according to another embodiment of the present invention, as shown in fig. 3, the method includes:
step 310, for any frame of image in the video, performing color detection and morphological change detection, and obtaining a candidate flame region of the frame of image.
Step 320, dividing the frame image into a plurality of block areas, obtaining the coincidence degree of any block area and the candidate flame area, and if the coincidence degree is larger than a preset coincidence threshold value, confirming the block area as the candidate flame block.
Step 330, classifying the flame features in each candidate flame block by the Adaboost-based sample learning method to obtain the classification results, i.e. the flame identification marks. The candidate flame block whose flame identification mark indicates flame is taken as a preferred flame block.
Step 340, processing the preferred flame block, calculating an optical flow histogram thereof, obtaining entropy of the optical flow histogram after obtaining the optical flow histogram, and if the entropy value is greater than a preset entropy threshold value, determining that the preferred flame block has irregular motion, namely determining that flames exist in the preferred flame block.
In step 350, after the detection in steps 310 to 340 is continued for a period of time, when a flame is continuously detected for each preferable flame block, it is finally determined that a fire occurs in the video, that is, the final output result is confirmed through the continuity determination on the time axis.
According to the method provided by the embodiment of the invention, the static color and the form of the flame are respectively analyzed, the flame characteristics are analyzed through the flame characteristic analysis model on the basis, the entropy of the optical flow histogram is obtained to detect the dynamic irregular movement, and finally the flame identification result is obtained, so that the accuracy and the reliability of the flame identification are improved.
Based on any one of the above embodiments, fig. 4 is a schematic structural diagram of a flame identification device according to an embodiment of the present invention, and as shown in fig. 4, the device includes a color form detection unit 410, a candidate flame block generation unit 420, a flame characteristic acquisition unit 430, an entropy acquisition unit 440, and an identification unit 450;
the color and form detection unit 410 is configured to perform color detection and form change detection on any frame of image, and obtain a candidate flame region of the any frame of image;
the candidate flame block generating unit 420 is configured to segment any one of the frame images based on the candidate flame region of the any one of the frame images, and obtain candidate flame blocks of the any one of the frame images;
the flame characteristic obtaining unit 430 is configured to input any candidate flame block of any one of the frame images to a flame characteristic analysis model, obtain a flame identification mark output by the flame characteristic analysis model, and use the candidate flame block whose flame identification mark indicates flame as a preferred flame block; the flame characteristic analysis model is obtained based on sample flame blocks and sample flame identification marks;
the entropy obtaining unit 440 is configured to obtain an optical flow histogram of any preferred flame block of any of the frame images, and obtain entropy of the optical flow histogram;
the identification unit 450 is configured to obtain a flame identification result based on the entropy of any preferred flame block.
According to the device provided by the embodiment of the invention, the static color and the form of the flame are respectively analyzed, the flame characteristics are analyzed through the flame characteristic analysis model on the basis, the entropy of the optical flow histogram is obtained to detect the dynamic irregular movement, finally, the flame identification result is obtained, and the accuracy and the reliability of the flame identification are improved.
Based on any of the above embodiments, in the apparatus, the color morphology detection unit 410 is specifically configured to:
performing color balance on any frame image;
marking each motion pixel in any frame image through a dynamic background model based on the any frame image and the first two frame images of the any frame image;
and carrying out color filtering on each motion pixel in any frame of image to obtain a candidate flame region.
Based on any of the above embodiments, in the apparatus, the candidate flame block generating unit 420 is specifically configured to:
dividing any frame of image into a plurality of block areas;
acquiring the coincidence ratio of any block area and the candidate flame area;
and if the overlap ratio is greater than a preset overlap threshold, confirming that any block area is a candidate flame block.
Based on any of the above embodiments, the apparatus further comprises a model training unit; the model training unit is used for:
training to obtain a plurality of weak classifiers based on the sample flame blocks and the sample flame identification marks;
combining the weak classifiers into a strong classifier, and taking the strong classifier as the flame characteristic analysis model.
Based on any of the above embodiments, in the apparatus, the entropy obtaining unit 440 is specifically configured to:
performing foreground detection on any one of the preferred flame blocks based on a multi-Gaussian model to obtain a foreground region of the any one of the preferred flame blocks;
performing differential detection on any preferable flame block to obtain a moving area of the any preferable flame block;
performing optical flow field calculation on each pixel in any preferable flame block to obtain an optical flow of each pixel;
acquiring a fusion foreground of any preferred flame block based on the foreground region, the moving region and the optical flow of each pixel of the any preferred flame block;
acquiring an optical flow histogram of the fusion foreground;
and obtaining entropy of the optical flow histogram.
Based on any of the above embodiments, in the apparatus, the identifying unit 450 is specifically configured to:
and if the entropy of any preferable flame block is larger than a preset entropy threshold value, determining that the flame identification result is flame.
Based on any of the above embodiments, the apparatus further comprises a fire detection unit; the fire detection unit is used for:
and acquiring a fire detection result based on the flame identification result of the preset number of continuous frame images.
Fig. 5 is a schematic entity structure diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 5, the electronic device may include: a processor (processor) 501, a communication interface (Communications Interface) 502, a memory (memory) 503 and a communication bus 504, wherein the processor 501, the communication interface 502, and the memory 503 communicate with each other via the communication bus 504. The processor 501 may invoke a computer program stored in the memory 503 and executable on the processor 501 to perform the flame identification method provided in the above embodiments, for example, including: performing color detection and morphological change detection on any frame of image to obtain a candidate flame region of any frame of image; dividing any frame image based on the candidate flame region of any frame image to obtain candidate flame blocks of any frame image; inputting any candidate flame block of any frame image into a flame characteristic analysis model, acquiring a flame identification mark output by the flame characteristic analysis model, and taking the candidate flame block whose flame identification mark indicates flame as a preferred flame block; the flame characteristic analysis model is obtained based on sample flame blocks and sample flame identification marks; acquiring an optical flow histogram of any preferred flame block of any frame image, and acquiring the entropy of the optical flow histogram; and acquiring a flame identification result based on the entropy of the preferred flame block.
Further, the logic instructions in the memory 503 described above may be implemented in the form of software functional units and may be stored in a computer readable storage medium when sold or used as a stand alone product. Based on such understanding, the technical solution of the embodiments of the present invention may be embodied in essence or a part contributing to the prior art or a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Embodiments of the present invention also provide a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the flame identification method provided in the above embodiments, for example, including: performing color detection and morphological change detection on any frame of image to obtain a candidate flame region of any frame of image; dividing any frame image based on the candidate flame region of any frame image to obtain candidate flame blocks of any frame image; inputting any candidate flame block of any frame image into a flame characteristic analysis model, acquiring a flame identification mark output by the flame characteristic analysis model, and taking the candidate flame block whose flame identification mark indicates flame as a preferred flame block; the flame characteristic analysis model is obtained based on sample flame blocks and sample flame identification marks; acquiring an optical flow histogram of any preferred flame block of any frame image, and acquiring the entropy of the optical flow histogram; and acquiring a flame identification result based on the entropy of the preferred flame block.
The system embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A flame identification method, comprising:
performing color detection and morphological change detection on any frame of image to obtain a candidate flame region of any frame of image;
dividing any frame image based on the candidate flame region of any frame image to obtain candidate flame blocks of any frame image;
inputting any candidate flame block of any frame image into a flame characteristic analysis model, acquiring a flame identification mark output by the flame characteristic analysis model, and taking the candidate flame block whose flame identification mark indicates flame as a preferred flame block; the flame characteristic analysis model is obtained based on sample flame blocks and sample flame identification marks;
acquiring an optical flow histogram of any one of the preferable flame blocks of any one of the frame images, and acquiring entropy of the optical flow histogram;
acquiring a flame identification result based on the entropy of any preferable flame block;
the acquiring an optical flow histogram of any one of the preferable flame blocks of any one of the frame images, and acquiring entropy of the optical flow histogram, specifically includes:
performing foreground detection on any one of the preferred flame blocks based on a multi-Gaussian model to obtain a foreground region of the any one of the preferred flame blocks;
performing differential detection on any preferable flame block to obtain a moving area of the any preferable flame block;
performing optical flow field calculation on each pixel in any preferable flame block to obtain an optical flow of each pixel;
acquiring a fusion foreground of any preferred flame block based on the foreground region, the moving region and the optical flow of each pixel of the any preferred flame block;
acquiring an optical flow histogram of the fusion foreground;
and obtaining entropy of the optical flow histogram.
2. The flame identification method according to claim 1, wherein the performing color detection and morphological change detection on any frame of image, and obtaining the candidate flame region of any frame of image specifically comprises:
performing color balance on any frame image;
marking each motion pixel in any frame image through a dynamic background model based on the any frame image and the first two frame images of the any frame image;
and carrying out color filtering on each motion pixel in any frame of image to obtain a candidate flame region.
3. The flame identification method according to claim 1, wherein the dividing the arbitrary frame image based on the candidate flame region of the arbitrary frame image to obtain the candidate flame block of the arbitrary frame image specifically comprises:
dividing any frame of image into a plurality of block areas;
acquiring the coincidence ratio of any block area and the candidate flame area;
and if the overlap ratio is larger than a preset overlap threshold, confirming that any block area is a candidate flame block.
4. The flame identification method according to claim 1, wherein before the inputting any candidate flame block of any one of the frame images into a flame characteristic analysis model, obtaining a flame identification mark output by the flame characteristic analysis model, and taking the candidate flame block whose flame identification mark indicates flame as a preferred flame block, the method further comprises:
training to obtain a plurality of weak classifiers based on the sample flame blocks and the sample flame identification marks;
combining the weak classifiers into a strong classifier, and taking the strong classifier as the flame characteristic analysis model.
5. The flame identification method according to claim 1, wherein the obtaining a flame identification result based on the flame characteristics and entropy of any preferred flame block specifically comprises:
and if the entropy of any preferable flame block is larger than a preset entropy threshold value, determining that the flame identification result is flame.
6. The flame identification method of any of claims 1 to 5, wherein the obtaining a flame identification result based on the flame characteristics and entropy of the any preferred flame block further comprises:
and acquiring a fire detection result based on the flame identification result of the preset number of continuous frame images.
7. A flame identification device, comprising:
the color and form detection unit is used for carrying out color detection and form change detection on any frame of image and obtaining a candidate flame area of any frame of image;
the candidate flame block generating unit is used for dividing any frame image based on the candidate flame region of the any frame image to obtain candidate flame blocks of the any frame image;
the flame characteristic acquisition unit is used for inputting any candidate flame block of any frame image into the flame characteristic analysis model, acquiring a flame identification mark output by the flame characteristic analysis model, and taking the candidate flame block whose flame identification mark indicates flame as a preferred flame block; the flame characteristic analysis model is obtained based on sample flame blocks and sample flame identification marks;
an entropy obtaining unit, configured to obtain an optical flow histogram of any one of the preferred flame blocks of any one of the frame images, and obtain entropy of the optical flow histogram;
the identification unit is used for acquiring a flame identification result based on the entropy of any preferable flame block;
the entropy obtaining unit is specifically configured to:
performing foreground detection on any one of the preferred flame blocks based on a multi-Gaussian model to obtain a foreground region of the any one of the preferred flame blocks;
performing differential detection on any preferable flame block to obtain a moving area of the any preferable flame block;
performing optical flow field calculation on each pixel in any preferable flame block to obtain an optical flow of each pixel;
acquiring a fusion foreground of any preferred flame block based on the foreground region, the moving region and the optical flow of each pixel of the any preferred flame block;
acquiring an optical flow histogram of the fusion foreground;
and obtaining entropy of the optical flow histogram.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor performs the steps of the flame identification method of any of claims 1 to 6 when the program is executed.
9. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the flame identification method according to any of claims 1 to 6.
CN201910558817.3A 2019-06-26 2019-06-26 Flame identification method and device Active CN110298297B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910558817.3A CN110298297B (en) 2019-06-26 2019-06-26 Flame identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910558817.3A CN110298297B (en) 2019-06-26 2019-06-26 Flame identification method and device

Publications (2)

Publication Number Publication Date
CN110298297A CN110298297A (en) 2019-10-01
CN110298297B true CN110298297B (en) 2023-07-18

Family

ID=68028852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910558817.3A Active CN110298297B (en) 2019-06-26 2019-06-26 Flame identification method and device

Country Status (1)

Country Link
CN (1) CN110298297B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110910402B (en) * 2019-11-01 2022-07-29 武汉纺织大学 Night outdoor flame detection method
CN110866941B (en) * 2019-11-11 2022-10-25 格蠹信息科技(上海)有限公司 Flame recognition system based on visible light
CN111414514B (en) * 2020-03-19 2024-01-19 山东雷火网络科技有限公司 System and method for flame detection in Shandong Jinan environment
CN111598905A (en) * 2020-05-13 2020-08-28 云垦智能科技(上海)有限公司 Method for identifying type of blast furnace flame by using image segmentation technology
CN111814617B (en) * 2020-06-28 2023-01-31 智慧眼科技股份有限公司 Fire determination method and device based on video, computer equipment and storage medium
CN114093116A (en) * 2020-08-25 2022-02-25 中国电信股份有限公司 Method, device and system for fire detection
CN112508894B (en) * 2020-11-27 2023-10-03 江苏徐工工程机械研究院有限公司 Spraying flame flow online detection method, device and system, industrial personal computer and storage medium
CN113177467A (en) * 2021-04-27 2021-07-27 上海鹰觉科技有限公司 Flame identification method, system, device and medium
CN113159001A (en) * 2021-05-26 2021-07-23 国网信息通信产业集团有限公司 Image detection method, system, storage medium and electronic equipment
CN114566028B (en) * 2022-02-21 2024-05-07 招商蛇口数字城市科技有限公司 Electric vehicle charging risk monitoring method, device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103514430A (en) * 2012-06-29 2014-01-15 华为技术有限公司 Method and device for detecting flame
CN106650584A (en) * 2016-09-29 2017-05-10 广东安居宝数码科技股份有限公司 Fire flame detection method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI420423B (en) * 2011-01-27 2013-12-21 Chang Jung Christian University Machine vision flame identification system and method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103514430A (en) * 2012-06-29 2014-01-15 华为技术有限公司 Method and device for detecting flame
CN106650584A (en) * 2016-09-29 2017-05-10 广东安居宝数码科技股份有限公司 Fire flame detection method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张盼盼 (Zhang Panpan), "火焰与烟雾的融合特征提取和分类算法研究" (Research on fused feature extraction and classification algorithms for flame and smoke), China Master's Theses Full-text Database, Engineering Science and Technology I, 2017-02-15, pp. 1-54 *

Also Published As

Publication number Publication date
CN110298297A (en) 2019-10-01

Similar Documents

Publication Publication Date Title
CN110298297B (en) Flame identification method and device
CN107423690B (en) Face recognition method and device
KR101964397B1 (en) Information processing apparatus and information processing method
JP6330385B2 (en) Image processing apparatus, image processing method, and program
KR20160143494A (en) Saliency information acquisition apparatus and saliency information acquisition method
CN108416291B (en) Face detection and recognition method, device and system
CN110929593A (en) Real-time significance pedestrian detection method based on detail distinguishing and distinguishing
KR100572768B1 (en) Automatic detection method of human facial objects for the digital video surveillance
Liu et al. Smoke-detection framework for high-definition video using fused spatial-and frequency-domain features
JP6448212B2 (en) Recognition device and recognition method
Song et al. Background subtraction based on Gaussian mixture models using color and depth information
KR101343623B1 (en) adaptive color detection method, face detection method and apparatus
CN110910497B (en) Method and system for realizing augmented reality map
CN110084160B (en) Video forest smoke and fire detection method based on motion and brightness significance characteristics
CN111402185B (en) Image detection method and device
Jindal et al. Sign Language Detection using Convolutional Neural Network (CNN)
Zheng et al. Shadow removal for pedestrian detection and tracking in indoor environments
Sarkar et al. Universal skin detection without color information
Boroujeni et al. Robust moving shadow detection with hierarchical mixture of MLP experts
CN113780222A (en) Face living body detection method and device, electronic equipment and readable storage medium
Popa et al. Real time trajectory based hand gesture recognition
CN112949367A (en) Method and device for detecting color of work clothes based on video stream data
Jarraya et al. Adaptive moving shadow detection and removal by new semi-supervised learning technique
Das et al. A novel shadow detection method using fuzzy rule based model
Gandhi et al. Image based sign language recognition on android

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant