CN112447020B - Efficient real-time video smoke flame detection method - Google Patents
- Publication number
- CN112447020B (application CN202011478888.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- smoke
- detection frame
- feature
- cutting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B17/00—Fire alarms; Alarms responsive to explosion
- G08B17/12—Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
- G08B17/125—Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions by using a video camera to detect fire or smoke
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
Abstract
The invention discloses an efficient real-time video smoke and flame detection method with high detection accuracy and a low false-alarm rate that can run in real time on a general-purpose CPU. The method takes a picture to be predicted as input and outputs the picture with the smoke and flame regions calibrated. First, a square window of side length W slides over the input picture in steps of S pixels, computing local binary pattern texture features and gradient histogram edge features and generating a feature image whose resolution is 1/S² of the original image. The detection frame is mapped onto the feature image and then cut by tiling and by surrounding cuts to obtain a number of small rectangular blocks. Statistics such as the mean, variance, skewness, and kurtosis over these blocks are computed, combined into a feature vector, and fed into cascaded Adaboost classifiers to obtain the classification result. To further reduce the false-alarm rate, the method cascades several adaptive boosting classifiers, and the system decides smoke or flame only when every sub-classifier identifies it.
Description
Technical Field
The invention relates to the technical field of image recognition, in particular to a high-efficiency real-time video smoke flame detection method.
Background
Fire is uncontrolled combustion and poses a great threat to safety in production and daily life. In recent years, serious disasters involving casualties and property loss caused by fire have been frequent. Compared with other natural disasters, however, a fire can be contained and controlled if acted upon in time, and the earlier it is discovered, the more its losses can be reduced. Smoke is one of the important visual features appearing in the initial stage of a fire, so accurately and promptly detecting smoke in surveillance video across various environments has important practical significance.
Early fire-monitoring technology relied on sensor-based fire and smoke detection devices, generally classified into three types: light-sensing, smoke-sensing, and temperature-sensing. At present, many methods that fuse multiple sensors to identify fire have achieved good results. Yuan proposed a smoke identification method fusing gradient histogram and local binary pattern features and achieved notable recognition performance, but the algorithm targets a single application scene, and its performance degrades rapidly in more complex, dynamic environments. With the development of deep learning, video smoke identification methods based on deep learning have advanced rapidly. Unlike traditional frameworks, deep learning extracts target features from large amounts of data, yielding more accurate recognition results and rejecting much background interference. For example, Mao et al. proposed a multi-channel convolutional neural network to identify flames, using the original image as a static texture and its optical flow sequence as a dynamic texture, and fusing the two kinds of texture information into a cascaded convolutional neural network. Yin et al. proposed a deeper 14-layer convolutional neural network that speeds up training and avoids problems such as overfitting by replacing conventional convolutional layers with batch-normalized ones. Muhammad et al. proposed a GoogLeNet-like network structure and used transfer learning to improve detection accuracy while reducing convolution complexity.
However, deep learning methods make severe demands on sample size and computing resources and cannot achieve real-time detection. In a fire early-warning scenario, even a minute's delay can have grave consequences, which places a high requirement on the real-time performance of a video smoke detection algorithm.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an efficient real-time video smoke and flame detection method based on image processing technology and machine learning, applicable to fire prevention, control, and early warning in forests and factories.
The purpose of the invention is realized by the following technical scheme: an efficient real-time video smoke flame detection method comprises the following steps:
(1) collecting a plurality of images containing smoke and images without smoke from a video to form a data set;
(2) using a square window of side length W that slides over the images of the data set in steps of S pixels, computing local binary pattern texture features and gradient histogram edge features and generating a feature image whose resolution is 1/S² of the original image; setting the size of the smoke detection frame to W×H and mapping the detection frame onto the generated feature image;
(3) Cutting the feature image contained in the mapped detection frame; the specific cutting mode is as follows:
a. tiled cutting: cutting the feature image into N rectangular blocks with different aspect ratios;
b. surrounding cutting: cutting the feature image into M rectangular rings at different intervals.
Sum each cut rectangular block and rectangular ring, then compute the statistical feature quantities of mean, variance, skewness, and kurtosis over all the rectangular blocks and rectangular rings, and finally concatenate all the statistical feature quantities into a one-dimensional feature vector.
(4) cascading several Adaboost classifiers and inputting the feature vectors obtained in step (3) into them for training; the final judgment is smoke or flame only when every classifier identifies smoke or flame, i.e. smoke or flame is present in the image corresponding to the input feature vector;
(5) processing the video image to be identified with steps (2) and (3) to obtain a one-dimensional feature vector, inputting it into the cascaded Adaboost classifiers, and judging whether the video contains smoke or flame.
Further, the gradient histogram edge features are calculated as follows:

$$m(x,y)=\sqrt{g_x(x,y)^2+g_y(x,y)^2},\qquad \theta(x,y)=\arctan\frac{g_y(x,y)}{g_x(x,y)}$$

$$H_i(k)=\sum_{(x,y)\in W_i} m(x,y)\,\mathbb{1}\!\left[\theta(x,y)\in \mathrm{bin}_k\right]$$

where $m(x,y)$ is the magnitude of the gradient image, $\theta(x,y)$ its phase angle, $g_x$ the differential image in the x direction, $g_y$ the differential image in the y direction, $W_i$ the window image, and $\mathrm{bin}_k$ the index value of the gradient histogram.
Further, the local binary pattern texture features are calculated as follows:

$$LBP_p=\begin{cases}\displaystyle\sum_{i=0}^{p-1}s(g_i-g_c), & U(LBP)\le 2\\[4pt] p+1, & \text{otherwise}\end{cases},\qquad s(z)=\begin{cases}1, & z\ge 0\\ 0, & z<0\end{cases}$$

where $U(LBP)$ denotes the number of 0/1 hops in the LBP value, $g_i$ is the i-th neighbouring gray pixel value, $g_c$ the central gray pixel value, and $p$ the number of neighbouring pixels used when calculating the LBP.
After obtaining the LBP values, a statistical histogram is calculated:

$$H_i(l)=\sum_{(x,y)\in W_i}\mathbb{1}\!\left[LBP(x,y)=l\right]$$

where $W_i$ is the window image, $LBP(x,y)$ the LBP value at $(x,y)$, and $\mathrm{bin}_l$ the index value of the statistical histogram. This finally yields a feature image of dimension $\mathrm{bin}_k+\mathrm{bin}_l$ with resolution 1/S² of the original image.
Further, the smoke detection frame in step (2) slides over the image and marks the detected smoke regions. The length and width of the detection frame are scaled by 1/S and mapped onto the generated feature image; the detection frame is then moved in the horizontal and vertical directions by fixed step lengths to traverse the complete feature image.
Further, in step (3), at each step the smoke detection frame calculates the tiled-cut and surround-cut statistical feature quantities of the feature image, and the recognition result of the smoke detection frame is obtained from these statistical feature quantities.
Further, the calculation of the statistical feature quantities in step (3) proceeds as follows. First, an Integral Image of the smoke detection frame is calculated to reduce the amount of computation when summing pixel values. The feature image contained in the smoke detection frame is then cut, in different proportions in the horizontal and vertical directions, into N tiled rectangular blocks and M surrounding rectangular rings. Each cut rectangular block or rectangular ring is summed using the integral image, and the statistics among the blocks or rings are computed: mean, variance, skewness, and kurtosis. Each cutting proportion represents a different cutting mode and yields different blocks, hence different statistics; each cutting proportion therefore produces a distinct group of statistics, and the statistics from all cutting modes are combined into the final statistical features of the tiled and surrounding cuts.
Further, the specific process of training the cascaded Adaboost classifier in step (4) is as follows: set three target indexes, the overall false-alarm rate, overall recognition rate, and overall recall rate; set target indexes of accuracy and false-alarm rate for a single Adaboost classifier, train that classifier, and finish its training when its indexes reach the target values; then compute the current overall false-alarm rate, recognition rate, and recall rate and compare them with the targets. If all are better than the targets, the whole training process ends; otherwise a new Adaboost classifier is added and training continues.
Further, in the cascaded Adaboost classifiers the final output is positive only when every classifier predicts positive, in which case the prediction frame is displayed on the original image; as soon as one classifier predicts negative, the output is negative, the current computation is skipped, and classification moves to the next prediction frame.
The invention has the beneficial effects that:
1. High recognition rate: on a self-built smoke data set (35317 pictures in total, of which 13201 are smoke samples and 22116 non-smoke samples) the accuracy reaches 93% and the recall reaches 96%. Ten videos were tested, 6 with smoke and 4 without, averaging 5 minutes each at a frame rate of 25 fps, with 16254 smoke frames and 60639 non-smoke frames; the detection rate is 93.7% and the false-alarm rate 3.4%, essentially consistent with the results on the data set.
2. High operating efficiency: for an input picture of size 320×240, the average running time on an embedded device (ARM Cortex-A55 @ 1.8 GHz) is 31.3 ms, equivalent to a frame rate of 32 fps, well above the 25 fps required for real-time operation.
3. Insensitive to input scale: the method can detect pictures of any input scale, increasing the flexibility of the algorithm in application.
Drawings
FIG. 1 is a diagram of the recognition effect of the system of the present invention.
Fig. 2 is a schematic view of tiling and cutting according to the present invention.
FIG. 3 is a schematic view of the circular cutting of the present invention.
Fig. 4 is a flow chart of statistical characteristic calculation according to the present invention.
FIG. 5 is a flowchart of the overall calculation of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but do not limit it in any way. It should be noted that variations and modifications can be made by persons skilled in the art without departing from the spirit of the invention, all of which fall within its scope.
As shown in fig. 5, the efficient real-time video smoke and flame detection method provided by the present invention comprises the following specific steps:
(1) collecting a plurality of images containing smoke and images without smoke from a video to form a data set;
(2) An edge feature, the Histogram of Oriented Gradients (HOG), is computed by sliding a square window of side length W over the images of the data set in steps of S pixels; each window image yields one $\mathrm{bin}_k$-dimensional vector, calculated as follows:

$$m(x,y)=\sqrt{g_x(x,y)^2+g_y(x,y)^2},\qquad \theta(x,y)=\arctan\frac{g_y(x,y)}{g_x(x,y)}$$

$$H_i(k)=\sum_{(x,y)\in W_i} m(x,y)\,\mathbb{1}\!\left[\theta(x,y)\in \mathrm{bin}_k\right]$$

where $m(x,y)$ is the magnitude of the gradient image, $\theta(x,y)$ its phase angle, $g_x$ the differential image in the x direction, $g_y$ the differential image in the y direction, $W_i$ the window image, and $\mathrm{bin}_k$ the division value (number of bins) of the histogram, generally 8.
At the same time, the texture feature, the Local Binary Pattern (LBP), is computed:

$$LBP_p=\begin{cases}\displaystyle\sum_{i=0}^{p-1}s(g_i-g_c), & U(LBP)\le 2\\[4pt] p+1, & \text{otherwise}\end{cases},\qquad s(z)=\begin{cases}1, & z\ge 0\\ 0, & z<0\end{cases}$$

where $U(LBP)$ denotes the number of 0/1 hops in the LBP value, $g_i$ is the i-th neighbouring gray pixel value, $g_c$ the central gray pixel value, and $p$ the number of neighbouring pixels used when calculating the LBP.
After obtaining the LBP values, a statistical histogram is calculated:

$$H_i(l)=\sum_{(x,y)\in W_i}\mathbb{1}\!\left[LBP(x,y)=l\right]$$

where $W_i$ is the window image, $LBP(x,y)$ the LBP value at $(x,y)$, and $p$ the division value of the histogram. This calculation yields a Feature Map of dimension $\mathrm{bin}_k+p$ with resolution 1/S² of the original image.
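A minimal sketch of the LBP step, under assumptions: plain 8-neighbour LBP codes rather than the patent's uniform-pattern folding by 0/1 hop count, and an illustrative histogram bin count.

```python
import numpy as np

def lbp_map(gray):
    """8-neighbour local binary pattern code for each interior pixel:
    bit i is s(g_i - g_c) for the i-th neighbour. This is plain LBP;
    the patent additionally folds codes by their 0/1 hop count U(LBP)."""
    g = gray.astype(int)
    c = g[1:-1, 1:-1]                                 # centre pixels g_c
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]      # 8 neighbours g_i
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit          # s(g_i - g_c) * 2^i
    return code

def lbp_histogram(code, bins=16):
    """Statistical histogram of the window's LBP codes; the bin count
    stands in for the patent's division value."""
    hist, _ = np.histogram(code, bins=bins, range=(0, 256))
    return hist
```

On a constant image every neighbour satisfies g_i ≥ g_c, so all 8 bits are set and every code is 255.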
(3) The size of the smoke detection frame is set to W×H; it is mainly used to slide over the input image and mark the detected smoke regions, as shown in fig. 1. The length and width of the detection frame are scaled by 1/S and mapped onto the feature image computed above; the detection frame is then moved in the horizontal and vertical directions by fixed step lengths to traverse the complete feature image. At each step the following two types of features are calculated for the detection frame, and the recognition result of the current detection frame is obtained from them.
a. Tiled-cut statistical features: first, an Integral Image of the detection frame is calculated to reduce the amount of computation when summing pixel values; the feature image of the detection frame is then divided, tiled in different proportions in the horizontal and vertical directions, into several rectangular blocks, as shown in fig. 2. Each rectangular block is summed using the integral image, and the statistics among the blocks are computed: mean, variance, skewness, and kurtosis. Each cutting proportion represents a different cutting mode and yields different blocks, hence different statistics; in short, each cutting proportion produces a distinct group of statistics, and the statistics from all cutting modes are combined into the final tiled-cut statistical features. The calculation flow is shown in fig. 4.
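The integral-image trick used above for constant-time block sums can be sketched as follows (an illustrative implementation; the padded first row/column is a common convention, not mandated by the patent):

```python
import numpy as np

def integral_image(f):
    """Summed-area table with a zero-padded border:
    ii[y, x] = sum of f[:y, :x]."""
    ii = np.zeros((f.shape[0] + 1, f.shape[1] + 1))
    ii[1:, 1:] = f.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of f[y0:y1, x0:x1] via four table lookups, O(1) per block."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
```

A rectangular ring's sum follows as the difference of two such box sums (outer rectangle minus inner rectangle), which is why one table serves both the tiled and the surrounding cuts.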
b. Surround-cut statistical features: the calculation is essentially the same as for the tiled cut; the only difference is that the detection-frame image is cut, surround-fashion, into several rectangular rings, as shown in fig. 3, which are summed and their statistics computed. Finally the two types of statistical features are combined into the feature vector used for classifier training and recognition. The four statistics are calculated as follows:

$$\mu=\frac{1}{Q}\sum_{b=1}^{Q}S_b,\qquad \sigma^2=\frac{1}{Q}\sum_{b=1}^{Q}\left(S_b-\mu\right)^2$$

$$\gamma=\frac{1}{Q}\sum_{b=1}^{Q}\left(\frac{S_b-\mu}{\sigma}\right)^3,\qquad \kappa=\frac{1}{Q}\sum_{b=1}^{Q}\left(\frac{S_b-\mu}{\sigma}\right)^4$$

where $S_b=\sum f(x,y)$ is the sum of each rectangular patch and Q is the number of rectangular patches.
Smoke detection is a typical binary classification problem, so replacing a deep neural network model with an efficient binary classifier greatly improves the overall computational efficiency of the algorithm. Adaboost is an adaptive boosting algorithm that learns a series of weak classifiers from the training data and combines them into a strong classifier. The method uses a simple two-level decision tree as the base classifier of the Adaboost algorithm; the base classifier then adjusts the weights of the classification results and the distribution of the training data according to the training error rate until the error rate reaches the set target.
In the prediction process, the Adaboost classifier receives the feature vector obtained above as input and produces a classification result after a forward pass. If the result is positive, the prediction frame is displayed on the original image; otherwise the current computation is skipped and classification moves to the next prediction frame. In addition, to further raise the system's recognition rate and lower its false-alarm rate, the method cascades several Adaboost classifiers: the final output is positive only when every classifier predicts positive; as soon as one classifier predicts negative, the output is negative. This exploits the Adaboost classifier's high recall and relatively low recognition rate to improve the system's overall recognition accuracy.
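The AND-style decision over the cascade reduces to a short-circuit loop; the `classifiers` callables below stand in for trained Adaboost stages (a sketch, not the patent's code):

```python
def cascade_decide(classifiers, feature_vec):
    """Short-circuit AND over the cascaded stages: reject as soon as any
    stage predicts negative; accept only if every stage predicts positive."""
    for clf in classifiers:
        if clf(feature_vec) != 1:
            return 0          # one negative stage makes the output negative
    return 1                  # all stages positive
```

The early exit is what makes the cascade cheap in practice: most background windows are rejected by the first stage and never reach the later ones.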
Example (b):
step (1) preparing a data set; the present invention prepared a total of 35317 picture samples of size 50x50, of which 13201 smoke samples and 22116 non-smoke samples. The data set is divided into a training set and a test set according to the ratio of 8:2, wherein the training set is used for training an Adaboost classifier, and the test set is used for evaluating a current model in the training process.
Step (2): traverse the whole data set and write the sample paths and label values into a text file for use in subsequent model training.
Step (3): parse the text file from step (2), read the pictures and label values, and compute the feature vectors described in the technical scheme. Each sample corresponds to one feature vector; n vectors are obtained in total, and all of them are combined into a two-dimensional array of n rows and m columns, where n is the number of samples and m is the number of components of the one-dimensional feature vector. The computed two-dimensional sample feature array is held in memory and also written to disk for use in future training.
Step (4), training an Adaboost classifier. First, initialize the weight distribution over the sample features obtained in step (3):

$$D_1=(w_{11},w_{12},\dots,w_{1n}),\qquad w_{1i}=\frac{1}{n},\; i=1,2,\dots,n$$

Every sample starts with the same weight; during training the error rate reshapes the weights, lowering those of easily learned samples and raising those of hard ones, which improves learning efficiency. A base classifier $G_x$ is then trained on the sample feature set, $x=1,2,\dots,X$, where X is the number of base classifiers, and its error rate on the training data set is computed:

$$e_x=\sum_{i=1}^{n}w_{xi}\,I\bigl(G_x(x_i)\neq y_i\bigr)$$

together with the coefficient of $G_x$:

$$\alpha_x=\frac{1}{2}\ln\frac{1-e_x}{e_x}$$

The weight distribution of the sample feature data is then adjusted, updating the weights according to

$$w_{x+1,i}=\frac{w_{xi}}{Z_x}\exp\bigl(-\alpha_x\,y_i\,G_x(x_i)\bigr)$$

where $Z_x=\sum_{i=1}^{n}w_{xi}\exp(-\alpha_x\,y_i\,G_x(x_i))$ is a normalization factor. Finally, the Adaboost classifier expression is constructed:

$$G(x)=\operatorname{sign}\Bigl(\sum_{x=1}^{X}\alpha_x\,G_x(x)\Bigr)$$
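The weight-update loop above can be sketched from scratch. As a simplification, single-feature threshold stumps replace the patent's two-level decision trees; labels are assumed to be in {-1, +1} and the brute-force threshold search is purely illustrative.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=20):
    """Minimal Adaboost following the formulas above: uniform initial
    weights D1, error rate e, coefficient a = 1/2 ln((1-e)/e), and the
    exponential weight update normalized by Z."""
    n = len(y)
    w = np.full(n, 1.0 / n)                    # D1 = (1/n, ..., 1/n)
    learners = []
    for _ in range(n_rounds):
        best = None
        for j in range(X.shape[1]):            # lowest weighted-error stump
            for t in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(X[:, j] > t, sign, -sign)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, sign)
        err, j, t, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)  # a_x = 1/2 ln((1-e_x)/e_x)
        pred = np.where(X[:, j] > t, sign, -sign)
        w = w * np.exp(-alpha * y * pred)      # w <- w exp(-a y G(x)) / Z
        w /= w.sum()
        learners.append((alpha, j, t, sign))
    return learners

def adaboost_predict(learners, X):
    """G(x) = sign(sum_x a_x G_x(x))."""
    score = sum(a * np.where(X[:, j] > t, s, -s) for a, j, t, s in learners)
    return np.sign(score)
```

On separable one-dimensional data the first stump already has zero training error, and the combined classifier reproduces the labels.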
Step (5): set three target indexes for the whole algorithm — the false-alarm rate, recognition rate, and recall rate; set target indexes of accuracy and false-alarm rate for a single Adaboost classifier, train it by the method of step (4), and finish its training when its indexes reach the target values; then compute the current overall false-alarm rate, recognition rate, and recall rate and compare them with the targets. If they are better than the targets, the whole training process ends; otherwise a new Adaboost classifier is added to the system and training continues.
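The stage-by-stage training loop can be sketched with scikit-learn's Adaboost (an assumed dependency; the target thresholds, stage count, and the choice to keep all positives plus surviving negatives for the next stage are illustrative):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def cascade_predict(stages, X):
    """Logical AND over stage decisions: positive only if every stage agrees."""
    pos = np.ones(len(X), dtype=bool)
    for s in stages:
        pos &= (s.predict(X) == 1)
    return pos

def train_cascade(X, y, target_fa=0.02, target_recall=0.90, max_stages=5):
    """Append Adaboost stages (two-level decision-tree base classifiers,
    as in the patent) until the overall false-alarm rate and recall
    meet the targets."""
    stages = []
    for _ in range(max_stages):
        survivors = cascade_predict(stages, X)
        keep = survivors | (y == 1)            # next stage sees all positives
        clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=2),
                                 n_estimators=50)
        clf.fit(X[keep], y[keep])
        stages.append(clf)
        pred = cascade_predict(stages, X)
        fa = (pred & (y == 0)).sum() / max((y == 0).sum(), 1)
        recall = (pred & (y == 1)).sum() / max((y == 1).sum(), 1)
        if (fa <= target_fa and recall >= target_recall) or fa == 0.0:
            break                              # targets met, or no negatives left
    return stages
```

In a real deployment the targets would be evaluated on a held-out test split rather than the training data, per the 8:2 division described above.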
Step (6): read a video into memory, extract a frame, convert it to a gray-scale image, set the prediction-frame size to 50×50, and compute the feature image by the method set out in the technical scheme. To make full use of computer hardware resources, the feature image is split by rows into several parts distributed across multiple threads, each executing the same tiled and surrounding statistical feature calculation. The model trained in step (5) is loaded, and the feature vector of each prediction frame is classified once to obtain a result; finally the prediction results are displayed on the video frame. Experimental tests show the algorithm reaches a recognition speed of 31.3 ms per frame on a general-purpose CPU @ 1.8 GHz, with no need to deploy an expensive GPU server as deep neural networks require, giving it good prospects for practical application.
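The row-wise thread partitioning can be sketched as follows; `classify` stands in for the trained cascade and `box` for the mapped W×H detection frame, both placeholders for this sketch:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def scan_feature_map(fmap, classify, box=(13, 13), n_workers=4):
    """Partition the feature map by rows and score every detection-window
    position in parallel threads, returning the (row, col) hits."""
    bh, bw = box

    def scan_row(i):                    # one task per row of window positions
        hits = []
        for j in range(fmap.shape[1] - bw + 1):
            if classify(fmap[i:i + bh, j:j + bw]):
                hits.append((i, j))
        return hits

    rows = range(fmap.shape[0] - bh + 1)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return [h for row_hits in pool.map(scan_row, rows) for h in row_hits]
```

Note that under CPython's GIL the speedup comes mainly from NumPy releasing the lock during array work; a process pool (or a native implementation, as on the ARM target) is the usual alternative for pure-Python workloads.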
The above embodiments are preferred embodiments of the present invention, but the invention is not limited to them; any other change, modification, substitution, combination, or simplification that does not depart from the spirit and principle of the invention is an equivalent replacement and is included within its scope.
Claims (5)
1. An efficient real-time video smoke flame detection method is characterized by comprising the following steps:
(1) collecting a plurality of images containing smoke and images without smoke from a video to form a data set;
(2) using a square window of side length W that slides over the images of the data set in steps of S pixels, computing local binary pattern texture features and gradient histogram edge features and generating a feature image whose resolution is 1/S² of the original image; setting the size of the smoke detection frame to W×H and mapping the smoke detection frame onto the generated feature image; the smoke detection frame slides over the image and marks the detected smoke regions; the length and width of the smoke detection frame are scaled by 1/S and mapped onto the generated feature image, and the smoke detection frame is then moved in the horizontal and vertical directions by fixed step lengths to traverse the complete feature image;
(3) cutting the feature image contained in the mapped smoke detection frame; the specific cutting mode is as follows:
a. tiled cutting: cutting the feature image into N rectangular blocks with different aspect ratios;
b. surrounding cutting: cutting the feature image into M rectangular rings at different intervals;
calculating, at each step of the smoke detection frame, the tiled-cut and surround-cut statistical feature quantities of the feature image, and obtaining the recognition result of the smoke detection frame from these statistical feature quantities; for both the tiled-cut and the surround-cut statistical feature quantities, an integral image of the smoke detection frame is first calculated to reduce the amount of computation when summing pixel values; the feature image contained in the smoke detection frame is then cut, in different proportions in the horizontal and vertical directions, into several tiled rectangular blocks and several surrounding rectangular rings; each cut rectangular block and rectangular ring is summed using the integral image, the statistical feature quantities of mean, variance, skewness, and kurtosis are then computed over all the rectangular blocks and rectangular rings, and finally all the statistical feature quantities are connected into a one-dimensional feature vector;
(4) cascading a plurality of Adaboost classifiers and inputting the feature vectors obtained in step (3) into them for training; the image corresponding to an input feature vector is finally judged to contain smoke or flame only when every classifier in the cascade identifies it as smoke or flame;
(5) processing each image of the data set to be identified with steps (2) and (3) to obtain a one-dimensional feature vector, inputting it into the cascaded Adaboost classifiers, and judging whether the video contains smoke or flame.
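The per-window statistics of step (3) can be sketched as follows. This is a minimal illustration using NumPy, not the patented implementation; the block layout and moment definitions are assumptions based on the claim text.

```python
import numpy as np

def integral_image(img):
    # Cumulative sums along both axes, padded with a zero row/column
    # so that any block sum reduces to four table lookups.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def block_sum(ii, y0, x0, y1, x1):
    # Sum of img[y0:y1, x0:x1] via the integral image.
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

def four_moments(values):
    # Mean, variance, skewness, kurtosis of one rectangular block or ring.
    v = np.asarray(values, dtype=float)
    m, s = v.mean(), v.std()
    if s == 0:
        return [m, 0.0, 0.0, 0.0]
    z = (v - m) / s
    return [m, s ** 2, (z ** 3).mean(), (z ** 4).mean()]
```

Concatenating the `four_moments` outputs of every block and ring yields the one-dimensional feature vector described in the claim.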
2. The method for efficient real-time video smoke flame detection as claimed in claim 1, wherein the gradient histogram edge feature calculation formulas are as follows:

G(x, y) = \sqrt{g_x^2 + g_y^2}
\theta(x, y) = \arctan(g_y / g_x)
H_i(k) = \sum_{(x,y) \in W_i} G(x, y) \cdot [\,\theta(x, y) \in bin_k\,]

where G(x, y) is the amplitude of the gradient image; \theta(x, y) is the phase angle of the gradient image; g_x is the differential image in the x direction and g_y the differential image in the y direction; W_i is the window image and bin_k is the index value of the gradient histogram.
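A per-window gradient-histogram computation consistent with these definitions can be sketched as below (NumPy, illustrative bin count; not the patented code — the finite-difference scheme and unsigned-angle binning are assumptions).

```python
import numpy as np

def gradient_histogram(window, n_bins=8):
    w = np.asarray(window, dtype=float)
    gx = np.zeros_like(w)
    gy = np.zeros_like(w)
    gx[:, 1:-1] = w[:, 2:] - w[:, :-2]   # g_x: differential image in x
    gy[1:-1, :] = w[2:, :] - w[:-2, :]   # g_y: differential image in y
    mag = np.hypot(gx, gy)               # gradient amplitude
    ang = np.arctan2(gy, gx) % np.pi     # unsigned phase angle in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for k in range(n_bins):
        hist[k] = mag[bins == k].sum()   # magnitude-weighted vote per angle bin
    return hist
```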
3. An efficient real-time video smoke flame detection method as claimed in claim 2, wherein the local binary pattern texture feature calculation formulas are as follows:

LBP = \sum_{i=0}^{p-1} s(g_i - g_c) \cdot 2^i, \quad s(x) = 1 \text{ if } x \ge 0, \text{ else } 0
U(LBP) = |s(g_{p-1} - g_c) - s(g_0 - g_c)| + \sum_{i=1}^{p-1} |s(g_i - g_c) - s(g_{i-1} - g_c)|

where U(LBP) represents the number of 0/1 hops in the LBP value; g_i is the i-th adjacent gray pixel value, g_c is the central gray pixel value, and p is the number of adjacent pixels used when calculating the LBP;
After the LBP values are obtained, a statistical histogram is calculated with the following formula:

H_i(l) = \sum_{(x,y) \in W_i} [\,LBP(x, y) \in bin_l\,]

where W_i is the window image, LBP(x, y) is the LBP value at (x, y), and bin_l is the division value of the statistical histogram. A feature image of dimension bin_k + bin_l, with a resolution 1/S² of the original image, is finally obtained.
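A minimal sketch of the LBP value and its 0/1-hop count U for a 3×3 neighbourhood (p = 8), matching the definitions above; the clockwise neighbour ordering is an assumption, not taken from the patent.

```python
import numpy as np

def lbp_value(patch):
    # LBP of the centre pixel of a 3x3 patch: threshold each of the
    # p = 8 neighbours g_i against the centre g_c and read the bits.
    gc = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(1 << i for i, g in enumerate(neighbours) if g >= gc)

def hops(lbp, p=8):
    # U value: number of 0/1 transitions in the circular bit pattern.
    bits = [(lbp >> i) & 1 for i in range(p)]
    return sum(bits[i] != bits[(i + 1) % p] for i in range(p))
```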
4. The method for efficient real-time video smoke flame detection as claimed in claim 1, wherein the specific process of training the cascaded Adaboost classifiers in step (4) is as follows: setting three overall expected indexes, namely the false alarm rate, the recognition rate and the recall rate; setting expected indexes of accuracy and false alarm rate for a single Adaboost classifier, training that classifier, and finishing its training once its indexes reach the corresponding expected values; then calculating the current overall false alarm rate, recognition rate and recall rate and comparing them with the corresponding expected indexes: if they are better than the expected indexes, the whole training process ends; otherwise a new Adaboost classifier is added and training continues.
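The stage-adding loop of claim 4 can be sketched as follows. This is an illustration, not the patented code: `train_stage` is a hypothetical callable standing in for training one Adaboost stage, and the overall rates are modelled as products over the stages, which is the standard cascade approximation rather than anything stated in the patent.

```python
def train_cascade(train_stage, target_far, target_recall, max_stages=20):
    # Keep adding Adaboost stages; each trained stage reports its own
    # false-alarm rate and recall, and the cascade's overall rates are
    # (approximately) the products of the per-stage rates.
    stages, overall_far, overall_recall = [], 1.0, 1.0
    for _ in range(max_stages):
        stage = train_stage()            # trains one stage (stubbed by caller)
        stages.append(stage)
        overall_far *= stage["far"]
        overall_recall *= stage["recall"]
        if overall_far <= target_far and overall_recall >= target_recall:
            break                        # overall targets met: stop adding stages
    return stages, overall_far, overall_recall
```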
5. The method of claim 1, wherein if the classification result of one of the cascaded Adaboost classifiers is positive, the prediction frame is passed to the next classifier; otherwise the current calculation is skipped and the next prediction frame is classified. Only when all classifiers predict positive is the final output positive and the prediction frame displayed on the original image; once any classifier predicts negative, the output is negative.
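The all-must-agree decision rule of claim 5 reduces to an early-exit loop; a minimal sketch, where each classifier is modelled as a callable returning a boolean:

```python
def cascade_predict(classifiers, feature_vector):
    # A prediction frame is positive only if every stage agrees;
    # the first negative stage rejects it and skips the remaining stages.
    for clf in classifiers:
        if not clf(feature_vector):
            return False
    return True
```

The early exit is what makes the cascade cheap: most non-smoke frames are rejected by the first one or two stages without evaluating the rest.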
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011478888.1A CN112447020B (en) | 2020-12-15 | 2020-12-15 | Efficient real-time video smoke flame detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112447020A CN112447020A (en) | 2021-03-05 |
CN112447020B true CN112447020B (en) | 2022-08-23 |
Family
ID=74739362
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011478888.1A Active CN112447020B (en) | 2020-12-15 | 2020-12-15 | Efficient real-time video smoke flame detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112447020B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102136059A (en) * | 2011-03-03 | 2011-07-27 | 苏州市慧视通讯科技有限公司 | Video- analysis-base smoke detecting method |
CN103150549A (en) * | 2013-02-05 | 2013-06-12 | 长安大学 | Highway tunnel fire detecting method based on smog early-stage motion features |
CN104616034A (en) * | 2015-02-15 | 2015-05-13 | 北京化工大学 | Smoke detection method |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101833838B (en) * | 2010-05-27 | 2012-06-06 | 王巍 | Large-range fire disaster analyzing and early warning system |
US20140169663A1 (en) * | 2012-12-19 | 2014-06-19 | Futurewei Technologies, Inc. | System and Method for Video Detection and Tracking |
CN103761529B (en) * | 2013-12-31 | 2017-06-13 | 北京大学 | A kind of naked light detection method and system based on multicolour model and rectangular characteristic |
2020
- 2020-12-15 CN CN202011478888.1A patent/CN112447020B/en active Active
Non-Patent Citations (2)
Title |
---|
Preprocessing and Target Segmentation of Infrared Images; Jing Yiping et al.; Journal of National University of Defense Technology; 1991-06-30; Vol. 13, No. 2, pp. 87-89 *
Image Smoke Detection Using Pyramid Texture and Edge Features; Li Hongdi et al.; Journal of Image and Graphics; 2015-06-30; Vol. 20, No. 6, pp. 773-779 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109147254B (en) | Video field fire smoke real-time detection method based on convolutional neural network | |
CN113705478B (en) | Mangrove single wood target detection method based on improved YOLOv5 | |
Krstinić et al. | Histogram-based smoke segmentation in forest fire detection system | |
CN109740676B (en) | Object detection and migration method based on similar targets | |
CN112836713A (en) | Image anchor-frame-free detection-based mesoscale convection system identification and tracking method | |
CN108985192A (en) | A kind of video smoke recognition methods based on multitask depth convolutional neural networks | |
CN108537215A (en) | A kind of flame detecting method based on image object detection | |
CN108960047B (en) | Face duplication removing method in video monitoring based on depth secondary tree | |
CN101770644A (en) | Forest-fire remote video monitoring firework identification method | |
CN112861635A (en) | Fire and smoke real-time detection method based on deep learning | |
CN114399719B (en) | Transformer substation fire video monitoring method | |
CN112132005A (en) | Face detection method based on cluster analysis and model compression | |
CN112633174B (en) | Improved YOLOv4 high-dome-based fire detection method and storage medium | |
Bloshchinskiy et al. | Snow and cloud detection using a convolutional neural network and low-resolution data from the Electro-L No. 2 Satellite | |
CN113537226A (en) | Smoke detection method based on deep learning | |
CN114821102A (en) | Intensive citrus quantity detection method, equipment, storage medium and device | |
CN112149665A (en) | High-performance multi-scale target detection method based on deep learning | |
CN111640087B (en) | SAR depth full convolution neural network-based image change detection method | |
CN112149664A (en) | Target detection method for optimizing classification and positioning tasks | |
CN104463909A (en) | Visual target tracking method based on credibility combination map model | |
CN115311601A (en) | Fire detection analysis method based on video analysis technology | |
CN106815567B (en) | Flame detection method and device based on video | |
Wen et al. | Multi-scene citrus detection based on multi-task deep learning network | |
CN110826485A (en) | Target detection method and system for remote sensing image | |
Asrol et al. | Real-Time Oil Palm Fruit Grading System Using Smartphone and Modified YOLOv4 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||