CN116189037A - Flame detection identification method and device and terminal equipment


Info

Publication number
CN116189037A
CN116189037A
Authority
CN
China
Prior art keywords
flame
image
detection
information
area image
Legal status
Pending
Application number
CN202211664661.5A
Other languages
Chinese (zh)
Inventor
李炜
吴锦松
钱鼎智
李勇猷
Current Assignee
Shenzhen Taiji Shuzhi Technology Co ltd
Original Assignee
Shenzhen Taiji Shuzhi Technology Co ltd
Application filed by Shenzhen Taiji Shuzhi Technology Co ltd
Priority to CN202211664661.5A
Publication of CN116189037A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
    • Y02A40/28 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture specially adapted for farming

Abstract

The application provides a flame detection identification method, a flame detection identification device and terminal equipment, applicable to the technical field of data processing. The method comprises the following steps: based on a training image, obtaining a first flame movement region image through background difference processing; based on the training image, obtaining a second flame movement region image through inter-frame difference processing; determining a detection area image based on the first flame movement area image and the second flame movement area image; based on a flame characteristic library, performing corner detection on the detection area image to obtain first flame information; inputting the detection area image into a back propagation neural network model to obtain second flame information; and performing flame detection based on the first flame information and the second flame information to obtain a fire result. The method can detect fires in real time, accurately identify hidden fire safety hazards, and has strong practicability and usability.

Description

Flame detection identification method and device and terminal equipment
Technical Field
The application belongs to the technical field of data processing, and particularly relates to a flame detection identification method, a flame detection identification device and terminal equipment.
Background
Existing fire detection is mostly implemented through manual inspection and video monitoring, and in most cases flames must be identified manually to confirm that a fire has occurred. Such approaches consume considerable labor, are inefficient, are prone to human error, and cannot detect a fire accurately and in a timely manner.
Disclosure of Invention
In view of this, the embodiments of the present application provide a flame detection identification method, apparatus, and terminal device, which can solve the problem that conventional fire detection methods cannot accurately identify hidden fire safety hazards in real time.
A first aspect of an embodiment of the present application provides a method for identifying flame detection, including:
based on the training image, obtaining a first flame movement region image through background difference processing;
based on the training image, obtaining a second flame movement region image through inter-frame difference processing;
determining a detection area image based on the first flame movement area image and the second flame movement area image;
based on a flame characteristic library, performing corner detection on the detection area image to obtain first flame information;
inputting the detection area image into a back propagation neural network model to obtain second flame information;
and performing flame detection based on the first flame information and the second flame information to obtain a fire result.
In a possible implementation manner of the first aspect, before the obtaining, by background difference processing, the first flame-moving region image based on the training image, the method includes:
acquiring an image set to be detected, and dividing the image set to be detected into a training image set and a test image set according to a preset proportion;
wherein the set of images to be detected comprises at least 5000 original images.
In a possible implementation manner of the first aspect, before the obtaining, by background difference processing, the first flame-moving region image based on the training image, the method includes:
and preprocessing the original images in the training image set to obtain the training image.
In a possible implementation manner of the first aspect, before the obtaining, by background difference processing, the first flame-moving region image based on the training image, the method includes:
and carrying out illumination equalization processing on the original image to obtain the training image.
In a possible implementation manner of the first aspect, before the obtaining, by background difference processing, the first flame-moving region image based on the training image, the method includes:
carrying out illumination equalization processing on the original image to obtain a first training image;
and preprocessing the first training image to obtain the training image.
In a possible implementation manner of the first aspect, the determining a detection area image based on the first flame moving area image and the second flame moving area image includes:
combining the first flame motion area image and the second flame motion area image to obtain a flame motion area image;
and determining an overlapping area image of the first flame movement area image and the second flame movement area image as the detection area image based on the flame movement area image.
In a possible implementation manner of the first aspect, the performing flame detection based on the first flame information and the second flame information to obtain a fire result includes:
processing the first flame information according to a first preset weight to obtain a first identification result;
processing the second flame information according to a second preset weight to obtain a second identification result;
determining a third recognition result based on the first recognition result and the second recognition result;
If the third identification result is larger than a preset threshold value, determining that the fire result is fire;
wherein the sum of the first preset weight and the second preset weight is 1.
A second aspect of embodiments of the present application provides an identification device for flame detection, including:
the first processing module is used for obtaining a first flame movement area through background difference processing based on the training image;
the second processing module is used for obtaining a second flame movement area through interframe difference processing based on the training image;
a determining module configured to determine a detection area image based on the first flame movement area image and the second flame movement area image;
the third processing module is used for carrying out corner detection on the detection area image based on the flame characteristic library to obtain first flame information;
the fourth processing module is used for inputting the detection area image into a back propagation neural network model to obtain second flame information;
and the detection module is used for detecting the flame based on the first flame information and the second flame information to obtain a fire result.
A third aspect of the embodiments of the present application provides a terminal device, the terminal device comprising a memory, a processor, the memory having stored thereon a computer program executable on the processor, the processor executing the computer program to implement the steps of the flame detection identification method as described in any of the first aspects above.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of identifying flame detection as set forth in any of the first aspects above.
A fifth aspect of embodiments of the present application provides a computer program product, which when run on a terminal device, causes the terminal device to perform the method of identifying flame detection as described in any of the first aspects above.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: based on the training image, a first flame movement region image is obtained through background difference processing; based on the training image, a second flame movement region image is obtained through inter-frame difference processing; and a detection area image is determined based on the first flame movement area image and the second flame movement area image, thereby further ensuring the accuracy of the detection area. Based on the flame characteristic library, corner detection is performed on the detection area image to obtain first flame information; the detection area image is input into a back propagation neural network model to obtain second flame information; and flame detection is performed based on the first flame information and the second flame information to obtain a fire result. The first flame information and the second flame information are used to automatically detect and identify flames, so that the identification effect of flame detection is improved and a fire result is obtained in time; the method has strong usability and practicability.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of an implementation of a flame detection identification method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of an implementation of a flame detection identification method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of an implementation of a flame detection identification method according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of an implementation of a flame detection identification method according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of an implementation of a flame detection identification method according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a flame detection identification device according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a terminal device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
First, some terms in the embodiments of the present application are explained for easy understanding by those skilled in the art.
In order to illustrate the technical solutions described in the present application, the following description is made by specific examples.
Fig. 1 to Fig. 5 show flowcharts of implementations of a flame detection identification method according to embodiments of the present application, which are described in detail below. A method of identifying flame detection, comprising:
step S101, obtaining a first flame moving region image through background difference processing based on the training image.
In one embodiment, the step S101 includes: performing differential operation on the training image and the background image to obtain a first area image; and thresholding the first area image to obtain the first flame movement area image. The first area image of the flame can be determined by performing differential operation on the training image and the background image; further, by thresholding, a first flame movement region image is determined, and an image region of the flame is determined.
In one embodiment, the background image is acquired prior to the step S101. Specifically, the background image is determined based on the training image and a first preset threshold. The first preset threshold is a preset foreground gray level threshold.
Illustratively, prior to step S101, a video is acquired, where the video comprises a number of video frames. Before step S101, one of the video frames is selected as the training image; several of the first video frames, or several of the last video frames, are selected; and an average image of the selected video frames is computed and used as a first reference image.
The training image may be marked It, the first reference image may be marked Bt, and the position of one pixel point selected from the training image and the first reference image is marked (x, y). For the position of the pixel point, a differential operation is performed between the training image and the first reference image to obtain a first operation result. Specifically, the pixel point corresponding to the training image is labeled It(x, y); the pixel point corresponding to the first reference image is labeled Bt(x, y); then, for the same pixel position, the pixel point corresponding to the first reference image is subtracted from the pixel point corresponding to the training image and the absolute value is taken, so that the first operation result is |It(x, y) - Bt(x, y)|.
Further, based on the first operation result and the preset foreground gray level threshold, the first reference image is determined to be the background image. Specifically, based on the first operation result and the preset foreground gray level threshold, the pixel points corresponding to the first reference image are determined to be pixel points of the background image, and further, the first reference image is determined to be the background image. For example, for the same pixel positions, if the first operation result is greater than the preset foreground gray level threshold, the pixel point corresponding to the first reference image is determined to be a pixel point of the background image. Specifically, for the same pixel positions, the preset foreground gray level threshold is marked as T1; at this time, if the first operation result |It(x, y) - Bt(x, y)| > T1, the pixel point Bt(x, y) corresponding to the first reference image is determined to be a pixel point of the background image, and the first reference image is then determined to be the background image.
In the above embodiment, the background image corresponding to the training image can be determined through the training image and the preset foreground gray threshold, so that updating of the background images of different training images is realized, and it is ensured that the first flame moving region image is obtained after the background difference processing is performed on the training image through the background image. Step S101 can process the real-time training image and the relatively stable background image to obtain a first flame movement region with a relatively large region, which is simple and easy to operate.
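Illustratively, the background difference processing of step S101 can be sketched with OpenCV and NumPy roughly as follows; the function name background_difference and the frame-averaging shown in the trailing comments are assumptions made for illustration and are not part of the application.

```python
import cv2
import numpy as np

def background_difference(training_image, reference_image, t1):
    """Sketch of step S101: pixels with |It(x, y) - Bt(x, y)| > T1 form the moving region."""
    it = cv2.cvtColor(training_image, cv2.COLOR_BGR2GRAY).astype(np.int16)
    bt = cv2.cvtColor(reference_image, cv2.COLOR_BGR2GRAY).astype(np.int16)
    first_area_image = np.abs(it - bt).astype(np.uint8)          # first area image
    # thresholding yields the first flame moving region image as a binary mask
    _, first_flame_region = cv2.threshold(first_area_image, t1, 255, cv2.THRESH_BINARY)
    return first_flame_region

# The first reference image Bt may, for example, be the average of the first few frames:
# reference = np.mean(np.stack(frames[:10]), axis=0).astype(np.uint8)
# mask = background_difference(frames[20], reference, t1=30)
```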
Prior to the step S101, the method comprises:
acquiring an image set to be detected, and dividing the image set to be detected into a training image set and a test image set according to a preset proportion; wherein the set of images to be detected comprises at least 5000 original images.
In one embodiment, a first image set is acquired by an image acquisition device; a second image set is collected from the network; and the first image set and the second image set are used as the image set to be detected. Illustratively, the image acquisition device may be a camera, and the second image set may consist of network pictures. Specifically, images of different locations and different objects in a park that need to be detected and identified for fire are collected through cameras and network pictures at a ratio of 1:1, so as to form the image set to be detected. In the above embodiment, the image acquisition device may be installed at preset positions in the venue, so that fire detection and identification can be performed for different preset positions and different objects in the venue.
In one embodiment, the preset ratio may be 4:1. Specifically, the image set to be detected is divided at a ratio of 4:1 into the training image set and the test image set, so that the training image set can later be used for model training and parameter adjustment, and the test image set can later be used for performance testing.
In one embodiment, the set of images to be detected comprises at least 5000 raw images. Specifically, at least 5000 original images can be detected for different fires, so that the diversity of training data sets and the generalization capability of model training results are ensured.
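A minimal sketch of the 4:1 split described above, assuming the images to be detected have already been gathered into one list; the helper name and shuffling seed are illustrative only.

```python
import random

def split_dataset(image_paths, train_ratio=0.8, seed=42):
    """Divide the image set to be detected into training and test sets at a preset 4:1 ratio."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_ratio)
    return paths[:cut], paths[cut:]

# train_set, test_set = split_dataset(all_image_paths)  # e.g. at least 5000 original images
```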
Referring to fig. 2, before the step S101, the method includes:
step S1001, preprocessing the original image in the training image set to obtain the training image.
In one embodiment, the preprocessing the original image in the training image set to obtain the training image includes: and carrying out gray level transformation processing on the original images in the training image set to obtain the training image. The training image is a gray image, so that the training image is a monochromatic image with pixel brightness continuously changed from black to white, and the pixels only contain brightness information and do not contain color information such as hue, saturation and the like, thereby reducing the processing capacity and storage capacity of data information, reducing time cost and improving identification accuracy.
In one embodiment, the preprocessing the original image in the training image set to obtain the training image includes: and carrying out random horizontal overturning treatment on the original images in the training image set to obtain the training image.
In one embodiment, the preprocessing the original image in the training image set to obtain the training image includes: and carrying out random clipping processing on the original images in the training image set to obtain the training image.
In one embodiment, the preprocessing the original image in the training image set to obtain the training image includes: and carrying out random angle rotation processing on the original images in the training image set to obtain the training image.
In one embodiment, the preprocessing the original image in the training image set to obtain the training image includes: and carrying out contrast change processing on the original images in the training image set to obtain the training image.
In one embodiment, the preprocessing the original image in the training image set to obtain the training image includes: and carrying out saturation change processing on the original images in the training image set to obtain the training image.
Through the steps, the original images in the training image set are preprocessed to obtain the training images, so that the image data are expanded, the models obtained through subsequent training have high robustness, and the accuracy of detection results is further ensured.
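The preprocessing variants enumerated above can be sketched as one illustrative augmentation pipeline; the concrete parameter ranges (flip probability, crop margin, rotation angle, contrast and saturation factors) are assumptions rather than values fixed by the application.

```python
import cv2
import numpy as np
import random

def preprocess(original_bgr):
    """Illustrative preprocessing of one original image from the training image set."""
    img = original_bgr.copy()
    if random.random() < 0.5:                            # random horizontal flip
        img = cv2.flip(img, 1)
    h, w = img.shape[:2]
    dy, dx = random.randint(0, h // 10), random.randint(0, w // 10)
    img = img[dy:h - dy, dx:w - dx]                      # random crop (margin assumed)
    h, w = img.shape[:2]
    angle = random.uniform(-15, 15)                      # random angle rotation (range assumed)
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    img = cv2.warpAffine(img, m, (w, h))
    img = cv2.convertScaleAbs(img, alpha=random.uniform(0.8, 1.2), beta=0)  # contrast change
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] *= random.uniform(0.8, 1.2)              # saturation change
    img = cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)         # gray-level transformation
```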
Referring to fig. 3, before the step S101, the method includes:
step S1002, performing illumination equalization processing on the original image, to obtain the training image.
In one embodiment, the training image is obtained by performing illumination equalization processing on the original image using an improved color-equalization retinal cortex theory (Retinex) algorithm.
Specifically, in step S1011, the three-color channel data matrices of the original image are acquired, namely a red (R) channel matrix A_R, a green (G) channel matrix A_G and a blue (B) channel matrix A_B, where A_t(x, y) (t = R, G, B) is the pixel value at position (x, y) of the t-channel matrix, the number of horizontal pixel points of the original image is m, and the number of vertical pixel points of the original image is n. Specifically, the three-color channel data matrices of the original image can be acquired through the computer vision processing open source software library (OpenCV).
Step S1012, the channel pixel-level frequency of the original image is obtained as P_t(k) = n_k / N, and the pixel-level cumulative distribution is determined based on the channel pixel-level frequency as C_t(k) = P_t(1) + P_t(2) + ... + P_t(k), where k = 1, 2, ..., r; r = 256; t = R, G, B; n_k is the number of pixel points of the k-th pixel level of the t-channel matrix; N is the total number of pixel points of the t-channel matrix; and r is the number of pixel levels, namely 256.
Step S1013, an equalization function T(A_t(x, y)) = int[(r - 1) · C_t(k)] is determined based on the pixel-level cumulative distribution, where k is the pixel level of A_t(x, y) and int(x) denotes rounding x according to a rounding rule.
Step S1014, a loss Gaussian signal conversion matrix corresponding to the original image is determined based on the equalization function as LG_t(i, j) = (Gauss * A_t)(i, j), where Gauss(i, j) is a two-dimensional Gaussian function, "*" denotes two-dimensional convolution, and LG_t(i, j) is the value at position (i, j) of the loss Gaussian signal conversion matrix corresponding to the t-channel matrix.
Step S1015, the equalized three-color channel data matrices are determined based on the equalization function and the loss Gaussian signal conversion matrices corresponding to the original image, where A'_t(i, j) = μ · T(A_t(i, j)) + (1 - μ) · LG_t(i, j); A'_t(i, j) is the pixel value at matrix position (i, j) of the equalized t-channel data matrix; T(A_t(i, j)) is the equalization function value at matrix position (i, j) of the t-channel data matrix; and μ is a hyperparameter satisfying μ ∈ [0, 1]. Illustratively, μ is 0.5.
Step S1016, performing matrix combination processing on the equalized three-color channel data matrix to obtain a training image. For example, the equalized three-color channel data matrix may be subjected to matrix merging processing through OpenCV to obtain a training image.
Through the steps, in a specific application scene, imaging influence of illumination intensity on an original image in different time is reduced, illumination uniformity of an obtained training image is guaranteed, quality of the training image is improved, and further follow-up recognition effect on flames in the training image is guaranteed.
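Under the assumption that the equalization function corresponds to standard per-channel histogram equalization and the loss Gaussian signal matrix to a Gaussian-smoothed channel, steps S1011 to S1016 can be sketched as follows; the Gaussian sigma and the default μ = 0.5 are illustrative choices.

```python
import cv2
import numpy as np

def illumination_equalize(original_bgr, mu=0.5, sigma=15):
    """Sketch of steps S1011-S1016: A'_t = mu * T(A_t) + (1 - mu) * LG_t for each channel."""
    channels = cv2.split(original_bgr)                   # three-color channel data matrices
    equalized = []
    for a_t in channels:
        t_a = cv2.equalizeHist(a_t)                      # equalization function T(A_t)
        lg_t = cv2.GaussianBlur(a_t, (0, 0), sigma)      # loss Gaussian signal matrix LG_t
        a_prime = mu * t_a.astype(np.float32) + (1 - mu) * lg_t.astype(np.float32)
        equalized.append(np.clip(a_prime, 0, 255).astype(np.uint8))
    return cv2.merge(equalized)                          # step S1016: matrix merging
```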
Referring to fig. 4, before the step S101, the method includes:
step S1003, carrying out illumination equalization processing on the original image to obtain a first training image;
step S1004, preprocessing the first training image to obtain the training image.
In one embodiment, step S1003 may use the method of the foregoing embodiment of step S1002 to perform illumination equalization processing on the original image to obtain a first training image, and step S1004 includes preprocessing the first training image using the method of the foregoing embodiment of step S1001 to obtain the training image. In this way, the quality of the original images is kept uniform across different environments and illumination intensities through illumination equalization processing, and preprocessing the first training images further ensures that their sizes are uniform, which facilitates unified processing.
Step S102, obtaining a second flame movement region image through inter-frame difference processing based on the training image.
In one embodiment, the step S102 includes: performing differential operation on the training image and the inter-frame image to obtain a second area image; and thresholding the second area image to obtain the second flame movement area image. The step can determine a second area image of the flame by performing differential operation on the training image and the inter-frame image; further, by thresholding, a second flame movement region image is determined, and an image region of the flame is determined.
In one embodiment, the inter-frame image is acquired prior to the step S102.
Illustratively, prior to step S102, a video is acquired, where the video comprises a number of video frames. One of the video frames is selected as the training image. An adjacent frame of that video frame is selected as the inter-frame image: either the frame immediately before the video frame or the frame immediately after it.
In the step S102, a difference operation is performed between the training image and the inter-frame image, so as to obtain a second area image. Specifically, performing differential operation on the training image and the inter-frame image to obtain a second operation result; and determining the second region image based on the second operation result and a second preset threshold value. The second preset threshold value is a differential image binarization threshold value.
Wherein, the k-th frame in the video (k is a positive integer) is selected as the inter-frame image and the (k+1)-th frame is selected as the training image; the inter-frame image may be marked f_k, the training image may be marked f_k+1, and the position of one pixel point selected from the inter-frame image and the training image is marked (x, y). For the position of the pixel point, a differential operation is performed between the training image and the inter-frame image to obtain a second operation result. Specifically, the pixel point corresponding to the inter-frame image is labeled f_k(x, y); the pixel point corresponding to the training image is labeled f_k+1(x, y); further, for the same pixel position, the pixel point corresponding to the inter-frame image is subtracted from the pixel point corresponding to the training image and the absolute value is taken, so that the second operation result is |f_k+1(x, y) - f_k(x, y)|.
Further, the second region image is determined based on the second operation result and a differential image binarization threshold. Illustratively, a differential image is determined based on the second operation result and the differential image binarization threshold; the second region image is determined based on the difference image, the training image, and the inter-frame image.
Specifically, the differential image may be labeled D, and the differential image binarization threshold may be labeled T2; for the same pixel position, the pixel value of the differential image may be labeled D(x, y). The differential image is determined based on the inter-frame difference formula, the second operation result and the differential image binarization threshold, where the inter-frame difference formula is: D(x, y) = 1 if |f_k+1(x, y) - f_k(x, y)| > T2, and D(x, y) = 0 otherwise. Specifically, for the same pixel positions, if the second operation result is greater than the differential image binarization threshold, the pixel of the differential image at that pixel point is determined to be 1; otherwise, the pixel of the differential image at that pixel point is determined to be 0.
The positions of the pixel points whose pixel value is 1 in the differential image are acquired; based on the positions of these pixel points, the pixels corresponding to the training image or the pixels of the inter-frame image are taken as the pixels of the second area image to obtain the second area image. Specifically, if f_k+1(x, y) - f_k(x, y) > T2, the pixel corresponding to the training image is taken as the pixel of the second area image; if f_k(x, y) - f_k+1(x, y) > T2, the pixel corresponding to the inter-frame image is taken as the pixel of the second area image.
In the above embodiment, the second region image can be determined by the training image, the inter-frame image, and the second preset threshold. Specifically, determining a differential image through the training image, the inter-frame image and a second preset threshold value, and further determining the position of a pixel point where flame is located; and extracting pixel points at corresponding positions of the training image and the inter-frame image through the differential image, so as to obtain a second area image, and further determining a flame movement area. The method has the advantages of small operand, lower time complexity, adaptability to dynamic environment changes, less systematic error and noise influence and improvement of recognition accuracy.
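Illustratively, the inter-frame difference processing of step S102 can be sketched as below; taking the training-image pixels at all moving positions is a simplification of the sign-dependent pixel selection described above.

```python
import cv2
import numpy as np

def inter_frame_difference(frame_k, frame_k1, t2):
    """Sketch of step S102: D(x, y) = 1 where |f_k+1(x, y) - f_k(x, y)| > T2."""
    fk = cv2.cvtColor(frame_k, cv2.COLOR_BGR2GRAY).astype(np.int16)
    fk1 = cv2.cvtColor(frame_k1, cv2.COLOR_BGR2GRAY).astype(np.int16)
    d = (np.abs(fk1 - fk) > t2).astype(np.uint8)         # differential image D
    gray_k1 = cv2.cvtColor(frame_k1, cv2.COLOR_BGR2GRAY)
    second_area_image = cv2.bitwise_and(gray_k1, gray_k1, mask=d * 255)
    return d * 255, second_area_image                    # binary mask and second area image
```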
Step S103, determining a detection area image based on the first flame moving area image and the second flame moving area image.
In one embodiment, in step S101, a first flame moving area image is obtained through background difference processing based on the training image and a preset inter-frame interval; in step S102, a second flame moving area image is obtained through inter-frame difference processing based on the training image and the preset inter-frame interval. Specifically, the foregoing embodiments of step S101 and step S102 are combined: before step S101, a video is acquired and one of the video frames is selected as the training image; in step S101, a plurality of video frames are selected according to the preset inter-frame interval; one of these video frames that is close to the training image is taken as the inter-frame image of step S102. Thus, the background difference processing of step S101 and the inter-frame difference processing of step S102 share the same inter-frame interval, which facilitates unified processing.
In one embodiment, acquiring an overlapping area of a first flame moving area image and a second flame moving area image, and performing cutting processing on the first flame moving area image and the second flame moving area image corresponding to the overlapping area; and combining the cut first flame moving area image and the cut second flame moving area image to obtain a detection area image.
Through the above steps in combination with steps S101 to S103, the first flame moving area image and the second flame moving area image can be adjusted by repeatedly executing steps S101 and S102, the overlapping flame region is determined in step S103, and the images are cut and merged, so that the accuracy of the detection area image is ensured.
In one embodiment, the step S103 includes:
combining the first flame motion area image and the second flame motion area image to obtain a flame motion area image;
and determining an overlapping area image of the first flame movement area image and the second flame movement area image as the detection area image based on the flame movement area image.
Illustratively, the first flame moving region image and the second flame moving region image may be combined through OpenCV to obtain a flame moving region image; and determining a superposition area of the first flame motion image and the second flame motion image on the flame motion area image, and taking the flame motion area image corresponding to the superposition area as the detection area image.
Through the above steps, the overlapping area of the first flame moving area image obtained through background difference processing and the second flame moving area image obtained through inter-frame difference processing is cut and extracted to obtain the detection area image, which improves the accuracy of the detection area. In the actual flame detection process, background difference processing places high requirements on acquiring the background of the image, and is easily disturbed by shaking of the image acquisition device or by other moving objects. Inter-frame difference processing is suitable for flame detection with a fixed camera, and can avoid background motion and the resulting pseudo-motion pixels in the difference image. In this method, considering that the image acquisition devices in the actual application scene are fixed-position devices, the two processing methods can be combined: inter-frame difference processing reduces the errors caused by pseudo motion due to background motion, while background difference processing allows the moving area of the image to be judged simply and quickly. The flame moving area images obtained by the two different processing methods compensate for the shortcomings of either method alone, and integrating them to obtain the overlapping area improves the accuracy of image segmentation.
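A minimal sketch of step S103, assuming the two moving-region results are available as binary masks of the same size; the bounding-box cut is one possible way to extract the overlapping area.

```python
import cv2
import numpy as np

def detection_area(first_flame_region, second_flame_region, training_image):
    """Sketch of step S103: merge the two masks, keep their overlap, and cut the detection area."""
    merged = cv2.bitwise_or(first_flame_region, second_flame_region)    # flame moving area image
    overlap = cv2.bitwise_and(first_flame_region, second_flame_region)  # overlapping area image
    ys, xs = np.nonzero(overlap)
    if xs.size == 0:
        return None                                      # no common moving region found
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    return training_image[y0:y1, x0:x1]                  # detection area image
```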
Step S104, based on the flame characteristic library, performing corner detection on the detection area image to obtain first flame information.
In one embodiment, corner detection is carried out on the detection area image through a multi-feature fusion recognition algorithm to obtain feature points corresponding to the detection area image; the region image where the feature points are located is then compared with the features of the flame images in the flame feature library to obtain the first flame information. The first flame information is numerical information used to indicate the similarity between the region image where the feature points are located and the features of the flame images.
In one embodiment, prior to step S104, the method comprises: and constructing a flame characteristic library according to the flame image characteristics. The flame image features may include features in terms of flame area, flame shape, flame color, flame texture, and the like, among others. By setting different types of flame image features, the feature recognition is carried out on different types of flames, and a flame feature library is accumulated, so that the hit accuracy of feature comparison is improved, and the accuracy of the recognition of different types of flames is improved.
Illustratively, before constructing the flame feature library from the flame image features, the method includes: collecting at least 4 flame dynamic feature maps; calculating a local binary pattern (LBP, Local Binary Pattern) feature vector of the flame texture using an improved three-primary-color (RGB, Red Green Blue) spatial flame color feature formula; extracting a flame saliency map from the local binary pattern feature vector of the flame texture and the flame dynamic feature maps; and extracting flame image features from the flame saliency map. This approach has higher accuracy and stronger robustness.
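The corner detection and feature comparison of step S104 might be sketched as below; Shi-Tomasi corners and histogram correlation stand in for the multi-feature fusion recognition algorithm and the LBP/color features of the flame feature library, whose exact formulas are not given here, so every detail of this snippet is an assumption.

```python
import cv2
import numpy as np

def first_flame_information(detection_area_gray, flame_feature_library, patch=16):
    """Sketch of step S104: compare patches around detected corner points with library features."""
    corners = cv2.goodFeaturesToTrack(detection_area_gray, maxCorners=50,
                                      qualityLevel=0.01, minDistance=5)
    if corners is None:
        return 0.0
    best = 0.0
    for x, y in corners.reshape(-1, 2).astype(int):
        x0, y0 = max(0, x - patch // 2), max(0, y - patch // 2)
        roi = detection_area_gray[y0:y0 + patch, x0:x0 + patch]
        hist = cv2.calcHist([roi], [0], None, [32], [0, 256])
        hist = cv2.normalize(hist, hist).flatten().astype(np.float32)
        for ref in flame_feature_library:                # entries assumed to be 32-bin histograms
            best = max(best, cv2.compareHist(hist, ref.astype(np.float32), cv2.HISTCMP_CORREL))
    return float(best) * 100.0                           # numerical first flame information
```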
Step S105, inputting the detection area image into a back propagation neural network model to obtain second flame information.
In one embodiment, the back propagation neural network model (BP, Back-Propagation Neural Network) comprises at least an input layer and a hidden layer, where the input layer comprises at least 12 first neuron nodes, including a circularity neuron node, a color first-order moment neuron node, a color second-order moment neuron node and an area growth neuron node, and the hidden layer comprises at least 25 second neuron nodes. The back propagation neural network model further includes an output layer; thus, the detection area image is input to the input layer of the back propagation neural network model, flame identification is performed on the detection area image through the hidden layer, and the second flame information is output through the output layer. The second flame information is numerical information used to indicate the similarity of flames in the detection area image.
Through the steps, flame identification is carried out on the detection area image, the back propagation neural network occupies less memory, has a good identification effect, is convenient for carrying out pattern identification and classification on the flame, and has strong practicability and usability.
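A sketch of a back-propagation network with the 12 input nodes and 25 hidden nodes mentioned above, written in PyTorch for illustration; the activation functions, loss and optimizer are assumptions, and the extraction of the 12 input features from the detection area image is not shown.

```python
import torch
import torch.nn as nn

class FlameBPNet(nn.Module):
    """Sketch of the back propagation neural network model of step S105."""
    def __init__(self, n_inputs=12, n_hidden=25):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden),   # input layer -> hidden layer (>= 25 nodes)
            nn.Sigmoid(),
            nn.Linear(n_hidden, 1),          # hidden layer -> output layer
            nn.Sigmoid(),                    # second flame information as a score in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

# Training by error back-propagation, schematically:
# model = FlameBPNet()
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# loss = nn.functional.binary_cross_entropy(model(features), labels)
# loss.backward(); optimizer.step()
```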
And step S106, based on the first flame information and the second flame information, performing flame detection to obtain a fire result.
In one embodiment, the first flame information and the second flame information are numerical information. Step S104 and step S105 are combined, that is, the first flame information obtained by performing corner detection on the detection area image and the second flame information obtained by inputting the detection area image into the back propagation neural network model: the detection area image is processed by two different recognition algorithms, which improves the accuracy of flame recognition and thereby the accuracy of fire early warning.
For example, when the first flame information is not within a first preset value range and the second flame information is not within a second preset value range, the fire result is determined to be a fire; otherwise, the fire result is determined to be no fire. Therefore, the first flame information and the second flame information can be combined to detect flames, perform early-warning analysis of the fire result, and monitor fire information in time. Further, after determining that the fire result is no fire, the method further comprises: performing misjudgment detection based on the fire result to obtain a misjudgment result. If the misjudgment result indicates a misjudgment, the process returns to step S101. Therefore, the model in the method can be further trained and its parameters adjusted, improving the accuracy and stability of the fire result.
In one embodiment, the step S106 includes:
processing the first flame information according to a first preset weight to obtain a first identification result;
processing the second flame information according to a second preset weight to obtain a second identification result;
determining a third recognition result based on the first recognition result and the second recognition result;
if the third identification result is larger than a preset threshold value, determining that the fire result is fire;
wherein the sum of the first preset weight and the second preset weight is 1.
In the above steps, the first preset weight and the second preset weight can be adjusted according to different scenes, so that the first recognition result and the second recognition result can be adjusted; the method can thereby be applied to different application scenes while the accuracy of the recognition results is ensured. The flame condition is detected based on the third recognition result and the fire result is determined, so that the fire can be judged and monitored in time.
Illustratively, the first preset weight and the second preset weight are zero or positive real numbers.
Illustratively, the first preset weight is 40%, the second preset weight is 60%, and the preset threshold is 80. Specifically, the first flame information is weighted at 40% to obtain the first recognition result, and the second flame information is weighted at 60% to obtain the second recognition result. The first recognition result and the second recognition result are added to obtain the third recognition result. If the third recognition result is greater than 80, the fire result is confirmed to be a fire. In the above embodiment, the first preset weight, the second preset weight and the preset threshold may be obtained by adjustment and fitting according to the specific application scene, so as to adapt to the application scene, improve the accuracy of flame recognition and early-warning detection, and monitor the fire in time.
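The weighted fusion of step S106 can be written as a short sketch; the 40%/60% weights and the threshold of 80 follow the illustrative values above, and the helper name is hypothetical.

```python
def fire_decision(first_flame_info, second_flame_info, w1=0.4, w2=0.6, threshold=80.0):
    """Sketch of step S106: weighted fusion of the two flame scores (w1 + w2 = 1)."""
    first_result = w1 * first_flame_info      # first recognition result
    second_result = w2 * second_flame_info    # second recognition result
    third_result = first_result + second_result
    return "fire" if third_result > threshold else "no fire"

# fire_decision(90, 85)  # 0.4*90 + 0.6*85 = 87 > 80 -> "fire"
```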
In one embodiment, after determining that the fire result is a fire, the method further comprises: and outputting fire early warning information. The fire early warning information may be text information, picture information or audio information, and specifically, the fire early warning information may be output through an application interface. The fire early warning information may be audio information, and may be played through a broadcasting device. Therefore, the fire is monitored in time, and the fire early warning information is timely broadcast, so that the occurrence of fire is reduced.
In the above steps, a first flame movement region image is obtained through background difference processing based on a training image; based on the training image, a second flame movement region image is obtained through inter-frame difference processing; and a detection area image is determined based on the first flame movement area image and the second flame movement area image, thereby further ensuring the accuracy of the detection area. Based on the flame characteristic library, corner detection is carried out on the detection area image to obtain first flame information; the detection area image is input into a back propagation neural network model to obtain second flame information; and flame detection is performed based on the first flame information and the second flame information to obtain a fire result. The first flame information and the second flame information are used for automatically detecting and identifying flames, so that the identification effect of flame detection is improved and a fire result is obtained in time; the method has strong usability and practicability.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not limit the implementation process of the embodiments of the present application in any way.
Corresponding to the method of the above embodiments, fig. 6 shows a block diagram of the structure of the flame detection recognition device provided in the embodiment of the present application, and for convenience of explanation, only the portion relevant to the embodiment of the present application is shown. The flame detection recognition device illustrated in fig. 6 may be an execution subject of the flame detection recognition method provided in the first embodiment.
Referring to fig. 6, the flame detection recognition device 60 includes:
a first processing module 61, configured to obtain a first flame movement region through background differential processing based on the training image;
a second processing module 62, configured to obtain a second flame movement region through inter-frame differential processing based on the training image;
a determining module 63 for determining a detection area image based on the first flame movement area image and the second flame movement area image;
the third processing module 64 is configured to perform corner detection on the detection area image based on a flame feature library, so as to obtain first flame information;
A fourth processing module 65, configured to input the detection area image into a back propagation neural network model, to obtain second flame information;
the detecting module 66 is configured to perform flame detection based on the first flame information and the second flame information, so as to obtain a fire result.
The process of implementing respective functions by each module in the flame detection identification device provided in the embodiment of the present application may refer to the description of the first embodiment shown in fig. 1, and will not be repeated here.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted as "when..once" or "in response to a determination" or "in response to detection" depending on the context. Similarly, the phrase "if a determination" or "if a [ described condition or event ] is detected" may be interpreted in the context of meaning "upon determination" or "in response to determination" or "upon detection of a [ described condition or event ]" or "in response to detection of a [ described condition or event ]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance. It will also be understood that, although the terms "first," "second," etc. may be used in this document to describe various elements in some embodiments of the present application, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first table may be named a second table, and similarly, a second table may be named a first table without departing from the scope of the various described embodiments. The first table and the second table are both tables, but they are not the same table.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 7, the terminal device 70 of this embodiment includes: at least one processor 71 (only one is shown in fig. 7), a memory 72, said memory 72 having stored therein a computer program 73 executable on said processor 71. The processor 71, when executing the computer program 73, implements the steps of the above-described respective flame detection identification method embodiments, such as steps 101 to 106 shown in fig. 1. Alternatively, the processor 71 may perform the functions of the modules/units of the apparatus embodiments described above, such as the functions of the modules 61 to 66 shown in fig. 6, when executing the computer program 73.
The terminal device 70 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or the like. The terminal device may include, but is not limited to, a processor 71, a memory 72. It will be appreciated by those skilled in the art that fig. 7 is merely an example of a terminal device 70 and is not limiting of the terminal device 70, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the terminal device may also include an input transmitting device, a network access device, a bus, etc.
The processor 71 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 72 may in some embodiments be an internal storage unit of the terminal device 70, such as a hard disk or a memory of the terminal device 70. The memory 72 may also be an external storage device of the terminal device 70, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 70. Further, the memory 72 may also include both an internal storage unit and an external storage device of the terminal device 70. The memory 72 is used to store an operating system, application programs, boot Loader (Boot Loader), data, and other programs, such as program code for the computer program. The memory 72 may also be used to temporarily store data that has been transmitted or is to be transmitted.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The embodiment of the application also provides a terminal device, which comprises at least one memory, at least one processor and a computer program stored in the at least one memory and capable of running on the at least one processor, wherein the processor executes the computer program to enable the terminal device to realize the steps in any of the method embodiments.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps that may implement the various method embodiments described above.
The present embodiments provide a computer program product which, when run on a terminal device, causes the terminal device to perform steps that enable the respective method embodiments described above to be implemented.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the computer program may implement the steps of each method embodiment described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of protection of the present application.
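Purely as an illustrative aid, and not as part of the disclosed or claimed subject matter, the motion-region stage recited in claims 1 and 6 below can be pictured with a minimal Python sketch: a background-difference mask and an inter-frame-difference mask are computed for each frame, and only their overlap is kept as the detection area image. OpenCV, the MOG2 background model, the threshold values, the file path and the helper names are assumptions of this sketch, not details given in the application.

```python
import cv2
import numpy as np

# Assumed background model; the application does not name a specific
# background-difference algorithm, so MOG2 is used only for illustration.
back_sub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def detection_area(prev_gray: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Return a binary mask of the candidate flame detection area (hypothetical helper)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # First flame movement region image: background difference processing.
    bg_mask = back_sub.apply(frame)
    _, bg_mask = cv2.threshold(bg_mask, 127, 255, cv2.THRESH_BINARY)

    # Second flame movement region image: inter-frame difference processing.
    diff = cv2.absdiff(gray, prev_gray)
    _, fd_mask = cv2.threshold(diff, 15, 255, cv2.THRESH_BINARY)

    # Detection area image: keep only the overlap of the two movement regions.
    return cv2.bitwise_and(bg_mask, fd_mask)

# Illustrative use on a video stream (the file path is hypothetical).
cap = cv2.VideoCapture("samples/fire_clip.mp4")
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read the sample clip")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = detection_area(prev_gray, frame)
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cap.release()
```

Taking the intersection rather than the union of the two masks trades recall for precision: only pixels that both the background model and the inter-frame difference flag as moving survive into the detection area image, which matches the overlapping-area wording of claim 6.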

Claims (10)

1. A flame detection and identification method, comprising:
based on the training image, obtaining a first flame movement region image through background difference processing;
based on the training image, obtaining a second flame movement region image through inter-frame difference processing;
determining a detection area image based on the first flame movement region image and the second flame movement region image;
based on a flame characteristic library, performing corner detection on the detection area image to obtain first flame information;
inputting the detection area image into a back propagation neural network model to obtain second flame information;
and performing flame detection based on the first flame information and the second flame information to obtain a fire result.
2. The flame detection and identification method according to claim 1, wherein, before the first flame movement region image is obtained through background difference processing based on the training image, the method further comprises:
acquiring an image set to be detected, and dividing the image set to be detected into a training image set and a test image set according to a preset proportion;
wherein the set of images to be detected comprises at least 5000 original images.
3. The flame detection and identification method according to claim 2, wherein, before the first flame movement region image is obtained through background difference processing based on the training image, the method further comprises:
and preprocessing the original images in the training image set to obtain the training image.
4. The flame detection and identification method according to claim 1, wherein, before the first flame movement region image is obtained through background difference processing based on the training image, the method further comprises:
and carrying out illumination equalization processing on the original image to obtain the training image.
5. The flame detection and identification method according to claim 1, wherein, before the first flame movement region image is obtained through background difference processing based on the training image, the method further comprises:
carrying out illumination equalization processing on the original image to obtain a first training image;
and preprocessing the first training image to obtain the training image.
6. The flame detection and identification method according to claim 1, wherein the determining a detection area image based on the first flame movement region image and the second flame movement region image comprises:
combining the first flame movement region image and the second flame movement region image to obtain a flame movement region image;
and determining, based on the flame movement region image, the overlapping area image of the first flame movement region image and the second flame movement region image as the detection area image.
7. The flame detection and identification method according to claim 1, wherein the performing flame detection based on the first flame information and the second flame information to obtain a fire result comprises:
processing the first flame information according to a first preset weight to obtain a first identification result;
processing the second flame information according to a second preset weight to obtain a second identification result;
determining a third identification result based on the first identification result and the second identification result;
if the third identification result is greater than a preset threshold value, determining that the fire result is a fire;
wherein the sum of the first preset weight and the second preset weight is 1.
8. A flame detection and identification device, comprising:
the first processing module is used for obtaining a first flame movement region image through background difference processing based on the training image;
the second processing module is used for obtaining a second flame movement region image through inter-frame difference processing based on the training image;
the determining module is used for determining a detection area image based on the first flame movement region image and the second flame movement region image;
the third processing module is used for carrying out corner detection on the detection area image based on the flame characteristic library to obtain first flame information;
the fourth processing module is used for inputting the detection area image into a back propagation neural network model to obtain second flame information;
and the detection module is used for performing flame detection based on the first flame information and the second flame information to obtain a fire result.
9. A terminal device, characterized by comprising a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
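For orientation only, the decision step of claims 1 and 7 can be sketched as below, assuming that Shi-Tomasi corner detection (OpenCV's goodFeaturesToTrack) stands in for the corner detection against the flame characteristic library, which the application does not disclose, and that the back-propagation network's flame probability is supplied by the caller. The weights, the threshold and all function names are illustrative assumptions, with the two preset weights summing to 1 as claim 7 requires.

```python
import cv2
import numpy as np

def first_flame_information(detection_area: np.ndarray, max_corners: int = 50) -> float:
    """Corner-based score in [0, 1] for the detection area image.

    Shi-Tomasi corners are an illustrative stand-in for matching against the
    flame characteristic library; the input must be a single-channel 8-bit mask.
    """
    corners = cv2.goodFeaturesToTrack(detection_area, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=5)
    count = 0 if corners is None else len(corners)
    return min(count / max_corners, 1.0)

def fire_result(first_info: float, second_info: float,
                w1: float = 0.4, w2: float = 0.6, threshold: float = 0.5) -> bool:
    """Weight the two pieces of flame information and compare with a preset threshold."""
    assert abs(w1 + w2 - 1.0) < 1e-6, "the two preset weights must sum to 1"
    third_result = w1 * first_info + w2 * second_info
    return third_result > threshold

# second_info would be the flame probability output by the back-propagation
# neural network for the same detection area image, e.g.:
#     fire = fire_result(first_flame_information(mask), bp_net_probability)
```

Because the fused score is a convex combination of the two sources, a single noisy source (spurious corners from moving lights, or an over-confident network output) is damped by the other before the threshold comparison.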
CN202211664661.5A 2022-12-23 2022-12-23 Flame detection identification method and device and terminal equipment Pending CN116189037A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211664661.5A CN116189037A (en) 2022-12-23 2022-12-23 Flame detection identification method and device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211664661.5A CN116189037A (en) 2022-12-23 2022-12-23 Flame detection identification method and device and terminal equipment

Publications (1)

Publication Number Publication Date
CN116189037A true CN116189037A (en) 2023-05-30

Family

ID=86439397

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211664661.5A Pending CN116189037A (en) 2022-12-23 2022-12-23 Flame detection identification method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN116189037A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116977634A (en) * 2023-07-17 2023-10-31 应急管理部沈阳消防研究所 Fire smoke detection method based on laser radar point cloud background subtraction
CN116977634B (en) * 2023-07-17 2024-01-23 应急管理部沈阳消防研究所 Fire smoke detection method based on laser radar point cloud background subtraction

Similar Documents

Publication Publication Date Title
CN108615226B (en) Image defogging method based on generation type countermeasure network
Pan et al. Exposing image splicing with inconsistent local noise variances
Jia et al. A two-step approach to see-through bad weather for surveillance video quality enhancement
CN112308095A (en) Picture preprocessing and model training method and device, server and storage medium
Li et al. A multi-scale fusion scheme based on haze-relevant features for single image dehazing
CN109753878B (en) Imaging identification method and system under severe weather
Liu et al. Digital image forgery detection using JPEG features and local noise discrepancies
WO2007076890A1 (en) Segmentation of video sequences
Vosters et al. Background subtraction under sudden illumination changes
AU2011265429A1 (en) Method and system for robust scene modelling in an image sequence
Liu et al. Survey of natural image enhancement techniques: Classification, evaluation, challenges, and perspectives
Mai et al. Back propagation neural network dehazing
Zhang et al. Distinguishing photographic images and photorealistic computer graphics using visual vocabulary on local image edges
Wang et al. Coarse-to-fine-grained method for image splicing region detection
CN115880177A (en) Full-resolution low-illumination image enhancement method for aggregating context and enhancing details
CN116189037A (en) Flame detection identification method and device and terminal equipment
Fang et al. Image quality assessment on image haze removal
KR101215666B1 (en) Method, system and computer program product for object color correction
Moghimi et al. Shadow detection based on combinations of HSV color space and orthogonal transformation in surveillance videos
CN112560734B (en) Deep learning-based reacquired video detection method, system, equipment and medium
Li et al. Distinguishing computer graphics from photographic images using a multiresolution approach based on local binary patterns
CN110728692A (en) Image edge detection method based on Scharr operator improvement
Kocdemir et al. TMO-Det: Deep tone-mapping optimized with and for object detection
Ince et al. Fast video fire detection using luminous smoke and textured flame features
Kumar et al. Haze elimination model-based color saturation adjustment with contrast correction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination