CN111310662B - Flame detection and identification method and system based on integrated deep network


Info

Publication number
CN111310662B
Authority
CN
China
Prior art keywords
flame
video frame
image
target
binary image
Prior art date
Legal status
Active
Application number
CN202010095917.XA
Other languages
Chinese (zh)
Other versions
CN111310662A (en)
Inventor
高尚兵
陈浩霖
相林
黄子赫
蔡创新
李文婷
汪长春
李凤
丁海林
袁涛
邱千禧
乔德金
Current Assignee
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Huaiyin Institute of Technology
Priority to CN202010095917.XA
Publication of CN111310662A
Application granted
Publication of CN111310662B
Legal status: Active

Classifications

    • G06V20/52 — Image/video recognition: scenes; surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06F18/241 — Pattern recognition: classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N3/045 — Neural networks: combinations of networks
    • G06N3/08 — Neural networks: learning methods
    • G06V10/267 — Image preprocessing: segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Fire-Detection Mechanisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a flame detection and identification method and system based on an integrated deep network, suitable for detecting and identifying flame in video in real time. The method trains U-net and ResNet-50 networks on public flame video data to obtain models for detecting and identifying flame regions in video frame images. A video frame image captured by a camera is first passed through the U-net model to obtain flame candidate regions, and a neighborhood segmentation method is applied to those regions to obtain flame candidate targets. The ResNet-50 model then further identifies each candidate target, and candidates are filtered by recognition confidence to obtain the flame targets. Finally, the position of each flame target is recorded and marked in the original video frame, and when a flame target is continuously monitored at the same position in the video stream, a flame alarm is issued. The method detects and identifies flames and their positions in video with a recognition rate of 91.08% and a detection and identification speed of 25-31 frames/s, and shows good robustness to lamplight and small targets.

Description

Flame detection and identification method and system based on integrated deep network
Technical Field
The invention relates to the technical field of image processing and fire prevention, and in particular to a flame detection and identification method and system based on an integrated deep network.
Background
Flame detection and identification methods are various. The traditional sensor-based fire detection method has a limited detection range, provides only a single kind of information, and suffers from a large detection delay. With continuing breakthroughs in the theory and technology of computing and image processing, researchers have combined machine-learning algorithms with image processing to detect flames. Such methods mainly identify flames by static characteristics such as color attributes and dynamic characteristics such as circularity, sharp corners, contour change, and flicker; however, hand-crafted feature-extraction algorithms are difficult and time-consuming to design from prior knowledge, and their generalization ability is often insufficient in the face of different complex environments and variable flame types. The superpixel-based flame detection and identification method proposed by Andrew J. Dunnings and Toby P. Breckon exists, but it cannot perform detection and identification in real time.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problems that traditional flame detection and identification is easily interfered with and has a high false-detection rate, and that deep neural networks suffer from poor real-time performance, the invention provides a flame detection and identification method and system based on an integrated deep network, which improve the accuracy of flame detection and identification and solve the real-time detection problem by fusing a U-net neural network and a ResNet-50 neural network.
The technical scheme is as follows: in order to achieve the purpose, the invention adopts the following technical scheme:
a flame detection and identification method based on an integrated deep network comprises the following steps:
(1) acquiring a video frame image, and preprocessing and normalizing it;
(2) detecting the normalized video frame image with the trained U-net model to obtain the binary image of flame candidate regions corresponding to the video frame image;
(3) clustering and segmenting the flame-candidate-region binary image with a neighborhood segmentation method according to a distance rule to obtain the binary images of flame candidate targets in the video frame;
(4) finding the flame candidate target images corresponding to the flame-candidate-target binary images in the video frame, further identifying them with the trained ResNet-50 model, removing non-flame targets to obtain the flame targets, marking the flame targets in the original video frame, and storing their positions;
(5) when the number of consecutive frames in which the center position of a flame target stays in the same area of the video stream reaches a set threshold, issuing a flame alarm.
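For orientation, steps (1)-(5) amount to the following per-frame loop. This is a minimal illustrative Python sketch, not the patent's code; `preprocess_and_normalize`, `detect_candidates`, `neighborhood_segment`, `recognize_and_mark` and `should_alarm` are hypothetical helper names whose bodies are sketched step by step in the detailed description below.

```python
import cv2

def monitor(video_path, unet_model, resnet_model):
    """Per-frame flame detection loop over a video stream (illustrative only)."""
    f1_history = []                         # flame target positions, frame by frame
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        img = preprocess_and_normalize(frame)             # step (1)
        mask = detect_candidates(unet_model, img)         # step (2): U-net binary image
        candidates = neighborhood_segment(mask)           # step (3)
        flames, marked = recognize_and_mark(img, candidates, resnet_model)  # step (4)
        if flames:
            f1_history.append(flames[0])
            if should_alarm(f1_history):                  # step (5)
                print("flame alarm")
    cap.release()
```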
Further, a sample data set for training the U-net model and the Resnet-50 model is obtained by processing the existing public flame video data, and specifically comprises the following steps:
extracting a video frame set from the videos by a frame-taking method for the public flame videos, and marking the flame positions in the video frame set with an image annotation tool to construct a label data set;
setting the flame position to 1 and all other parts to 0 in the binary image corresponding to each video frame image in the training data set, according to the label data set, to form a U-net binary-image label data set, and finally constructing the U-net data set from the video frame set and the binary-image label data set;
and reducing all video frame images in the video frame set by a certain proportion, setting the supervision values corresponding to flame images to the positive supervision value and those of the other images to the negative supervision value to construct a ResNet-50 label data set, and constructing the ResNet-50 data set from the video frame set and the ResNet-50 label data set.
Further, the preprocessing in step (1) includes performing fisheye correction on the video frame image by extracting camera intrinsic parameters including a tangential error, a radial error and an optical center error.
Further, in step (2) the size-normalized video frame image is input into the trained U-net model and detected to obtain the flame-candidate-region binary image corresponding to the video frame; in the binary image the pixel value of flame is set to 1 and the pixel value of non-flame regions is set to 0.
Further, the step (3) includes the steps of:
(31) normalizing the size of the binary image of the flame candidate region;
(32) storing coordinate positions of all nonzero values in the normalized flame candidate region binary image in the image;
(33) classifying the stored coordinate positions so that any two classes of coordinates satisfy the following formula:

$$\sqrt{(Aa_{i1}-Ab_{j1})^2+(Aa_{i2}-Ab_{j2})^2}>l_0,\qquad \forall\,a,b\in A,\ a\neq b,\ i=1,\dots,n,\ j=1,\dots,m$$

wherein A represents the set of all classes, Ab_{j1} represents the x-axis coordinate of the jth element in the class-b coordinates, Ab_{j2} the y-axis coordinate of the jth element in the class-b coordinates, Aa_{i1} the x-axis coordinate of the ith element in the class-a coordinates, Aa_{i2} the y-axis coordinate of the ith element in the class-a coordinates, l_0 a preset threshold, n the number of elements in Aa, and m the number of elements in Ab;
(34) taking the maximum and minimum values in each class of coordinates as indexes, and extracting the sub-images of the flame-candidate-region binary image;
(35) returning all sub-images, where each sub-image represents one flame-candidate-target binary image.
Further, the step (4) comprises the following steps:
(41) finding the flame candidate target images corresponding to the flame-candidate-target binary images in the video frame;
(42) normalizing all flame candidate targets to a uniform size;
(43) feeding the flame candidate targets into the trained ResNet-50 model for recognition, and removing non-flame targets according to confidence to obtain the flame targets;
(44) returning the positions of all flame targets;
(45) marking the positions of the returned flame regions in the video frame to realize the visualization of flame tracking.
Based on the same inventive concept, the invention provides a flame detection and identification system based on an integrated deep network, comprising:
an image preprocessing module: used for reading the video frame image and preprocessing and normalizing it;
a flame detection module: used for detecting the normalized video frame image with the trained U-net model to obtain the flame-candidate-region binary image;
a neighborhood segmentation module: used for clustering and segmenting the flame-candidate-region binary image with the neighborhood segmentation method according to the distance rule to obtain the flame-candidate-target binary images in the video frame, and then returning the flame candidate target images corresponding to those binary images in the original video frame;
a flame identification module: used for further identifying the flame candidate target images obtained by the neighborhood segmentation module with the trained ResNet-50 model, removing non-flame targets to obtain the flame targets, and storing the flame target positions;
a flame region visualization module: used for marking the corresponding flame targets in the original video image according to the flame target positions stored by the flame identification module, realizing the visualization of flame tracking;
and a flame alarm module: used for continuously monitoring the video and issuing a flame alarm to prompt the user when the number of frames in which the flame target's center position continuously appears in the same area reaches a preset threshold.
Based on the same inventive concept, the flame detection and identification system based on the integrated deep network comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the computer program realizes the flame detection and identification method based on the integrated deep network when being loaded to the processor.
Beneficial effects: compared with the prior art, the invention offers the following advantages: 1. The U-net model architecture effectively alleviates the problems of difficult feature extraction and slow recognition. 2. Fusing the U-net network with the ResNet-50 network for flame identification overcomes susceptibility to interference and improves both the speed and the accuracy of flame identification. 3. The algorithm's flame detection and identification speed stays at 25-31 frames/s, and the recognition rate reaches 91.08%.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a U-net detection diagram;
FIG. 3 is a neighborhood segmentation graph;
FIG. 4 is a diagram of the visual effect of flame detection identification positioning.
Detailed Description
The invention will be further described with reference to the accompanying drawings. The variables involved in this embodiment are described in Table 1.
Table 1 description of variables
The video data used in this embodiment of the invention come from the flame video database published by the MIVIA laboratory; the videos include flames of different colors, flames of different shapes, small-target flames, special flames, and lights whose color is close to that of flame. The model training process for the U-net model M1 and the ResNet-50 model M2 is described first.
After the flame videos published by the MIVIA laboratory are preprocessed, the U-net network and the ResNet-50 network are trained to obtain the U-net model M1 and the ResNet-50 model M2, specifically as follows:
1) A training data set P1 is constructed by cutting one frame from every thirty frames of the flame videos published by the MIVIA laboratory using a frame-taking method, where P1 = {Frame_1, Frame_2, …, Frame_N} and Frame_N is the Nth intercepted video frame. Each video frame is normalized to 448 × 448 pixels. A label data set L1 is then constructed for the training data set by marking the flame positions with the labelImg image annotation tool, where L1 = {Label_1, Label_2, …, Label_N}, Label_N is the position of the flame in Frame_N, and each Label is (x_1, y_1, x_2, y_2), with (x_1, y_1) the position of the upper-left corner of the flame and (x_2, y_2) the position of the lower-right corner.
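As a concrete illustration of this frame-taking step, a minimal Python/OpenCV sketch follows; the file names, output layout and function name are assumptions for illustration, and the labelImg annotation itself is interactive and not shown.

```python
import cv2
import os

def extract_frames(video_path, out_dir, step=30, size=(448, 448)):
    """Cut one frame out of every `step` frames and normalize it to `size`."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"Frame{saved + 1}.jpg"),
                        cv2.resize(frame, size))
            saved += 1
        idx += 1
    cap.release()
    return saved

# e.g. extract_frames("mivia_flame_01.avi", "frames/")   # hypothetical file names
```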
2) Taking P1 as the training set of U-net: according to the label data set L1, the flame position is set to 1 and all other parts to 0 in the binary image corresponding to each video frame image in the training data set P1, forming the U-net label data set L2, in which each binary image is 448 × 448. Finally, the U-net data set Data1 is constructed from P1 and L2.
3) Reducing all video frame images in P1 to 64 × 64 forms P3, in which each image has dimension (64, 64, 3). The supervision value corresponding to a flame picture in P3 is set to [0, 1] and that of any other picture to [1, 0], constructing the ResNet-50 label data set L3, in which each supervision value has dimension 1 × 2. P3 and L3 constitute the ResNet-50 data set Data2.
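A minimal sketch of how the two label sets could be materialised from the (x_1, y_1, x_2, y_2) boxes of L1; the function names are illustrative, not from the patent, and the shapes follow the dimensions stated above.

```python
import cv2
import numpy as np

def make_unet_mask(labels, size=448):
    """Binary label image for L2: flame boxes set to 1, everything else 0."""
    mask = np.zeros((size, size), dtype=np.uint8)
    for (x1, y1, x2, y2) in labels:                  # one box per flame
        mask[y1:y2, x1:x2] = 1
    return mask

def make_resnet_sample(frame, is_flame):
    """64x64 picture plus 1x2 supervision value for L3."""
    img = cv2.resize(frame, (64, 64))                # dimension (64, 64, 3)
    label = np.array([0, 1] if is_flame else [1, 0]) # flame -> [0, 1]
    return img, label
```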
4) The pre-trained weights are set to random values, the input dimensions of the U-net network are set to (448, 448, 3), and the input dimensions of the ResNet-50 network are set to (64, 64, 3).
5) Setting the U-net network parameters: the Adam gradient-descent method is used, the learning rate is set to 1 × 10⁻⁴, the number of downsampling steps is set to 4, and the loss function is the cross-entropy function.
6) Setting the ResNet-50 network parameters: the Adam gradient-descent method is used, the learning rate is set to 1 × 10⁻⁴, the number of residual blocks is set to 4, and the loss function is the cross-entropy function.
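In Keras-style code, the settings of steps 4)-6) amount to roughly the following sketch. The stock Keras ResNet50 stands in for the patent's 4-residual-block network, and `build_unet` is a hypothetical constructor for a U-net with four downsampling steps (not a library function); only the input shapes, optimizer, learning rate and loss come from the text.

```python
import tensorflow as tf
from tensorflow.keras.applications import ResNet50

# ResNet-50 branch: the stock Keras architecture stands in for the patent's
# network; weights=None gives the random initial weights of step 4).
resnet = ResNet50(weights=None, input_shape=(64, 64, 3), classes=2)
resnet.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
               loss="categorical_crossentropy")      # cross-entropy, step 6)

# U-net branch: build_unet is a hypothetical constructor for a U-net with
# four downsampling steps; it is not a library function.
unet = build_unet(input_shape=(448, 448, 3), down_steps=4)
unet.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
             loss="binary_crossentropy")             # cross-entropy, step 5)
```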
7) The model M1 is obtained by training the U-net network with P1 in Data1 as the input values and L2 as the supervision values.
8) The model M2 is obtained by training the ResNet-50 network with P3 in Data2 as the input values and L3 as the supervision values.
As shown in fig. 1, a flame detection and identification method based on an integrated deep network disclosed in an embodiment of the present invention includes the following steps:
(1) The video frame image IMG is acquired and preprocessed to obtain IMG0, and the size of IMG0 is normalized to obtain the image IMG1. The steps are as follows:
(11) extracting a video frame image from the video;
(12) extracting the camera's internal parameters: tangential error, radial error and optical-center error;
(13) performing fisheye correction on the video frame image, which preserves the image's clarity and accuracy while facilitating the subsequent operations;
(14) normalizing the size of the preprocessed IMG0 to obtain the image IMG1. In this example the size is normalized to 448 × 448 pixels to serve as the input of the model M1 in the next step.
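A hedged OpenCV sketch of (12)-(13), assuming the intrinsic matrix and distortion coefficients have already been obtained by camera calibration; the numeric values below are placeholders for illustration, not the patent's.

```python
import cv2
import numpy as np

def undistort_frame(frame, camera_matrix, dist_coeffs):
    """Correct radial/tangential lens distortion using camera intrinsics."""
    h, w = frame.shape[:2]
    # Refine the camera matrix so the corrected image keeps only valid pixels.
    new_cam, _ = cv2.getOptimalNewCameraMatrix(camera_matrix, dist_coeffs,
                                               (w, h), 0)
    return cv2.undistort(frame, camera_matrix, dist_coeffs, None, new_cam)

# Placeholder intrinsics: focal lengths, optical centre (cx, cy) and the
# radial (k1, k2, k3) / tangential (p1, p2) distortion coefficients.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
D = np.array([-0.3, 0.1, 0.001, 0.001, 0.0])
```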
(2) The image IMG1 is detected with the model M1 to obtain the flame-candidate-region binary image IMG2 corresponding to the video frame image. The steps are as follows:
(21) The normalized-size image is passed into the model M1.
(22) The pixel values of flame regions are set to 1 and those of non-flame regions to 0, and the 448 × 448 flame-candidate-region binary image IMG2 is output.
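In sketch form, assuming the trained M1 outputs a per-pixel flame probability map that is binarized at 0.5 (the 0.5 threshold is an assumption; the text only states that flame pixels become 1 and the rest 0):

```python
import numpy as np

def detect_candidates(unet_model, img448):
    """Run M1 on a normalized 448x448 frame and return the 0/1 candidate mask."""
    prob = unet_model.predict(img448[np.newaxis] / 255.0)[0, ..., 0]
    return (prob > 0.5).astype(np.uint8)      # flame pixels -> 1, others -> 0
```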
(3) The binary image IMG2 is segmented with the neighborhood segmentation method: IMG2 is clustered and segmented according to the distance rule to obtain the flame-candidate-target binary images IMG3 in the video frame. The steps are as follows:
(31) The size of the binary image IMG2 is reduced by a factor of 14, from 448 × 448 to 32 × 32.
(32) The coordinate positions of all non-zero values in the scaled binary image IMG2 are saved in a matrix B, where each element of B represents one piece of coordinate information.
(33) The first-row element of matrix B is taken out and added to the temporary matrix C.
(34) Traversing the elements of B, every element of B that satisfies the following formula is taken out of B and added to matrix C:

$$\min_{1\le j\le m}\sqrt{(B_{i1}-C_{j1})^2+(B_{i2}-C_{j2})^2}\le l_0$$

where C_{j1} is the x-axis coordinate of the jth element of matrix C, C_{j2} the y-axis coordinate of the jth element of matrix C, B_{i1} the x-axis coordinate of the ith element of matrix B, B_{i2} the y-axis coordinate of the ith element of matrix B, l_0 a preset threshold set in this method, n the number of elements in B, and m the number of elements in C.
(35) Repeating (34) until no data in B satisfies the above formula.
(36) Each value in matrix C is amplified by the reduction factor of step (31), and the maximum and minimum values of the elements of C on the x-axis and y-axis coordinates are found: the minimum of the elements of C on the x-axis coordinate is min(C_{j1}), the maximum on the x-axis coordinate is max(C_{j1}), the minimum on the y-axis coordinate is min(C_{j2}), and the maximum on the y-axis coordinate is max(C_{j2}).
(37) These extremes of C_{j1} and C_{j2} in the amplified C are used as indexes into IMG2: the sub-image of IMG2 from column min(C_{j1}) to column max(C_{j1}) and from row min(C_{j2}) to row max(C_{j2}) is added to the flame-candidate-target binary image array output, the array having been initialized to empty.
(38) Steps (33)-(37) are repeated until no element remains in B.
(39) All image elements in output are returned, where each image element represents one flame-candidate-target binary image IMG3.
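Steps (31)-(39) amount to a single-link clustering of the mask's non-zero pixels. The following is a self-contained NumPy sketch under stated assumptions: the subsampling by array slicing is an approximation of the 14× reduction, and since the concrete value of l_0 is not legible in the published text it is left as a parameter.

```python
import numpy as np

def neighborhood_segment(mask448, l0=2.0, scale=14):
    """Steps (31)-(39): cluster non-zero mask pixels into candidate boxes.

    Returns a list of (sub_image, (x_min, y_min, x_max, y_max)) pairs cropped
    from the full-resolution mask, one per cluster of nearby pixels.
    """
    small = mask448[::scale, ::scale]                        # (31) shrink 448 -> 32
    remaining = [tuple(p) for p in np.argwhere(small > 0)]   # (32) matrix B
    output = []
    while remaining:                                 # (38) until B is empty
        cluster = [remaining.pop(0)]                 # (33) seed matrix C
        grew = True
        while grew:                                  # (34)-(35) grow by distance
            grew = False
            for p in remaining[:]:
                if any(np.hypot(p[0] - q[0], p[1] - q[1]) <= l0 for q in cluster):
                    cluster.append(p)
                    remaining.remove(p)
                    grew = True
        pts = np.array(cluster) * scale              # (36) re-amplify coordinates
        y0, x0 = pts.min(axis=0)
        y1, x1 = pts.max(axis=0)
        # (37) crop the corresponding sub-image of the full-resolution mask
        output.append((mask448[y0:y1 + 1, x0:x1 + 1], (x0, y0, x1, y1)))
    return output                                    # (39)
```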
(4) The flame candidate target IMG4 corresponding to the flame-candidate-target binary image IMG3 is found in the preprocessed and normalized video frame IMG1; IMG4 is further identified with the model M2, non-flame targets are eliminated, the flame targets are marked in the original video frame, and the flame target positions are stored. The steps are as follows:
(41) The flame candidate target IMG4 corresponding to the flame-candidate-target binary image IMG3 is found in the preprocessed and normalized video frame IMG1.
(42) All IMG4 images are normalized to 64 × 64.
(43) The normalized images are fed into the ResNet-50 model M2 for recognition.
(44) The positions of targets recognized as flame are added to the array F1, which stores the flame positions.
(45) According to F1, the flame target positions are marked in the preprocessed and normalized video frame IMG1 to obtain the flame-identification effect map IMG5, realizing the visualization of flame tracking.
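Steps (41)-(45) in sketch form, assuming M2 outputs [non-flame, flame] probabilities matching the supervision values defined above, with an assumed confidence cut-off of 0.5 (the cut-off value is not stated in the text):

```python
import cv2
import numpy as np

def recognize_and_mark(frame448, candidates, resnet_model, conf=0.5):
    """Steps (41)-(45): classify candidate boxes with M2 and draw the flames."""
    f1 = []                                          # array F1 of flame positions
    for _, (x0, y0, x1, y1) in candidates:
        patch = cv2.resize(frame448[y0:y1 + 1, x0:x1 + 1], (64, 64))  # (41)-(42)
        prob = resnet_model.predict(patch[np.newaxis] / 255.0)[0]     # (43)
        if prob[1] > conf:                           # index 1 == flame, i.e. [0, 1]
            f1.append((x0, y0, x1, y1))              # (44)
            cv2.rectangle(frame448, (int(x0), int(y0)), (int(x1), int(y1)),
                          (0, 0, 255), 2)            # (45) mark on IMG1
    return f1, frame448                              # F1 and effect map IMG5
```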
(5) When the number of frames in which the flame target continuously appears at the same position in the video stream reaches the set threshold, a flame alarm is issued. In this example, if the center positions recorded in F1 stay within the same range for 10 consecutive frames, a flame alarm is issued; otherwise the judgment continues. The steps are as follows:
(51) Whether the target is flame is judged according to the following formulas:

$$x_k=\frac{F1_{k1}+F1_{k3}}{2},\qquad y_k=\frac{F1_{k2}+F1_{k4}}{2}$$

$$\max(x)-\min(x)<l_x,\qquad \max(y)-\min(y)<l_y$$

where x_k is the x value of the midpoint coordinate of the flame target in the kth frame, y_k the y value of the midpoint coordinate of the flame target in the kth frame, and k the frame number, k = 1, 2, …, 9, 10; F1_k is the flame target position of the kth frame in F1, F1_{k1} the upper-left-corner x coordinate of the flame target in the kth frame, F1_{k3} the lower-right-corner x coordinate, F1_{k2} the upper-left-corner y coordinate, and F1_{k4} the lower-right-corner y coordinate; l_x and l_y are preset thresholds, 40 and 50 respectively; max(x) and min(x) are the maximum and minimum of the flame-target midpoint x coordinates over the 10 frames, and max(y) and min(y) the maximum and minimum of the midpoint y coordinates over the 10 frames.
(52) If the above formulas are satisfied, a flame alarm is issued.
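The persistence check of step (5) in sketch form, using the midpoint formulas above with the stated thresholds l_x = 40 and l_y = 50 over 10 frames:

```python
def should_alarm(f1_history, lx=40, ly=50, k_frames=10):
    """Step (5): alarm when the last 10 flame midpoints stay in one small area."""
    if len(f1_history) < k_frames:
        return False
    recent = f1_history[-k_frames:]                  # one (x1, y1, x2, y2) per frame
    xs = [(x1 + x2) / 2 for (x1, y1, x2, y2) in recent]   # midpoints x_k
    ys = [(y1 + y2) / 2 for (x1, y1, x2, y2) in recent]   # midpoints y_k
    return max(xs) - min(xs) < lx and max(ys) - min(ys) < ly
```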
Based on the same inventive concept, the embodiment of the invention discloses a flame detection and identification system based on an integrated deep network, mainly comprising the following modules:
an image preprocessing module: used for reading the video frame image and preprocessing and normalizing it;
a flame detection module: used for segmenting the normalized video frame image with the trained U-net model to obtain the flame-candidate-region binary image;
a neighborhood segmentation module: used for clustering and segmenting the flame-candidate-region binary image from the flame detection module according to the distance rule to obtain the flame-candidate-target binary images in the video frame, and then returning the flame candidate target images corresponding to those binary images in the original video frame;
a flame identification module: used for further identifying the flame candidate target images obtained by the neighborhood segmentation module with the trained ResNet-50 model, removing non-flame targets to obtain the flame targets, and storing the flame target positions;
a flame region visualization module: used for marking the corresponding flame targets in the original video image according to the flame target positions stored by the flame identification module, realizing the visualization of flame detection and tracking;
a flame alarm module: used for continuously monitoring the video and issuing a flame alarm to prompt the user when the number of frames in which the flame target's center position continuously appears in the same area reaches a preset threshold.
The system implements the above flame detection and identification method based on the integrated deep network; the two belong to the same inventive concept, and specific details can be found in the method embodiment and are not repeated here.
Based on the same inventive concept, the embodiment of the invention discloses a flame detection and identification system based on an integrated deep network, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the computer program realizes the flame detection and identification method based on the integrated deep network when being loaded to the processor.

Claims (8)

1. A flame detection and identification method based on an integrated deep network is characterized by comprising the following steps:
the method comprises the following steps of (1) acquiring a video frame image, and preprocessing and normalizing the video frame image;
detecting the normalized video frame image by using the trained U-net model, and finding out a binary image of a flame candidate region corresponding to the video frame image;
step (3) clustering and segmenting the binary image of the flame candidate region according to a distance rule by using a neighborhood segmentation method to obtain a flame candidate target binary image in a video frame;
step (4) finding a flame candidate target image corresponding to the flame candidate target binary image in the preprocessed video frame, further identifying the flame candidate target image by using a trained Resnet-50 model, removing a non-flame target to obtain a flame target, marking the flame target in the original video frame, and storing the position of the flame target;
step (5), when the number of frames in which the center position of the flame target continuously appears in the same area of the video stream reaches a set threshold, sending a flame alarm.
2. The integrated deep network-based flame detection and identification method according to claim 1, wherein the sample data set for training the U-net model and the Resnet-50 model is obtained by processing existing public flame video data, and specifically comprises:
extracting a video frame set from a video by using a frame taking method for an open flame video, and marking the position of flame by using an image marking tool for the video frame set to construct a label data set;
setting the flame position in the binary image to be 1 and the other parts to be 0 according to the label data set of the binary image corresponding to each video frame image in the training data set to form a U-net binary image label data set, and finally constructing the U-net data set by using the video frame set and the binary image label data set;
and reducing all video frame images in the video frame set by a certain proportion, setting corresponding supervision values of flame images in the video frame images as positive supervision values, setting other images as negative supervision values, constructing a ResNet-50 label data set, and constructing a ResNet-50 data set by using the video frame set and the ResNet-50 label data set.
3. The integrated depth network-based flame detection and identification method of claim 1, wherein the preprocessing in the step (1) comprises performing fisheye correction on the video frame image by using camera internal parameters including tangential error, radial error and optical center error.
4. The flame detection and identification method based on the integrated depth network as claimed in claim 1, wherein in the step (2), the video frame image with the normalized size is input to a trained U-net model, the normalized video frame image is detected to obtain a binary image of a flame candidate region corresponding to the video frame, the binary image of the flame candidate region sets the pixel value of the flame to 1, and the pixel value of a non-flame region is set to 0.
5. The integrated deep network-based flame detection and identification method according to claim 1, wherein the step (3) comprises the following steps:
(31) normalizing the size of the binary image of the flame candidate region;
(32) storing coordinate positions of all nonzero values in the binary image of the normalized flame candidate region in the image;
(33) classifying the stored coordinate positions so that any two classes of coordinates satisfy the following formula:

$$\sqrt{(Aa_{i1}-Ab_{j1})^2+(Aa_{i2}-Ab_{j2})^2}>l_0,\qquad \forall\,a,b\in A,\ a\neq b,\ i=1,\dots,n,\ j=1,\dots,m$$

wherein A represents the set of all classes obtained, Ab_{j1} represents the x-axis coordinate of the jth element in the class-b coordinates, Ab_{j2} the y-axis coordinate of the jth element in the class-b coordinates, Aa_{i1} the x-axis coordinate of the ith element in the class-a coordinates, Aa_{i2} the y-axis coordinate of the ith element in the class-a coordinates, l_0 a preset threshold, n the number of elements in Aa, and m the number of elements in Ab;
(34) taking the maximum value and the minimum value in each type of coordinates as indexes, and taking out sub-images of the binary images of the flame candidate areas;
(35) All sub-images are returned, wherein each sub-image element represents a flame candidate target binary image.
6. The integrated deep network-based flame detection and identification method according to claim 1, wherein the step (4) comprises the following steps:
(41) finding out a flame candidate target image corresponding to the flame candidate target binary image in the video frame;
(42) normalizing all the flame candidate target images to a uniform size;
(43) feeding the flame candidate targets into the trained ResNet-50 model for recognition, and removing non-flame targets according to confidence to obtain the flame targets;
(44) returning the positions of all flame targets;
(45) marking the positions of the returned flame regions in the video frame to realize the visualization of flame tracking.
7. A flame detection and identification system based on an integrated deep network is characterized by comprising:
an image preprocessing module: used for reading the video frame image and preprocessing and normalizing it;
a flame detection module: used for detecting the normalized video frame image with the trained U-net model to obtain the flame-candidate-region binary image;
a neighborhood segmentation module: used for clustering and segmenting the flame-candidate-region binary image with the neighborhood segmentation method according to the distance rule to obtain the flame-candidate-target binary images in the video frame, and then returning the flame candidate target images corresponding to those binary images in the original video frame;
a flame identification module: used for further identifying the flame candidate target images obtained by the neighborhood segmentation module with the trained ResNet-50 model, removing non-flame targets to obtain the flame targets, and storing the flame target positions;
a flame region visualization module: used for marking the corresponding flame targets in the original video image according to the flame target positions stored by the flame identification module, realizing the visualization of flame tracking;
and, a flame alarm module: used for continuously monitoring the video and issuing a flame alarm to prompt the user when the number of frames in which the flame target's center position continuously appears in the same area reaches a preset threshold.
8. An integrated deep network based flame detection and identification system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the computer program when loaded into the processor implements the integrated deep network based flame detection and identification method according to any of claims 1-6.
CN202010095917.XA 2020-02-17 2020-02-17 Flame detection and identification method and system based on integrated deep network Active CN111310662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010095917.XA CN111310662B (en) 2020-02-17 2020-02-17 Flame detection and identification method and system based on integrated deep network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010095917.XA CN111310662B (en) 2020-02-17 2020-02-17 Flame detection and identification method and system based on integrated deep network

Publications (2)

Publication Number Publication Date
CN111310662A CN111310662A (en) 2020-06-19
CN111310662B true CN111310662B (en) 2021-08-31

Family

ID=71145046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010095917.XA Active CN111310662B (en) 2020-02-17 2020-02-17 Flame detection and identification method and system based on integrated deep network

Country Status (1)

Country Link
CN (1) CN111310662B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738769B (en) * 2020-06-24 2024-02-20 湖南快乐阳光互动娱乐传媒有限公司 Video processing method and device
CN112329549A (en) * 2020-10-16 2021-02-05 国电大渡河枕头坝发电有限公司 Flame identification method for water power plant area
CN112633061B (en) * 2020-11-18 2023-03-24 淮阴工学院 Lightweight FIRE-DET flame detection method and system
CN112396026A (en) * 2020-11-30 2021-02-23 北京华正明天信息技术股份有限公司 Fire image feature extraction method based on feature aggregation and dense connection
CN113052226A (en) * 2021-03-22 2021-06-29 淮阴工学院 Time-sequence fire identification method and system based on single-step detector
CN113762162A (en) * 2021-09-08 2021-12-07 合肥中科类脑智能技术有限公司 Fire early warning method and system based on semantic segmentation and recognition
CN116152667B (en) * 2023-04-14 2023-06-30 英特灵达信息技术(深圳)有限公司 Fire detection method and device, electronic equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100862409B1 (en) * 2007-05-31 2008-10-08 대구대학교 산학협력단 The fire detection method using video image
KR101432440B1 (en) * 2013-04-29 2014-08-21 홍익대학교 산학협력단 Fire smoke detection method and apparatus
CN106250845A (en) * 2016-07-28 2016-12-21 北京智芯原动科技有限公司 Flame detecting method based on convolutional neural networks and device
CN106845410B (en) * 2017-01-22 2020-08-25 西安科技大学 Flame identification method based on deep learning model
CN108537215B (en) * 2018-03-23 2020-02-21 清华大学 Flame detection method based on image target detection
CN109190481B (en) * 2018-08-06 2021-11-23 中国交通通信信息中心 Method and system for extracting road material of remote sensing image
CN110414472A (en) * 2019-08-06 2019-11-05 湖南特致珈物联科技有限公司 A kind of multidimensional fire disaster intelligently detection system based on video

Also Published As

Publication number Publication date
CN111310662A (en) 2020-06-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200619

Assignee: Huai'an Haoran Network Technology Co.,Ltd.

Assignor: HUAIYIN INSTITUTE OF TECHNOLOGY

Contract record no.: X2021980015746

Denomination of invention: A flame detection and recognition method and system based on integrated depth network

Granted publication date: 20210831

License type: Common License

Record date: 20211227
