CN111797726A - Flame detection method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111797726A
Authority
CN
China
Prior art keywords
flame
image
area
region
processed
Prior art date
Legal status
Pending
Application number
CN202010561819.0A
Other languages
Chinese (zh)
Inventor
高美
潘华东
殷俊
张兴明
李中振
戚璇月瞳
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010561819.0A
Publication of CN111797726A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Fire-Detection Mechanisms (AREA)

Abstract

An embodiment of the present application provides a flame detection method, a flame detection device, an electronic device and a storage medium, for improving the accuracy of flame detection and reducing the probability of false alarms. The method comprises the following steps: obtaining foreground regions of at least two consecutive frames of images to be processed, the foreground regions being regions determined after color detection is performed on the images to be processed; obtaining dynamic features of the foreground regions, the dynamic features comprising a circularity feature, a profile roughness feature and a centroid displacement feature, wherein the circularity feature characterizes the regularity of the shape of the foreground region, the profile roughness feature characterizes the randomness and roughness of the foreground region, and the centroid displacement feature characterizes the displacement of the centroid of the foreground region across two adjacent frames of images to be processed; determining the probability that the foreground region is a first flame region according to the circularity feature, the profile roughness feature and the centroid displacement feature; and when the probability is greater than a preset threshold, determining the foreground region to be a first flame region.

Description

Flame detection method and device, electronic equipment and storage medium
Technical Field
The invention relates to the fields of flame detection, fire detection and video monitoring, in particular to a flame detection method and device, electronic equipment and a storage medium.
Background
Fire is a frequent disaster that seriously endangers people's lives and property. It breaks out suddenly and is difficult to fight; once a fire spreads because measures are not taken in time at its initial stage, it can cause enormous loss of life and property and irreparable damage to the natural environment. Early warning at the initial stage of a fire can effectively prevent it from spreading and minimize the harm it causes, so timely and effective flame recognition can greatly reduce the cost of fire prevention and control.
With the development of intelligent video surveillance systems, video-based flame detection technology has been applied to early fire detection. In actual use, however, video acquisition is affected by the environment: fire detection is often interfered with by highlights and lighting, leading to false detections and, in turn, false fire alarms.
Disclosure of Invention
The embodiment of the application provides a flame detection method, a flame detection device, electronic equipment and a storage medium, which are used for improving the accuracy of flame detection and reducing the false alarm probability.
In a first aspect, a method of flame detection is provided, the method comprising:
obtaining foreground areas of at least two continuous frames of images to be processed, wherein the foreground areas are areas determined after color detection is carried out on the images to be processed;
acquiring dynamic features of the foreground region, wherein the dynamic features comprise a circularity feature, a profile roughness feature and a centroid displacement feature, the circularity feature is used for representing regularity of the shape of the foreground region, the profile roughness feature is used for representing randomness and roughness of the foreground region, and the centroid displacement feature is used for representing centroid displacement of the foreground region on two adjacent frames of images to be processed;
determining the probability that the foreground region is a first flame region according to the circularity feature, the profile roughness feature and the centroid displacement feature;
and when the probability is larger than a preset threshold value, determining that the foreground area is a first flame area.
Optionally, the method further includes:
detecting whether a motion area exists in the at least two continuous frames of images to be processed by a time domain difference method;
when detecting that a motion area exists in the at least two continuous frames of images to be processed, determining the motion area as a second flame area;
determining a target flame region from the first flame region and the second flame region.
Optionally, determining a target flame region according to the first flame region and the second flame region includes:
judging whether the image to be processed corresponding to the first flame area and the image to be processed corresponding to the second flame area are the same frame of image or not;
when the image to be processed corresponding to the first flame area and the image to be processed corresponding to the second flame area are the same frame of image, determining that the first flame area or the second flame area is a target flame area on the image to be processed.
Optionally, the method further includes:
acquiring an image block corresponding to the minimum circumscribed rectangular area of the first flame area on the image to be processed;
identifying the image block through a preset flame identification model, and judging whether the image block comprises flame or not;
and if the image block is determined to comprise flames, determining that the first flame area is a target flame area.
Optionally, the method further includes:
and outputting the position coordinates of the image blocks in the image to be processed, and triggering an automatic alarm system to alarm the fire.
In a second aspect, there is provided a flame detection apparatus, the apparatus comprising:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring foreground areas of at least two continuous frames of images to be processed, and the foreground areas are areas determined after color detection is carried out on the images to be processed;
the acquisition module is further configured to acquire dynamic features of the foreground region, where the dynamic features include a circularity feature, a profile roughness feature and a centroid displacement feature, the circularity feature is used to characterize regularity of a shape of the foreground region, the profile roughness feature is used to characterize randomness and roughness of the foreground region, and the centroid displacement feature is used to characterize a centroid displacement amount of the foreground region on two adjacent frames of images to be processed;
a processing module for determining a probability that the foreground region is a first flame region based on the circularity feature, the profile roughness feature, and the centroid displacement feature;
the processing module is further configured to determine that the foreground region is a first flame region when the probability is greater than a preset threshold.
Optionally, the processing module is further configured to:
detecting whether a motion region exists in the at least two continuous frames of images by a time domain difference method;
when detecting that a motion area exists in the at least two continuous frames of images, determining the motion area as a second flame area;
determining a target flame region from the first flame region and the second flame region.
Optionally, the processing module is specifically configured to:
judging whether the image to be processed corresponding to the first flame area and the image to be processed corresponding to the second flame area are the same frame of image or not;
when the image to be processed corresponding to the first flame area and the image to be processed corresponding to the second flame area are the same frame of image, determining that the first flame area or the second flame area is a target flame area on the image to be processed.
Optionally, the obtaining module is further configured to:
acquiring an image block corresponding to the minimum circumscribed rectangular area of the first flame area on the image to be processed;
the processing module is further configured to identify an image block corresponding to the minimum circumscribed rectangular area of the first flame area on the image to be processed through a preset flame identification model after the acquisition module acquires the image block, and judge whether the image block includes flames; and if the image block is determined to comprise flames, determining that the first flame area is a target flame area.
Optionally, the processing module is further configured to:
and outputting the position coordinates of the image blocks in the image to be processed, and triggering an automatic alarm system to alarm the fire.
In a third aspect, an electronic device is provided, which includes:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory and executing the steps comprised in any of the methods of the first aspect according to the obtained program instructions.
In a fourth aspect, there is provided a computer-readable storage medium having stored thereon computer-executable instructions for causing a computer to perform the steps included in the method of any one of the first aspects.
In a fifth aspect, a computer program product containing instructions is provided, which when run on a computer causes the computer to perform the flame detection method described in the various possible implementations described above.
In the embodiment of the application, at least two continuous frames of images to be processed are obtained from images shot by a camera, color detection is carried out on the images to be processed, so that foreground regions in the at least two continuous frames of images to be processed are obtained, the circularity feature, the profile roughness feature and the centroid displacement feature of the foreground regions are extracted, the probability that the foreground regions are first flame regions is determined according to the circularity feature, the profile roughness feature and the centroid displacement feature, and when the probability is larger than a preset threshold value, the foreground regions are determined to be the first flame regions.
That is to say, the present application first obtains a foreground region that may include flames in each frame of at least two consecutive frames of images to be processed through static features of flames (for example, color features), then extracts dynamic features of each foreground region, and determines from those dynamic features whether the foreground region is a first flame region. In this way, the static and dynamic characteristics of the flame are fused while the association between preceding and following frames of the video is considered, which can effectively improve the accuracy of flame detection.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application.
FIG. 1 is a flow chart of a method for detecting a flame according to an embodiment of the present disclosure;
FIG. 2 is a block diagram of a flame detection device according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. In the present application, the embodiments and features of the embodiments may be arbitrarily combined with each other without conflict. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
The terms "first" and "second" in the description and claims of the present application and the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the term "comprises" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. "Plurality" in the present application may mean at least two, for example, two, three or more, and the embodiments of the present application are not limited in this respect.
In addition, the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" in this document generally indicates that the preceding and following associated objects are in an "or" relationship unless otherwise specified.
For ease of understanding, the technical background of the embodiments of the present invention will be described below.
At present, flame detection based on flame detection technology is prone to false detection. In one scheme, a video image is collected by a camera and processed using Gaussian filtering and image segmentation; the temperature of a monitored object is then obtained in real time with a laser thermometer, the measured temperature is compared with a preset flame temperature threshold, and when temperature comparison determines the monitored object to be a flame, the position of the flame is determined by combining the brightness information of the monitored object. The auxiliary laser thermometer must be aligned with the optical axis of the camera's field of view, so even a slight deviation between the two usually causes a large temperature measurement error. Moreover, when flames are detected only from the temperature and brightness of a monitored object, false detections caused by other light sources in complex environments are difficult to eliminate. For example, at midday the sunlight is intense and the ground temperature is high; if there is a deviation between the fields of view of the laser thermometer and the camera, the measured temperature of the monitored object is far higher than the actual temperature (that is, it reaches the preset flame temperature threshold), and because of the strong sunlight and high brightness, a non-flame object is easily identified as a flame, causing a false detection.
In another scheme, a flame data set and a deep convolutional network model are built; the model is trained on the flame data set, each frame of the video captured by the camera is then detected to judge whether a flame is present, and when a flame exists in the video, its coordinates in the video image are output. When the video images are detected by the pre-trained model alone, the motion characteristics of flames and the association between preceding and following frames are not considered, so the accuracy of the final detection result is low; meanwhile, because the entire image is detected with the deep convolutional network model, more data must be processed and more time is consumed.
In view of this, an embodiment of the present application provides a flame detection method which obtains, through color detection, the foreground regions of at least two consecutive frames of images to be processed together with the dynamic features of each foreground region, determines from those dynamic features the probability that each foreground region is a first flame region, and determines a foreground region to be a first flame region when its probability is greater than a preset threshold. Because the association between preceding and following frames of the video image and both the static and dynamic characteristics of flames are considered, the accuracy of flame detection is effectively improved and the probability of false alarms is reduced.
Having introduced the design concept of the embodiments of the present application, we briefly describe application scenarios to which the technical solution of the embodiments can be applied. It should be noted that the scenarios described below are only used to illustrate the embodiments of the present application and are not limiting. In specific implementation, the technical solution provided by the embodiments of the present application can be applied flexibly according to actual needs.
In the embodiments of the present application, the provided flame detection method and flame detection device can be applied to a video monitoring system, which comprises an image acquisition system and a data analysis system. The flame detection method provided by the present application can be executed by the image acquisition system in the video monitoring system, or the image acquisition system can send the images to the data analysis system, which executes the method. The image acquisition system includes a shooting device capable of capturing images, such as a camera, an electronic device with a camera function (for example, a mobile phone or tablet), or a monitoring device, and is used for capturing images of the monitored area.
The flame detection method provided by the embodiment of the application is described below with reference to the attached drawings in the specification. Referring to fig. 1, a flow of the flame detection method in the embodiment of the present application is described as follows:
step 101: and acquiring foreground areas of at least two continuous frames of images to be processed.
In the embodiment of the present application, at least two consecutive frames of images to be processed are obtained from the images captured by a camera, and color detection is then performed on them according to the color features of flame images to obtain the foreground region of each image to be processed. The at least two consecutive frames may be acquired, for example, starting from the first frame of the images captured by the camera, in which case every frame captured by the camera is treated as an image to be processed and subjected to foreground region extraction. Another mode is frame-skipping acquisition: for example, when three consecutive frames are needed, they may be acquired at intervals, or the captured video may be divided into segments of a fixed length (for example, five frames per segment), with three consecutive frames taken from the first segment and, after that, three consecutive frames taken from the third segment. In this way, when no flame is present in the video, the number of images to be processed can be effectively reduced, lowering the operating load of the device and extending its service life. It should be noted that the above manners of acquiring images to be processed are only exemplary, and the embodiment of the present application does not specifically limit how the images are acquired.
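The frame-skipping acquisition described above can be sketched as follows. This is a minimal Python illustration: the segment length of five frames matches the example in the text, while taking three frames from every second segment (the first, third, fifth, and so on) is an assumption made for demonstration.

```python
def sample_frames(num_frames, segment_len=5, frames_per_segment=3,
                  segment_step=2):
    """Split the frame stream into segments of `segment_len` frames and
    take the first `frames_per_segment` consecutive frames from every
    `segment_step`-th segment. Returns the selected frame indices."""
    picked = []
    # Jump ahead by segment_step segments at a time.
    for seg_start in range(0, num_frames, segment_len * segment_step):
        segment = range(seg_start, min(seg_start + segment_len, num_frames))
        picked.extend(list(segment)[:frames_per_segment])
    return picked

# From a 20-frame clip: frames 0-2 from segment 1, frames 10-12 from segment 3.
indices = sample_frames(20)
```

Only the selected frames would then undergo foreground extraction, which is how the scheme reduces the processing load when no flame is present.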
Common color features include RGB, HSI, Luv and Lab. In RGB, R denotes red, G denotes green and B denotes blue; in HSI, H denotes hue, S denotes saturation and I denotes intensity (brightness). In one possible implementation, the RGB value of each pixel in the image to be processed is obtained; pixels whose RGB color components all fall within preset ranges are determined to belong to the initial foreground region, while pixels with any color component outside its preset range are assigned to the initial background region; finally, each connected region within the initial foreground region is determined and taken as a foreground region. Alternatively, the HSI information of each pixel in the image to be processed may be obtained, with the foreground region determined in the same manner as for the RGB values, which is not repeated here.
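As a minimal sketch of the RGB-based color detection, the following Python snippet flags flame-colored pixels and builds a binary foreground mask. The specific thresholds (red at least 190, with red exceeding green and green exceeding blue) are illustrative assumptions; the text itself only states that each color component must lie within a preset range.

```python
def is_flame_color(r, g, b, r_min=190, rg_margin=0, gb_margin=0):
    """A common flame-color heuristic: red dominates, then green, then
    blue. The threshold values are assumptions for demonstration."""
    return r >= r_min and r > g + rg_margin and g > b + gb_margin

def extract_foreground(rgb_image):
    """Return a binary mask (nested lists of 0/1) for an RGB image given
    as rows of (r, g, b) tuples."""
    return [[1 if is_flame_color(*px) else 0 for px in row]
            for row in rgb_image]

image = [
    [(250, 160, 40), (30, 30, 30)],    # flame-like pixel, dark pixel
    [(200, 120, 60), (90, 140, 200)],  # flame-like pixel, sky-like pixel
]
mask = extract_foreground(image)
```

A real implementation would follow this with connected-component labeling so each connected region of the mask becomes one candidate foreground region.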
In another possible implementation, color detection of the image to be processed may be performed using at least two of the above color features. In the embodiment of the present application, color detection is performed with the two color features RGB and HSI: the image to be processed is first detected with the RGB color feature to determine a first foreground region, then detected with the HSI color feature to determine a second foreground region, and finally the two are compared. When the similarity between the first foreground region and the second foreground region is greater than a preset value and both correspond to the same image to be processed, the first and second foreground regions are fused to obtain the final foreground region. In this way, after color detection with at least two color features, the resulting detections are compared and fused, which can improve the accuracy of the initial flame region determination.
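The compare-and-fuse step for two color-detection results might look like the following sketch, where similarity is measured as intersection-over-union and fusion is taken to be the pixel-wise intersection. Both choices, and the 0.8 similarity threshold, are assumptions, since the text does not specify the similarity measure or the fusion rule.

```python
def mask_similarity(a, b):
    """Intersection-over-union of two equally sized binary masks."""
    inter = sum(x & y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    union = sum(x | y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return inter / union if union else 1.0

def fuse_masks(a, b, min_similarity=0.8):
    """Fuse two color-detection masks when they largely agree.
    Pixel-wise intersection is used as one plausible fusion rule."""
    if mask_similarity(a, b) < min_similarity:
        return None  # the two detections disagree; no reliable foreground
    return [[x & y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

rgb_mask = [[1, 1, 1, 1, 0]]  # from RGB color detection
hsi_mask = [[1, 1, 1, 1, 1]]  # from HSI color detection
fused = fuse_masks(rgb_mask, hsi_mask)
```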
Step 102: and acquiring the dynamic characteristics of the foreground area.
In embodiments of the present application, the dynamic features include a circularity feature, a profile roughness feature, and a centroid displacement feature. Each dynamic feature corresponds to a dynamic feature value, and the following describes the determination process of the dynamic feature value corresponding to each dynamic feature.
1. Considering that the shape of a flame is mostly circular or elliptical, the dynamic features include a circularity feature (or, analogously, an elliptical feature; circularity is used as the main example here), which characterizes the regularity of the shape of the foreground region. In one possible implementation, the dynamic feature value corresponding to the circularity feature is calculated as:

C_K = 4πA_K / L_K², K = 1, 2, …, n

where L_K, C_K and A_K respectively denote the perimeter, circularity and area of the K-th region, and n denotes the number of regions; the maximum circularity value obtained by this calculation is 1, attained by a perfect circle. A smaller value of C_K indicates a more irregular foreground region shape, and hence a greater probability that the foreground region is a flame region.
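A small Python sketch of the circularity computation, using the standard 4πA/L² form consistent with the properties stated above (maximum value 1, lower for irregular shapes); the example region measurements are illustrative:

```python
import math

def circularity(area, perimeter):
    """C_K = 4*pi*A_K / L_K**2: equals 1 for a perfect circle and
    decreases as the region's shape becomes more irregular."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def mean_circularity(regions):
    """Average circularity over n candidate regions, each given as an
    (area, perimeter) pair."""
    return sum(circularity(a, l) for a, l in regions) / len(regions)

# A circle of radius 10 is maximally regular:
c_circle = circularity(math.pi * 100, 2 * math.pi * 10)
# A long thin 1x100 strip (area 100, perimeter 202) is far less circular:
c_strip = circularity(100, 202)
```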
2. The profile roughness feature characterizes the randomness and roughness of the foreground region. As an example, assume the convex hull perimeter of the foreground region is L_C and the flame perimeter is L. In one possible implementation, the dynamic feature value corresponding to the profile roughness feature is calculated as:

B_r = L_C / L

where the convex hull perimeter L_C is the perimeter of the minimum circumscribed convex polygon of the foreground region and the flame perimeter L is the actual contour perimeter of the foreground region; since the convex hull is never longer than the contour it encloses, the maximum value of the profile roughness is 1. A smaller value of B_r means the foreground region is more likely to be a flame region, that is, the probability that the foreground region is a flame region is greater.
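The profile roughness ratio can be computed directly once the two perimeters are known; the example values below are illustrative:

```python
def profile_roughness(hull_perimeter, contour_perimeter):
    """B_r = L_C / L: ratio of convex-hull perimeter to actual contour
    perimeter. At most 1 (a convex contour); a jagged flame contour is
    longer than its hull, so B_r drops well below 1."""
    return hull_perimeter / contour_perimeter

# A convex blob: hull and contour coincide.
smooth = profile_roughness(40.0, 40.0)
# A jagged flame-like contour is much longer than its convex hull.
jagged = profile_roughness(40.0, 90.0)
```

In practice L_C and L would come from a contour extractor such as OpenCV's convex hull and arc-length routines applied to the foreground mask.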
3. The centroid displacement feature characterizes the displacement of the centroid of the foreground region across two adjacent frames of images to be processed. As an example, assume the coordinates of the j-th pixel in the foreground region corresponding to the i-th frame image are (x_j, y_j), with j = 0, 1, 2, …, m-1; the centroid coordinates (x_i, y_i) of that foreground region are then:

x_i = (1/m) Σ_{j=0}^{m-1} x_j,  y_i = (1/m) Σ_{j=0}^{m-1} y_j

If the centroid coordinates of the foreground region corresponding to the (i-1)-th frame image are (x_{i-1}, y_{i-1}), then in one possible implementation the feature value corresponding to the centroid displacement feature of the region in the i-th frame image is:

D_i = √((x_i - x_{i-1})² + (y_i - y_{i-1})²)

A larger centroid displacement indicates the foreground region is more likely to be a flame region, that is, the probability that the foreground region is a flame region is greater.
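The centroid and centroid displacement computations above can be sketched as follows; the example regions are small synthetic pixel sets chosen so the displacement works out to a round number:

```python
import math

def centroid(pixels):
    """Mean of the m pixel coordinates (x_j, y_j) in a foreground region."""
    m = len(pixels)
    return (sum(x for x, _ in pixels) / m, sum(y for _, y in pixels) / m)

def centroid_displacement(c_prev, c_curr):
    """Euclidean distance between the region's centroids in frames
    i-1 and i."""
    return math.hypot(c_curr[0] - c_prev[0], c_curr[1] - c_prev[1])

region_prev = [(0, 0), (2, 0), (0, 2), (2, 2)]  # centroid (1, 1)
region_curr = [(3, 4), (5, 4), (3, 6), (5, 6)]  # centroid (4, 5)
d = centroid_displacement(centroid(region_prev), centroid(region_curr))
```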
In a specific implementation process, because the flame moves all the time, the accuracy of flame detection can be effectively improved by acquiring the dynamic characteristics of the foreground area.
Step 103: and determining the probability that the foreground region is the first flame region according to the circularity feature, the profile roughness feature and the centroid displacement feature.
As mentioned above, when the dynamic features of the foreground region are obtained, a corresponding dynamic feature value is obtained for each dynamic feature (see step 102), and then the probability that the foreground region may be a flame region may be determined according to the magnitude of each dynamic feature value.
In the embodiment of the present application, the probabilities corresponding to the individual dynamic features are weight-fused to obtain the probability that the foreground region is the first flame region. For example, suppose the probability that the foreground region is a flame region is calculated as 94% from the circularity feature, which carries a weight of 35%; as 92% from the profile roughness feature, which carries a weight of 35%; and as 95% from the centroid displacement feature, which carries a weight of 30%. The probability that the foreground region is the first flame region is then 94% × 35% + 92% × 35% + 95% × 30% = 93.6%.
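The weighted fusion of the worked example can be reproduced in a few lines of Python; the feature probabilities and weights below are the ones from the example, not values prescribed by the method:

```python
def flame_probability(feature_probs, weights):
    """Weighted fusion of per-feature flame probabilities.
    The weights are expected to sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(p * w for p, w in zip(feature_probs, weights))

# Circularity 94%, profile roughness 92%, centroid displacement 95%,
# with weights 35% / 35% / 30% as in the worked example.
p = flame_probability([0.94, 0.92, 0.95], [0.35, 0.35, 0.30])
```

With a preset threshold of 90%, this region would be declared a first flame region, since 93.6% exceeds the threshold.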
Step 104: and when the probability is larger than a preset threshold value, determining the foreground area as a first flame area.
Taking the probability that the foreground region determined in the foregoing step is the first flame region as an example, if the preset threshold is 90%, at this time, since 93.6% > 90%, the foreground region is determined to be the first flame region.
The above embodiments describe the process of determining whether a foreground region on an image to be processed is a first flame region. The following embodiments provide ways of verifying whether the determined first flame region is accurate.
In a first possible implementation, a flame detection process of a different kind is performed on the at least two consecutive frames of images to be processed acquired in step 101. For example, whether a motion region exists in the at least two frames may be detected by a time domain difference method, and when a motion region is found, that region is determined to be a second flame region. In this way, candidate flame regions, namely a first flame region and a second flame region, are determined by two different detection modes, and a target flame region can then be determined from the first flame region and the second flame region.
In the temporal difference method, three consecutive frames are first selected from the video image sequence and the difference images of each pair of adjacent frames are computed. Each difference image is then binarized with a suitable threshold, and finally a logical AND operation is performed on the two binarized images at each pixel to keep their common part, yielding the contour information of the moving target; this moving-target contour is the second flame region. Obtaining the motion region of at least two consecutive frames with the temporal difference method effectively eliminates the influence of background exposed by the motion, so an accurate motion region is extracted.
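The three-frame temporal difference just described can be sketched on tiny grayscale frames represented as plain Python lists. The threshold value and function name below are assumptions for illustration, not values from the patent.

```python
# Three-frame temporal difference: AND of the two thresholded adjacent-frame diffs.
def temporal_difference(f1, f2, f3, thresh=20):
    """Return a binary motion mask for three consecutive grayscale frames."""
    h, w = len(f1), len(f1[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d1 = abs(f2[y][x] - f1[y][x]) > thresh   # binarized diff of frames 1 and 2
            d2 = abs(f3[y][x] - f2[y][x]) > thresh   # binarized diff of frames 2 and 3
            mask[y][x] = 1 if (d1 and d2) else 0     # logical AND keeps the common part
    return mask

# A bright blob moving one pixel right across three 1x4 frames:
f1 = [[0, 200, 0, 0]]
f2 = [[0, 0, 200, 0]]
f3 = [[0, 0, 0, 200]]
print(temporal_difference(f1, f2, f3))  # → [[0, 0, 1, 0]]
```

Only the pixel that changed in both adjacent diffs survives the AND, which is how the background newly exposed by motion is suppressed.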
In some embodiments, the target flame region may be determined from the first flame region and the second flame region as follows: judge whether the image to be processed corresponding to the first flame region and the image to be processed corresponding to the second flame region are the same frame, and when they are, determine the first flame region or the second flame region to be the target flame region on the image to be processed. Performing an AND operation on the initial flame regions determined by the two modes (the first flame region and the second flame region) thus effectively improves the accuracy of target flame region detection.
In a second possible implementation, after the first flame region is determined in step 104, the image block corresponding to the minimum circumscribed rectangle of the first flame region on the image to be processed may be obtained. The data of this image block are then input into a preset flame recognition model, which identifies the block and judges whether it contains flame; if it does, the first flame region is determined to be the target flame region. The preset flame recognition model is obtained by training a deep convolutional neural network on a collected flame data set.
By obtaining the image block corresponding to the minimum circumscribed rectangle of the first flame region in the image to be processed, and inputting only the data of that block into the preset flame recognition model, the running time of the algorithm is effectively reduced. Meanwhile, verifying the determined first flame region with the preset flame recognition model effectively improves the accuracy of flame recognition.
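Extracting the minimum circumscribed rectangle and cropping the corresponding image block can be sketched as below on a pure-Python binary mask. The function names are hypothetical; the patent does not specify an implementation.

```python
# Minimum circumscribed (axis-aligned bounding) rectangle of a region mask,
# plus the crop that would be fed to the recognition model.
def bounding_rect(mask):
    """Return (x, y, w, h) of the smallest rectangle enclosing all 1-pixels."""
    coords = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1

def crop_block(image, rect):
    """Cut out the rectangular image block described by rect."""
    x, y, w, h = rect
    return [row[x:x + w] for row in image[y:y + h]]

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
print(bounding_rect(mask))  # → (1, 1, 2, 2)
```

Classifying only this small block, rather than the whole frame, is what keeps the model inference cheap.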
The first and second modes can be used alone or in combination. When they are used in combination, a specific implementation may proceed in one of two ways:
Mode A: an AND operation is first performed on the first flame region and the second flame region to obtain a target flame region; the image block corresponding to the minimum circumscribed rectangle of the target flame region in the image to be processed is obtained; the data of this block are then input into the preset flame recognition model, which judges whether the block contains flame; if it does, the target flame region is determined to be the final flame region. Performing three flame checks on the obtained foreground region in this way effectively improves flame detection accuracy and reduces the probability of false detection.
Mode B: flame recognition is first performed, with the preset flame recognition model, on the image blocks corresponding to the minimum circumscribed rectangles of the first flame region and the second flame region in the image to be processed; it is then judged whether the images to be processed corresponding to the first and second flame regions that pass the recognition are the same frame; if they are, the first flame region or the second flame region is determined to be the final target flame region of the image to be processed.
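Combination mode A can be sketched as a short pipeline over binary masks. The `classify` callable below is a hypothetical stand-in for the preset flame recognition model, and all names are illustrative assumptions.

```python
# Mode A sketch: AND the two candidate masks, then let a classifier confirm.
def mode_a(first_mask, second_mask, classify):
    """Return the final flame mask, or None if there is no confirmed flame."""
    h, w = len(first_mask), len(first_mask[0])
    # AND operation on the first and second flame regions -> target flame region.
    target = [[first_mask[y][x] & second_mask[y][x] for x in range(w)] for y in range(h)]
    if not any(any(row) for row in target):
        return None                      # no overlap -> no flame candidate at all
    return target if classify(target) else None

first = [[1, 1, 0], [0, 1, 0]]
second = [[0, 1, 0], [0, 1, 1]]
always_flame = lambda block: True        # stand-in for the CNN recognition model
print(mode_a(first, second, always_flame))  # → [[0, 1, 0], [0, 1, 0]]
```

Mode B differs only in ordering: classify each candidate block first, then keep a region only when both confirmed regions belong to the same frame.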
After the first flame region has been verified as the target flame region in any of the above ways, the position coordinates of the minimum circumscribed rectangle of the target flame region in the image to be processed are output, and an automatic alarm system is triggered to raise a fire alarm. The alarm may be triggered by sending a fire alarm signal to a control center, where a worker, upon receiving the signal, calls up the monitoring system to confirm and, once flame is confirmed, dials the fire alarm number. Alternatively, the automatic alarm system of the monitoring system may be linked directly to the fire alarm, so that the system alarms immediately once flame is determined to exist.
Based on the same inventive concept, the embodiment of the application provides a flame detection device, and the flame detection device can realize the corresponding functions of the flame detection method. The flame detection device may be a hardware structure, a software module, or a hardware structure plus a software module. The flame detection device can be realized by a chip system, and the chip system can be formed by a chip and can also comprise the chip and other discrete devices. Referring to fig. 2, the flame detection device includes an acquisition module 201 and a processing module 202. Wherein:
an obtaining module 201, configured to obtain foreground regions of at least two consecutive frames of images to be processed, where the foreground regions are regions determined after color detection is performed on the images to be processed;
the obtaining module 201 is further configured to obtain dynamic features of the foreground region, where the dynamic features include a circularity feature, a profile roughness feature, and a centroid displacement feature, the circularity feature is used to represent regularity of a shape of the foreground region, the profile roughness feature is used to represent randomness and roughness of the foreground region, and the centroid displacement feature is used to represent a centroid displacement amount of the foreground region on two adjacent frames of images to be processed;
a processing module 202, configured to determine a probability that the foreground region is a first flame region according to the circularity feature, the profile roughness feature, and the centroid displacement feature;
the processing module 202 is further configured to determine that the foreground region is a first flame region when the probability is greater than a preset threshold.
Optionally, the processing module 202 is further configured to:
detecting whether a motion region exists in the at least two continuous frames of images by a time domain difference method;
when detecting that a motion area exists in the at least two continuous frames of images, determining the motion area as a second flame area;
determining a target flame region from the first flame region and the second flame region.
Optionally, the processing module 202 is specifically configured to:
judging whether the image to be processed corresponding to the first flame area and the image to be processed corresponding to the second flame area are the same frame of image or not;
when the image to be processed corresponding to the first flame area and the image to be processed corresponding to the second flame area are the same frame of image, determining that the first flame area or the second flame area is a target flame area on the image to be processed.
Optionally, the obtaining module 201 is further configured to:
acquiring an image block corresponding to the minimum circumscribed rectangular area of the first flame area on the image to be processed;
the processing module 202 is further configured to, after the obtaining module 201 obtains an image block corresponding to the minimum circumscribed rectangular area of the first flame area on the image to be processed, identify the image block through a preset flame identification model, and determine whether the image block includes flames; and if the image block is determined to comprise flames, determining that the first flame area is a target flame area.
Optionally, the processing module 202 is further configured to:
and outputting the position coordinates of the image blocks in the image to be processed, and triggering an automatic alarm system to alarm the fire.
For all relevant details of the steps involved in the foregoing embodiments of the flame detection method, reference may be made to the functional descriptions of the corresponding modules of the flame detection device in the embodiments of the present application; they are not repeated here.
The division of modules in the embodiments of the present application is schematic and is merely a division by logical function; other divisions are possible in actual implementation. In addition, the functional modules in the embodiments of the present application may be integrated into one processor, may each exist physically alone, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or as a software functional module.
Based on the same inventive concept, an embodiment of the present application provides an electronic device. Referring to fig. 3, the electronic device includes at least one processor 301 and a memory 302 connected to the at least one processor. The specific connection medium between the processor 301 and the memory 302 is not limited in this embodiment; in fig. 3 they are connected through a bus 300, drawn as a thick line, by way of example, and the connections between other components are likewise only schematic and not limiting. The bus 300 may be divided into an address bus, a data bus, a control bus, and so on; it is drawn as a single thick line in fig. 3 for ease of illustration, but this does not mean there is only one bus or one type of bus.
In the embodiment of the present application, the memory 302 stores instructions executable by the at least one processor 301, and the at least one processor 301 may execute the steps included in the foregoing flame detection method by executing the instructions stored in the memory 302.
The processor 301 is a control center of the electronic device, and may connect various portions of the electronic device through various interfaces and lines, and perform various functions and process data of the electronic device by operating or executing instructions stored in the memory 302 and calling data stored in the memory 302, thereby performing overall monitoring on the electronic device. Optionally, the processor 301 may include one or more processing units, and the processor 301 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, application programs, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 301. In some embodiments, the processor 301 and the memory 302 may be implemented on the same chip, or in some embodiments, they may be implemented separately on separate chips.
The processor 301 may be a general-purpose processor, such as a Central Processing Unit (CPU), digital signal processor, application specific integrated circuit, field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like, that may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the flame detection method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor.
The memory 302, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 302 may include at least one type of storage medium, for example a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and so on. The memory 302 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 302 in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
By programming the processor 301, the code corresponding to the flame detection method described in the foregoing embodiment may be fixed in the chip, so that the chip can perform the steps of the flame detection method when operating, and how to program the processor 301 is a technique known to those skilled in the art and will not be described herein again.
Based on the same inventive concept, embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed on a computer, cause the computer to perform the steps of the flame detection method as described above.
In some possible embodiments, the various aspects of the flame detection method provided herein may also be implemented in the form of a program product comprising program code for causing a detection apparatus to perform the steps of the flame detection method according to various exemplary embodiments of the present application described above in this specification, when the program product is run on an electronic device.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method of flame detection, the method comprising:
obtaining foreground areas of at least two continuous frames of images to be processed, wherein the foreground areas are areas determined after color detection is carried out on the images to be processed;
acquiring dynamic features of the foreground region, wherein the dynamic features comprise a circularity feature, a profile roughness feature and a centroid displacement feature, the circularity feature is used for representing regularity of the shape of the foreground region, the profile roughness feature is used for representing randomness and roughness of the foreground region, and the centroid displacement feature is used for representing centroid displacement of the foreground region on two adjacent frames of images to be processed;
determining the probability that the foreground region is a first flame region according to the circularity feature, the profile roughness feature and the centroid displacement feature;
and when the probability is larger than a preset threshold value, determining that the foreground area is a first flame area.
2. The method of claim 1, wherein the method further comprises:
detecting whether a motion area exists in the at least two continuous frames of images to be processed by a time domain difference method;
when detecting that a motion area exists in the at least two continuous frames of images to be processed, determining the motion area as a second flame area;
determining a target flame region from the first flame region and the second flame region.
3. The method of claim 2, wherein determining a target flame region from the first flame region and the second flame region comprises:
judging whether the image to be processed corresponding to the first flame area and the image to be processed corresponding to the second flame area are the same frame of image or not;
when the image to be processed corresponding to the first flame area and the image to be processed corresponding to the second flame area are the same frame of image, determining that the first flame area or the second flame area is a target flame area on the image to be processed.
4. The method of claim 1, wherein the method further comprises:
acquiring an image block corresponding to the minimum circumscribed rectangular area of the first flame area on the image to be processed;
identifying the image block through a preset flame identification model, and judging whether the image block comprises flame or not;
and if the image block is determined to comprise flames, determining that the first flame area is a target flame area.
5. The method of claim 4, wherein the method further comprises:
and outputting the position coordinates of the image blocks in the image to be processed, and triggering an automatic alarm system to alarm the fire.
6. A flame detection device, the device comprising:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring foreground areas of at least two continuous frames of images to be processed, and the foreground areas are areas determined after color detection is carried out on the images to be processed;
the acquisition module is further configured to acquire dynamic features of the foreground region, where the dynamic features include a circularity feature, a profile roughness feature and a centroid displacement feature, the circularity feature is used to characterize regularity of a shape of the foreground region, the profile roughness feature is used to characterize randomness and roughness of the foreground region, and the centroid displacement feature is used to characterize a centroid displacement amount of the foreground region on two adjacent frames of images to be processed;
a processing module for determining a probability that the foreground region is a first flame region based on the circularity feature, the profile roughness feature, and the centroid displacement feature;
the processing module is further configured to determine that the foreground region is a first flame region when the probability is greater than a preset threshold.
7. The apparatus of claim 6, wherein the processing module is further configured to:
detecting whether a motion region exists in the at least two continuous frames of images by a time domain difference method;
when detecting that a motion area exists in the at least two continuous frames of images, determining the motion area as a second flame area;
determining a target flame region from the first flame region and the second flame region.
8. The apparatus of claim 7, wherein the processing module is specifically configured to:
judging whether the image to be processed corresponding to the first flame area and the image to be processed corresponding to the second flame area are the same frame of image or not;
when the image to be processed corresponding to the first flame area and the image to be processed corresponding to the second flame area are the same frame of image, determining that the first flame area or the second flame area is a target flame area on the image to be processed.
9. An electronic device, comprising:
a memory for storing program instructions;
a processor for calling program instructions stored in said memory and for executing the steps comprised by the method of any one of claims 1 to 5 in accordance with the obtained program instructions.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a computer, cause the computer to perform the method according to any one of claims 1-5.
CN202010561819.0A 2020-06-18 2020-06-18 Flame detection method and device, electronic equipment and storage medium Pending CN111797726A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010561819.0A CN111797726A (en) 2020-06-18 2020-06-18 Flame detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111797726A true CN111797726A (en) 2020-10-20

Family

ID=72803532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010561819.0A Pending CN111797726A (en) 2020-06-18 2020-06-18 Flame detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111797726A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821446A (en) * 2022-05-17 2022-07-29 江苏三棱智慧物联发展股份有限公司 Smoke and fire identification method and device, electronic equipment and storage medium
CN115359247A (en) * 2022-08-30 2022-11-18 新创碳谷控股有限公司 Flame detection method and device based on dynamic characteristics and storage medium
CN115394040A (en) * 2022-08-30 2022-11-25 新创碳谷控股有限公司 Flame detection method, computer equipment and storage medium
CN115713833A (en) * 2022-08-30 2023-02-24 新创碳谷集团有限公司 Flame detection method and device based on area characteristics and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408745A (en) * 2014-11-18 2015-03-11 北京航空航天大学 Real-time smog scene detection method based on video image
KR20160136948A (en) * 2015-05-21 2016-11-30 주식회사 에스원 Part-based flame detection device and method
CN108074234A (en) * 2017-12-22 2018-05-25 湖南源信光电科技股份有限公司 A kind of large space flame detecting method based on target following and multiple features fusion
CN109886227A (en) * 2019-02-27 2019-06-14 哈尔滨工业大学 Inside fire video frequency identifying method based on multichannel convolutive neural network
CN110263654A (en) * 2019-05-23 2019-09-20 深圳市中电数通智慧安全科技股份有限公司 A kind of flame detecting method, device and embedded device
CN110516609A (en) * 2019-08-28 2019-11-29 南京邮电大学 A kind of fire video detection and method for early warning based on image multiple features fusion

Similar Documents

Publication Publication Date Title
CN111797726A (en) Flame detection method and device, electronic equipment and storage medium
CN109325964B (en) Face tracking method and device and terminal
CN108256404B (en) Pedestrian detection method and device
CN109726620B (en) Video flame detection method and device
CN109086734B (en) Method and device for positioning pupil image in human eye image
CN109766779B (en) Loitering person identification method and related product
CN107437318B (en) Visible light intelligent recognition algorithm
CN109409238B (en) Obstacle detection method and device and terminal equipment
CN108009466B (en) Pedestrian detection method and device
CN109670383B (en) Video shielding area selection method and device, electronic equipment and system
CN113255606A (en) Behavior recognition method and device, computer equipment and storage medium
CN108734684B (en) Image background subtraction for dynamic illumination scene
CN112733629A (en) Abnormal behavior judgment method, device, equipment and storage medium
CN112733814A (en) Deep learning-based pedestrian loitering retention detection method, system and medium
CN110569770A (en) Human body intrusion behavior recognition method and device, storage medium and electronic equipment
CN113240880A (en) Fire detection method and device, electronic equipment and storage medium
CN111881741A (en) License plate recognition method and device, computer equipment and computer-readable storage medium
CN113989858A (en) Work clothes identification method and system
CN113065454B (en) High-altitude parabolic target identification and comparison method and device
CN112861676B (en) Smoke and fire identification marking method, system, terminal and storage medium
CN111753587A (en) Method and device for detecting falling to ground
CN113505643A (en) Violation target detection method and related device
CN111814617B (en) Fire determination method and device based on video, computer equipment and storage medium
CN116778673A (en) Water area safety monitoring method, system, terminal and storage medium
CN115984780A (en) Industrial solid waste warehouse-in and warehouse-out distinguishing method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination